src (string, lengths 100–132k) | tgt (string, lengths 10–710) | paper_id (string, lengths 3–9) | title (string, lengths 9–254) | discipline (dict)
---|---|---|---|---
Abstract-With the advent of Cloud computing, large-scale virtualized compute and data centers are becoming common in the computing industry. These distributed systems leverage commodity server hardware in mass quantity, similar in theory to many of the fastest Supercomputers in existence today. However, these systems can consume a city's worth of power just to run idle, and require equally massive cooling systems to keep the servers within normal operating temperatures. This produces CO2 emissions and significantly contributes to the growing environmental issue of Global Warming. Green computing, a new trend for high-end computing, attempts to alleviate this problem by delivering both high performance and reduced power consumption, effectively maximizing total system efficiency. This paper focuses on scheduling virtual machines in a compute cluster to reduce power consumption via the technique of Dynamic Voltage Frequency Scaling (DVFS). Specifically, we present the design and implementation of an efficient scheduling algorithm to allocate virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages. The algorithm is studied via simulation and implementation in a multi-core cluster. Test results and performance discussion justify the design and implementation of the scheduling algorithm. | In REF, they use the technique of Dynamic Voltage Frequency Scaling (DVFS) to save CPU energy consumption in cluster scheduling. | 12231480 | Power-aware scheduling of virtual machines in DVFS-enabled clusters | {
"venue": "2009 IEEE International Conference on Cluster Computing and Workshops",
"journal": "2009 IEEE International Conference on Cluster Computing and Workshops",
"mag_field_of_study": [
"Computer Science"
]
} |
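
The abstract above describes DVFS-based, power-aware VM scheduling only at a high level. The sketch below illustrates the general idea of matching a host's operating frequency to its aggregate VM demand; the power assumption (dynamic power growing roughly with f·V²), the greedy placement rule, and all names (`Host`, `pick_frequency`, `schedule_vm`) are illustrative assumptions, not the algorithm from the cited paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Host:
    """A DVFS-capable host with a discrete set of operating frequencies (GHz)."""
    available_freqs: List[float]                      # e.g. [1.0, 1.4, 1.8, 2.2, 2.6]
    capacity_per_ghz: float = 1.0                     # normalized CPU capacity delivered per GHz
    vm_demands: List[float] = field(default_factory=list)  # CPU demand of hosted VMs

    def pick_frequency(self) -> float:
        """Choose the lowest frequency that still covers the aggregate VM demand.
        Dynamic power grows roughly with f * V^2, so running at the lowest
        sufficient frequency saves energy while still meeting the workload."""
        demand = sum(self.vm_demands)
        for f in sorted(self.available_freqs):
            if f * self.capacity_per_ghz >= demand:
                return f
        return max(self.available_freqs)              # saturated: run at full speed

def schedule_vm(hosts: List[Host], vm_demand: float) -> Host:
    """Place a VM on the host whose required frequency (hence power) increases the least."""
    def extra_freq(h: Host) -> float:
        before = h.pick_frequency()
        h.vm_demands.append(vm_demand)
        after = h.pick_frequency()
        h.vm_demands.pop()
        return after - before
    best = min(hosts, key=extra_freq)
    best.vm_demands.append(vm_demand)
    return best

if __name__ == "__main__":
    cluster = [Host([1.0, 1.4, 1.8, 2.2, 2.6]) for _ in range(3)]
    for d in [0.5, 0.9, 0.3, 1.2]:
        schedule_vm(cluster, d)
    print([h.pick_frequency() for h in cluster])      # e.g. [1.0, 1.0, 1.4]
```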
With the large volume of new information created every day, determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners. To address this challenge, a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings. Although researchers continue to improve these models using an increasingly complex feature space, we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering. In this work, we present a shared variable neural network model called ProjE that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph's entities and edges, and through subtle, but important, changes to the standard loss function. In doing so, ProjE has a parameter size that is smaller than 11 out of 15 existing methods while performing 37% better than the current-best method on standard datasets. We also show, via a new fact checking task, that ProjE is capable of accurately determining the veracity of many declarative statements. Knowledge Graphs (KGs) have become a crucial resource for many tasks in machine learning, data mining, and artificial intelligence applications including question answering [34], entity disambiguation [7], named entity linking [14], fact checking [32], and link prediction [28] to name a few. In our view, KGs are an example of a heterogeneous information network containing entity-nodes and relationship-edges corresponding to RDF-style triples ⟨h, r, t⟩, where h represents a head entity, and r is a relationship that connects h to a tail entity t. KGs are widely used for many practical tasks, however, their correctness and completeness are not guaranteed. Therefore, it is necessary to develop knowledge graph completion (KGC) methods to find missing or errant relationships with the goal of improving the general quality of KGs, which, in turn, can be used to improve or create interesting downstream applications. The KGC task can be divided into two non-mutually exclusive sub-tasks: (i) entity prediction and (ii) relationship prediction. The entity prediction task takes a partial triple ⟨h, r, ?⟩ as input and produces a ranked list of candidate entities as output: Definition 1. (Entity Ranking Problem) Given a Knowledge Graph G = {E, R} and an input triple ⟨h, r, ?⟩, the entity ranking problem attempts to find the optimal ordered list such that ∀e_j ∀e_i ((e_j ∈ E⁻ ∧ e_i ∈ E⁺) → e_i ≺ e_j), where E⁺ = {e ∈ {e_1, e_2, . . . , e_l} | ⟨h, r, e⟩ ∈ G} and E⁻ = {e ∈ {e_{l+1}, e_{l+2}, . . . , e_{|E|}} | ⟨h, r, e⟩ ∉ G}. Distinguishing between head and tail-entities is usually arbitrary, so we can easily substitute ⟨h, r, ?⟩ for ⟨?, r, t⟩. The relationship prediction task aims to find a ranked list of relationships that connect a head-entity with a tail-entity, i.e., ⟨h, ?, t⟩. When discussing the details of the present work, we focus specifically on the entity prediction task; however, it is straightforward to adapt the methodology to the relationship prediction task by changing the input. A number of KGC algorithms have been developed in recent years, and the most successful models all have one thing in common: they use low-dimensional embedding vectors to represent entities and relationships.
Many embedding models, e.g., Unstructured [3], TransE [4], TransH [35], and TransR [25], use a margin-based pairwise ranking loss function, which measures the score of each possible result as the L_n-distance between h + r and t. In these models the loss functions are all the same, so models differ in how they transform the | ProjE REF uses a simple but effective shared variable neural network. | 18367155 | ProjE: Embedding Projection for Knowledge Graph Completion | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
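
The excerpt above notes that translation-based models such as TransE score a triple by the L_n distance between h + r and t and train with a margin-based pairwise ranking loss. A minimal numpy sketch of that baseline loss follows; it is not ProjE's shared-variable architecture or its modified loss, and the embedding dimension and margin value here are arbitrary.

```python
import numpy as np

def transe_score(h, r, t, order=1):
    """TransE-style score: the L_n distance between (h + r) and t. Lower = more plausible."""
    return np.linalg.norm(h + r - t, ord=order)

def margin_ranking_loss(pos_triples, neg_triples, margin=1.0, order=1):
    """Margin-based pairwise ranking loss over (positive, corrupted) triple pairs:
    sum of max(0, margin + score(pos) - score(neg))."""
    loss = 0.0
    for (h, r, t), (h_n, r_n, t_n) in zip(pos_triples, neg_triples):
        loss += max(0.0, margin + transe_score(h, r, t, order)
                          - transe_score(h_n, r_n, t_n, order))
    return loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    h, r = rng.normal(size=dim), rng.normal(size=dim)
    t = h + r + 0.01 * rng.normal(size=dim)    # a near-perfect positive triple
    t_neg = rng.normal(size=dim)               # a corrupted tail entity
    print(margin_ranking_loss([(h, r, t)], [(h, r, t_neg)]))
```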
The "Disaggregated Server" concept has been proposed for datacenters where the same type server resources are aggregated in their respective pools, for example a compute pool, memory pool, network pool, and a storage pool. Each server is constructed dynamically by allocating the right amount of resources from these pools according to the workload's requirements. Modularity, higher packaging and cooling efficiencies, and higher resource utilization are among the suggested benefits. With the emergence of very large datacenters, "clouds" containing tens of thousands of servers, datacenter efficiency has become an important topic. Few computer chip and systems vendors are working on and making frequent announcements on silicon photonics and disaggregated memory systems. In this paper we study the trade-off between cost and performance of building a disaggregated memory system where DRAM modules in the datacenter are pooled, for example in memory-only chassis and racks. The compute pool and the memory pool are interconnected by an optical interconnect to overcome the distance and bandwidth issues of electrical fabrics. We construct a simple cost model that includes the cost of latency, cost of bandwidth and the savings expected from a disaggregated memory system. We then identify the level at which a disaggregated memory system becomes cost competitive with a traditional direct attached memory system. Our analysis shows that a rack-scale disaggregated memory system will have a non-trivial performance penalty, and at the datacenter scale the penalty is impractically high, and the optical interconnect costs are at least a factor of 10 more expensive than where they should be when compared to the traditional direct attached memory systems. | The authors in REF studied the trade-off between cost and performance of building a disaggregated memory system; they constructed a simple cost model that compares the savings expected from a disaggregated memory system to the expected costs, such as latency and bandwidth costs, and then identified the level at which a disaggregated memory system becomes cost competitive with a traditional direct-attached memory system. | 7563532 | Disaggregated and optically interconnected memory: when will it be cost effective? | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
Influence maximization is a well-studied problem that asks for a small set of influential users from a social network, such that by targeting them as early adopters, the expected total adoption through influence cascades over the network is maximized. However, almost all prior work focuses on cascades of a single propagating entity or purely-competitive entities. In this work, we propose the Comparative Independent Cascade (Com-IC) model that covers the full spectrum of entity interactions from competition to complementarity. In Com-IC, users' adoption decisions depend not only on edge-level information propagation, but also on a node-level automaton whose behavior is governed by a set of model parameters, enabling our model to capture not only competition, but also complementarity, to any possible degree. We study two natural optimization problems, Self Influence Maximization and Complementary Influence Maximization, in a novel setting with complementary entities. Both problems are NP-hard, and we devise efficient and effective approximation algorithms via non-trivial techniques based on reverse-reachable sets and a novel "sandwich approximation" strategy. The applicability of both techniques extends beyond our model and problems. Our experiments show that the proposed algorithms consistently outperform intuitive baselines in four real-world social networks, often by a significant margin. In addition, we learn model parameters from real user action logs. Review of Classical IC Model. In the IC model [16], there is just one entity (e.g., idea or product) being propagated through the network. An instance of the model has a directed graph G = (V, E, p) where p : E → [0, 1], and a seed set S ⊆ V. For convenience, we use p_{u,v} for p(u, v). At time step 0, the seeds are active and all other nodes are inactive. Propagation proceeds in discrete time steps. At time t, every node u that became active at t−1 makes one attempt to activate each of its inactive out-neighbors v. This can be seen as node u "testing" if the edge (u, v) is "live" or "blocked". The out-neighbor v becomes active at t iff the edge is live. The propagation ends when no new nodes become active. Key differences from IC model. In the Comparative IC model (footnote 1: No generality is lost in assuming seeds adopt an item without testing the NLA: for every v ∈ V, we can create two dummy nodes v_A, v_B and edges (v_A, v) and (v_B, v) with p_{v_A,v} = p_{v_B,v} = 1. Requiring seeds to go through the NLA is equivalent to constraining that A-seeds (B-seeds) be selected from all v_A's (resp. v_B's).) | The related complementary opinion is extended by Lu et al. REF, who proposed the Comparative Independent Cascade (Com-IC) model consisting of edge-level information propagation and a Node-Level Automaton (NLA) that ultimately makes adoption decisions based on a set of model parameters called Global Adoption Probabilities (GAPs), to address the Complementary Influence Maximization problem. | 3471972 | From Competition to Complementarity: Comparative Influence Diffusion and Maximization | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science",
"Physics"
]
} |
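
The excerpt above restates the classical IC model precisely: seeds are active at step 0, and each newly activated node gets a single chance to activate each inactive out-neighbor with the corresponding edge probability. A minimal Monte Carlo sketch of that classical single-entity model follows; it does not implement Com-IC's node-level automaton, and the adjacency-dict representation is an assumption.

```python
import random

def simulate_ic(graph, seeds, rng=random.Random(0)):
    """One cascade of the classical Independent Cascade model.
    graph: dict mapping node -> list of (out_neighbor, edge_probability).
    seeds: iterable of seed nodes, active at time step 0.
    Returns the set of nodes active when the cascade ends."""
    active = set(seeds)
    frontier = list(seeds)                     # nodes activated in the previous step
    while frontier:
        new_frontier = []
        for u in frontier:
            for v, p_uv in graph.get(u, []):
                # u makes a single attempt to activate each inactive out-neighbor
                if v not in active and rng.random() < p_uv:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active

def expected_spread(graph, seeds, runs=1000):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(42)
    return sum(len(simulate_ic(graph, seeds, rng)) for _ in range(runs)) / runs

if __name__ == "__main__":
    g = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.3)], 2: [(3, 0.3)], 3: []}
    print(expected_spread(g, seeds=[0]))
```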
A graph has growth rate k if the number of nodes in any subgraph with diameter r is bounded by O(r^k). The communication graphs of wireless networks and peer-to-peer networks often have small growth rate. In this paper we study the tradeoff between two quality measures for routing in growth restricted graphs. The two measures we consider are the stretch factor, which measures the lengths of the routing paths, and the load balancing ratio, which measures how evenly the traffic is distributed. We show that if the routing algorithm is required to use paths with stretch factor c, then its load balancing ratio is bounded by O((n/c)^{1−1/k}), where k is the graph's growth rate. We illustrate our results by focusing on the unit disk graph for modeling wireless networks, in which two nodes have direct communication if their distance is under a certain threshold. We show that if the maximum density of the nodes is bounded by ρ, there exists a routing scheme such that the stretch factor of routing paths is at most c, and the maximum load on the nodes is at most O(min(√(ρn/c), n/c)) times the optimum. In addition, the bound on the load balancing ratio is tight in the worst case. As a special case, when the density is bounded by a constant, the shortest path routing has a load balancing ratio of O(√n). The result extends to k-dimensional unit ball graphs and graphs with growth rate k. We also discuss algorithmic issues for load balanced short path routing and for load balanced routing in spanner graphs. | For instance, for growth-bounded wireless networks, Gao and Zhang REF show routing algorithms that simultaneously achieve a stretch factor of c and a load balancing ratio of O((n/c)^{1−1/k}) where k is the growth rate. | 207794 | Tradeoffs between stretch factor and load balancing ratio in routing on growth restricted graphs | {
"venue": "PODC '04",
"journal": null,
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
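
As a worked instance of the bound quoted above (a plain substitution, not an additional result from the paper): unit disk graphs have growth rate k = 2, so the stretch/load trade-off specializes as follows.

```latex
% Instantiating the stretch/load trade-off at k = 2 (unit disk graphs):
\[
  \text{load balancing ratio} \;=\; O\!\Big(\big(\tfrac{n}{c}\big)^{1-\frac{1}{k}}\Big)
  \;\Big|_{k=2} \;=\; O\!\Big(\sqrt{\tfrac{n}{c}}\Big),
\]
% and with constant node density and constant stretch (e.g. shortest-path routing),
% this reduces to the O(\sqrt{n}) ratio stated in the abstract.
```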
This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird's advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators. Value-based reinforcement learning is an attractive solution to planning problems in environments with unknown, unstructured dynamics. In its canonical form, value-based reinforcement learning produces successive refinements of an initial value function through repeated application of a convergent operator. In particular, value iteration (Bellman 1957) directly computes the value function through the iterated evaluation of Bellman's equation, either exactly or from samples (e.g. Q-Learning, Watkins 1989). In its simplest form, value iteration begins with an initial value function V_0 and successively computes V_{k+1} := T V_k, where T is the Bellman operator. When the environment dynamics are unknown, V_k is typically replaced by Q_k, the state-action value function, and T is approximated by an empirical Bellman operator. The fixed point of the Bellman operator, Q*, is the optimal state-action value function or optimal Q-function, from which an optimal policy π* can be recovered. In this paper we argue that the optimal Q-function is inconsistent, in the sense that for any action a which is suboptimal in state x, Bellman's equation for Q*(x, a) describes the value of a nonstationary policy: upon returning to x, this policy selects π*(x) rather than a. While preserving global consistency appears impractical, we propose a simple modification to the Bellman operator which provides us with a first-order solution to the inconsistency problem. Accordingly, we call our new operator the consistent Bellman operator. We show that the consistent Bellman operator generally devalues suboptimal actions but preserves the set of optimal policies. As a result, the action gap - the value difference between optimal and second best actions - increases. Increasing the action gap is advantageous in the presence of approximation or estimation error.
While numerous alternatives to the Bellman operator have been put forward (e.g. recently Azar et al. 2011; Bertsekas and Yu 2012) , we believe our work to be the first to propose such a major departure from the canonical fixed-point condition required from an optimality-preserving operator. As proof of the richness of this new operator family we describe a few practical instantiations with unique properties. We use our operators to obtain state-of-the-art empirical | More recently, it was shown that the advantage learning algorithm is a gap-increasing operator REF . | 1907310 | Increasing the Action Gap: New Operators for Reinforcement Learning | {
"venue": "Bellemare, Marc G., Ostrovski, G., Guez, A., Thomas, Philip S., and Munos, Remi. Increasing the Action Gap: New Operators for Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 2016",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
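
The abstract above argues for operators that increase the action gap, citing advantage learning as one member of the family. The tabular sketch below shows a gap-increasing backup in that spirit: apply the standard Bellman backup, then subtract a fraction of the current action gap, which leaves the greedy action unchanged and devalues the rest. The exact operator definitions (consistent Bellman operator, advantage learning) are given in the paper; the form and the parameter `alpha` used here are assumptions for illustration.

```python
import numpy as np

def bellman_operator(Q, P, R, gamma):
    """Standard Bellman optimality backup for a tabular MDP.
    P: transition tensor P[s, a, s'], R: reward matrix R[s, a]."""
    V = Q.max(axis=1)                          # V(s') = max_b Q(s', b)
    return R + gamma * P @ V                   # (T Q)(s, a)

def gap_increasing_operator(Q, P, R, gamma, alpha=0.5):
    """Gap-increasing backup in the spirit of advantage learning:
    apply the Bellman backup, then subtract alpha * (max_b Q(s, b) - Q(s, a)),
    which leaves the greedy action untouched and devalues the others."""
    TQ = bellman_operator(Q, P, R, gamma)
    gap = Q.max(axis=1, keepdims=True) - Q     # current action gap, zero for greedy actions
    return TQ - alpha * gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nS, nA, gamma = 4, 3, 0.9
    P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # random transition kernel
    R = rng.uniform(size=(nS, nA))
    Q = np.zeros((nS, nA))
    for _ in range(200):
        Q = gap_increasing_operator(Q, P, R, gamma)
    print("greedy policy:", Q.argmax(axis=1))
    print("action gaps:", np.round(Q.max(axis=1) - np.sort(Q, axis=1)[:, -2], 3))
```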
Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system. | REF analyzes phonological factors in social media writing. | 2655326 | Phonological Factors in Social Media Writing | {
"venue": "LASM",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
This paper presents a statistical decision procedure for lexical ambiguity resolution. The algorithm exploits both local syntactic patterns and more distant collocational evidence, generating an efficient, effective, and highly perspicuous recipe for resolving a given ambiguity. By identifying and utilizing only the single best disambiguating evidence in a target context, the algorithm avoids the problematic complex modeling of statistical dependencies. Although directly applicable to a wide class of ambiguities, the algorithm is described and evaluated in a realistic case study, the problem of restoring missing accents in Spanish and French text. Current accuracy exceeds 99% on the full task, and typically is over 90% for even the most difficult ambiguities. | REF presents a statistical procedure for lexical ambiguity resolution, based on decision lists, that achieved good results when applied to accent restoration in Spanish and French. | 1580335 | Decision Lists For Lexical Ambiguity Resolution: Application To Accent Restoration In Spanish And French | {
"venue": "Annual Meeting Of The Association For Computational Linguistics",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
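
The decision-list procedure above applies only the single best piece of disambiguating evidence that matches the target context. A minimal sketch of that idea follows (rules ordered by a smoothed log-likelihood ratio, first match wins); the feature names, smoothing constant, and toy accent-restoration examples are illustrative assumptions, not the paper's exact recipe.

```python
import math
from collections import defaultdict

def build_decision_list(examples, alpha=0.1):
    """examples: list of (features, label) pairs with binary labels {0, 1}.
    Returns rules (feature, predicted_label, strength) sorted by the smoothed
    absolute log-likelihood ratio, strongest evidence first."""
    counts = defaultdict(lambda: [0, 0])               # feature -> [count_label0, count_label1]
    for features, label in examples:
        for f in features:
            counts[f][label] += 1
    rules = []
    for f, (c0, c1) in counts.items():
        llr = math.log((c1 + alpha) / (c0 + alpha))    # evidence strength for label 1 vs 0
        rules.append((f, int(llr > 0), abs(llr)))
    return sorted(rules, key=lambda r: r[2], reverse=True)

def classify(rules, features, default=1):
    """Apply only the single best matching rule, as in a decision list."""
    for f, label, _ in rules:
        if f in features:
            return label
    return default

if __name__ == "__main__":
    train = [({"w-1=rio", "w+1=seco"}, 0), ({"w-1=rio", "ctx=agua"}, 0),
             ({"w+1=de-risa", "ctx=broma"}, 1), ({"ctx=broma"}, 1)]
    rules = build_decision_list(train)
    print(classify(rules, {"w-1=rio", "ctx=nada"}))    # -> 0
```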
Online content exhibits rich temporal dynamics, and diverse real-time user-generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention, remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on the Web and broaden the understanding of the dynamics of human attention. | Yang and Leskovec REF describe six classes of temporal shapes of attention. | 1412278 | Patterns of temporal variation in online media | {
"venue": "WSDM '11",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
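
K-SC as described above depends on a time-series similarity measure that is invariant to scaling and shifting of the attention curve. The sketch below implements one such measure in that spirit, minimizing the normalized residual over integer (here circular) shifts with the optimal scaling computed in closed form; the exact metric and the spectral centroid update are defined in the paper, so treat this as an illustrative approximation.

```python
import numpy as np

def scale_shift_distance(x, y, max_shift=5):
    """Distance between two time series that ignores scaling of y and small shifts:
    min over shifts q and scalings a of ||x - a * shift(y, q)|| / ||x||.
    For a fixed shift, the optimal scaling has the closed form a = <x, y_q> / ||y_q||^2."""
    x = np.asarray(x, dtype=float)
    best = np.inf
    for q in range(-max_shift, max_shift + 1):
        y_q = np.roll(np.asarray(y, dtype=float), q)   # circular shift, for simplicity
        denom = np.dot(y_q, y_q)
        a = np.dot(x, y_q) / denom if denom > 0 else 0.0
        d = np.linalg.norm(x - a * y_q) / np.linalg.norm(x)
        best = min(best, d)
    return best

if __name__ == "__main__":
    t = np.arange(50)
    spike = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)       # a burst of attention
    same_shape = 7.0 * np.roll(spike, 3)               # scaled and shifted copy
    different = np.exp(-0.1 * t)                       # slow decay instead
    print(scale_shift_distance(spike, same_shape))     # ~0 (same temporal shape)
    print(scale_shift_distance(spike, different))      # clearly larger
```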
Land use classification is a fundamental task of information extraction from remote sensing imagery. Semantic segmentation based on deep convolutional neural networks (DCNNs) has shown outstanding performance in this task. However, these methods are still affected by the loss of spatial features. In this study, we proposed a new network, called the dense-coordconv network (DCCN), to reduce the loss of spatial features and strengthen object boundaries. In this network, the coordconv module is introduced into the improved DenseNet architecture to improve spatial information by putting coordinate information into feature maps. The proposed DCCN achieved strong performance on the public ISPRS (International Society for Photogrammetry and Remote Sensing) 2D semantic labeling benchmark dataset. Compared with the results of other deep convolutional neural networks (U-net, SegNet, Deeplab-V3), the DCCN method improved considerably, with the OA (overall accuracy) and mean F1 score reaching 89.48% and 86.89%, respectively. This indicates that the DCCN method can effectively reduce the loss of spatial features and improve the accuracy of semantic segmentation in high resolution remote sensing imagery. | Yao et al. REF proposed the dense-coordconv network (DCCN) to reduce the loss of spatial features and strengthen object boundaries. | 195357122 | Land Use Classification of the Deep Convolutional Neural Network Method Reducing the Loss of Spatial Features | {
"venue": "Sensors (Basel, Switzerland)",
"journal": "Sensors (Basel, Switzerland)",
"mag_field_of_study": [
"Engineering",
"Medicine",
"Computer Science"
]
} |
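
The DCCN row above builds on the coordconv idea of injecting coordinate information into feature maps before convolution. Below is a minimal PyTorch sketch of that generic building block (two normalized coordinate channels concatenated ahead of a standard convolution); it is not the DCCN architecture itself, and the layer sizes in the usage example are arbitrary.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d preceded by concatenation of two normalized coordinate channels,
    so the convolution can see where in the feature map each activation sits."""
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

if __name__ == "__main__":
    layer = CoordConv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
    out = layer(torch.randn(2, 3, 64, 64))
    print(out.shape)  # torch.Size([2, 16, 64, 64])
```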
We describe a method for incorporating syntactic information in statistical machine translation systems. The first step of the method is to parse the source language string that is being translated. The second step is to apply a series of transformations to the parse tree, effectively reordering the surface string on the source language side of the translation system. The goal of this step is to recover an underlying word order that is closer to the target language word-order than the original string. The reordering approach is applied as a pre-processing step in both the training and decoding phases of a phrase-based statistical MT system. We describe experiments on translation from German to English, showing an improvement from 25.2% Bleu score for a baseline system to 26.8% Bleu score for the system with reordering, a statistically significant improvement. | For example, REF parse the sentences of the source language and restructure the word order, such that it matches the target language word order more closely. | 11142668 | Clause Restructuring For Statistical Machine Translation | {
"venue": "Annual Meeting Of The Association For Computational Linguistics",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract. Given an undirected graph G = (V, E), the density of a subgraph on vertex set S is defined as d(S) = |E(S)|/|S|, where E(S) is the set of edges in the subgraph induced by nodes in S. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph one notion of density given by Kannan and Vinay [12] is as follows: given subsets S and T of vertices, the density of the subgraph is d(S, T) = |E(S, T)|/√(|S||T|), where E(S, T) is the set of edges going from S to T. Without any size constraints, a subgraph of maximum density can be found in polynomial time. When we require the subgraph to have a specified size, the problem of finding a maximum density subgraph becomes NP-hard. In this paper we focus on developing fast polynomial time algorithms for several variations of dense subgraph problems for both directed and undirected graphs. When there is no size bound, we extend the flow based technique for obtaining a densest subgraph in directed graphs and also give a linear time 2-approximation algorithm for it. When a size lower bound is specified for both directed and undirected cases, we show that the problem is NP-complete and give fast algorithms to find subgraphs within a factor 2 of the optimum density. We also show that solving the densest subgraph problem with an upper bound on size is as hard as solving the problem with an exact size constraint, within a constant factor. | In REF, they also consider the density versions of the problems in directed graphs. | 1774289 | On finding dense subgraphs | {
"venue": "In ICALP β09",
"journal": null,
"mag_field_of_study": [
"Mathematics",
"Computer Science"
]
} |
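
For the undirected density d(S) = |E(S)|/|S| discussed above, the classical greedy peeling algorithm (repeatedly remove a minimum-degree vertex and keep the best intermediate subgraph) yields a 2-approximation when there is no size constraint. The sketch below implements that textbook baseline to illustrate the density notion; it is not the paper's size-constrained or directed-graph algorithms.

```python
import heapq

def densest_subgraph_peeling(adj):
    """Greedy peeling for max |E(S)|/|S| on an undirected graph given as
    adj: dict node -> set of neighbors. Returns (best_density, best_vertex_set).
    Classical 2-approximation: the returned density is at least half the optimum."""
    adj = {u: set(vs) for u, vs in adj.items()}
    m = sum(len(vs) for vs in adj.values()) // 2
    n = len(adj)
    heap = [(len(vs), u) for u, vs in adj.items()]
    heapq.heapify(heap)
    removed, order = set(), []
    best_density = m / n if n else 0.0
    best_size, edges_left, nodes_left = n, m, n
    while nodes_left > 0:
        deg, u = heapq.heappop(heap)
        if u in removed or deg != len(adj[u]):
            continue                                   # stale heap entry
        removed.add(u)
        order.append(u)
        edges_left -= len(adj[u])
        nodes_left -= 1
        for v in adj[u]:
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
        adj[u] = set()
        if nodes_left and edges_left / nodes_left > best_density:
            best_density, best_size = edges_left / nodes_left, nodes_left
    best_set = set(adj) - set(order[:len(order) - best_size]) if best_size else set()
    return best_density, best_set

if __name__ == "__main__":
    clique = {i: {j for j in range(4) if j != i} for i in range(4)}   # K4, density 1.5
    clique[4] = {0}
    clique[0].add(4)                                                   # plus a pendant vertex
    print(densest_subgraph_peeling(clique))   # (1.5, {0, 1, 2, 3})
```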
Well distributed point sets play an important role in a variety of computer graphics contexts, such as anti-aliasing, global illumination, halftoning, non-photorealistic rendering, point-based modeling and rendering, and geometry processing. In this paper, we introduce a novel technique for rapidly generating large point sets possessing a blue noise Fourier spectrum and high visual quality. Our technique generates non-periodic point sets, distributed over arbitrarily large areas. The local density of a point set may be prescribed by an arbitrary target density function, without any preset bound on the maximum density. Our technique is deterministic and tile-based; thus, any local portion of a potentially infinite point set may be consistently regenerated as needed. The memory footprint of the technique is constant, and the cost to generate any local portion of the point set is proportional to the integral over the target density in that area. These properties make our technique highly suitable for a variety of real-time interactive applications, some of which are demonstrated in the paper. Our technique utilizes a set of carefully constructed progressive and recursive blue noise Wang tiles. The use of Wang tiles enables the generation of infinite non-periodic tilings. The progressive point sets inside each tile are able to produce spatially varying point densities. Recursion allows our technique to adaptively subdivide tiles only where high density is required, and makes it possible to zoom into point sets by an arbitrary amount, while maintaining a constant apparent density. | The technique of REF generates non-periodic point sets that can be used for tiling, allowing viewers to interactively resize the stipple image. | 11007853 | Recursive Wang tiles for real-time blue noise | {
"venue": "SIGGRAPH '06",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-Content-centric networking (CCN) is a promising framework to rebuild the Internet's forwarding substrate around the concept of content. CCN advocates ubiquitous in-network caching to enhance content delivery, and thus each router has storage space to cache frequently requested content. In this work, we focus on the cache allocation problem, namely, how to distribute the cache capacity across routers under a constrained total storage budget for the network. We first formulate this problem as a content placement problem and obtain the optimal solution by a two-step method. We then propose a suboptimal heuristic method based on node centrality, which is more practical in dynamic networks with frequent content publishing. We investigate through simulations the factors that affect the optimal cache allocation, and perhaps more importantly we use a real-life Internet topology and video access logs from a large scale Internet video provider to evaluate the performance of various cache allocation methods. We observe that network topology and content popularity are two important factors that affect where exactly should cache capacity be placed. Further, the heuristic method comes with only a very limited performance penalty compared to the optimal allocation. Finally, using our findings, we provide recommendations for network operators on the best deployment of CCN caches capacity over routers. | Wang et al. REF address the distribution of the cache capacity across routers under a constrained total storage budget for the network. | 12943609 | Design and Evaluation of the Optimal Cache Allocation for Content-Centric Networking | {
"venue": "IEEE Transactions on Computers",
"journal": "IEEE Transactions on Computers",
"mag_field_of_study": [
"Computer Science"
]
} |
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large batch training can speed up training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger Paracrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs. | Other work shows that training on 128 GPUs can significantly boost the experimental results and shorten the training time REF . | 44131019 | Scaling Neural Machine Translation | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
Emojis have evolved as complementary sources for expressing emotion in social-media platforms where posts are mostly composed of texts and images. In order to increase the expressiveness of the social media posts, users associate relevant emojis with their posts. Incorporating domain knowledge has improved machine understanding of text. In this paper, we investigate whether domain knowledge for emoji can improve the accuracy of the emoji recommendation task in case of multimedia posts composed of image and text. Our emoji recommendation can suggest accurate emojis by exploiting both visual and textual content from social media posts as well as domain knowledge from EmojiNet. Experimental results using pre-trained image classifiers and pre-trained word embedding models on a Twitter dataset show that our results outperform the current state-of-the-art by 9.6%. We also present a user study evaluation of our recommendation system on a set of images chosen from the MSCOCO dataset. | EmojiNet has also improved the accuracies of emoji prediction in case of images REF . | 52096100 | Which Emoji Talks Best for My Picture? | {
"venue": "2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)",
"journal": "2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)",
"mag_field_of_study": [
"Computer Science"
]
} |
It has been several years since Massive Open Online Courses (MOOC) have entered the higher education environment and many forms have emerged from this new way of acquiring knowledge. Teachers have been incorporating MOOCs with more or less success in a traditional classroom setting to support various learning preferences, introduce this new way of learning to students, and to make learning available to those who might not be able to follow traditional instructions. This paper researches a blended learning model where a MOOC has been integrated in a traditional classroom. A learning outcomes based approach was implemented, that supported a balanced student workload. Qualitative approach was used to analyse students' learning diaries. Based on this research, benefits of integrating a MOOC with classroom based teaching were identified, as well as barriers that can hinder the successful implementation. Recommendations for teachers are provided. | In REF , BraliΔ and Divjak proposed a blended learning model that integrates Massive Open Online Course (MOOC) into a traditional classroom. | 20647488 | Integrating MOOCs in traditionally taught courses: achieving learning outcomes with blended learning | {
"venue": "International Journal of Educational Technology in Higher Education",
"journal": "International Journal of Educational Technology in Higher Education",
"mag_field_of_study": [
"Sociology"
]
} |
This paper advocates expansion of the role of Bayesian statistical inference when formally quantifying uncertainty in computer models defined by systems of ordinary or partial differential equations. We adopt the perspective that implicitly defined infinite dimensional functions representing model states are objects to be inferred probabilistically. We develop a general methodology for the probabilistic integration of differential equations via model based updating of a joint prior measure on the space of functions and their temporal and spatial derivatives. This results in a posterior measure over functions reflecting how well they satisfy the system of differential equations and corresponding initial and boundary values. We show how this posterior measure can be naturally incorporated within the Kennedy and O'Hagan framework for uncertainty quantification and provides a fully Bayesian approach to model calibration. By taking this probabilistic viewpoint, the full force of Bayesian inference can be exploited when seeking to coherently quantify and propagate epistemic uncertainty in computer models of complex natural and physical systems. A broad variety of examples are provided to illustrate the potential of this framework for characterising discretization uncertainty, including initial value, delay, and boundary value differential equations, as well as partial differential equations. We also demonstrate our methodology on a large scale system, by modeling discretization uncertainty in the solution of the Navier-Stokes equations of fluid flow, reduced to over 16,000 coupled and stiff ordinary differential equations. Finally, we discuss the wide range of open research themes that follow from the work presented. Keywords: Bayesian Numerical Analysis, Uncertainty Quantification, Gaussian Processes, Differential Equation Models, Uncertainty in Computer Models. In all of the sciences, economics and engineering there is a fundamental reliance on the use of differential equation models to describe complex phenomena concisely, using few but readily interpretable parameters, which we denote by ΞΈ. In systems of differential equations, the derivatives with respect to spatial variables, xβ D β R d , and temporal variables, t β [a, b] β R + , are related to the implicitly defined states, u(x, t) β R P , which are hence often analytically intractable. The main challenge of working with differential equation models, from both mathematical and statistical perspectives, is that the solutions are generally not available in closed form. Consequently, numerical solvers are used to approximate a system solution over a discretisation grid. Numerical integration of a system of differential equations yields a deterministic approximation based on subjective choices regarding the order of the numerical method, the specified error tolerance, and the implied discretisation grid, all of which impact the quality of the approximation (Butcher, 2008; Higham, 1996) . These methods implicitly make strong assumptions, for example, that small changes in the step size result in small cumulative changes in the output, and that there exists a unique, well-conditioned solution. Additionally, meaningful inferences from the model are dependent on the assumption that the approximated states contain negligible numerical error. 
Although the specified error tolerance and associated point-wise errors can be reduced by refining the discretisation grid, there is naturally a trade off between the computational effort involved and the accuracy of the approximated solution, which determines whether the assumption of negligible error is reasonable in * oksana.chkrebtii@gmail.com; address: Department of Statistics, The Ohio State University, 1958 Neil Avenue, 404 Cockins Hall, Columbus, OH 43210-1247 1 arXiv:1306.2365v2 [stat.ME] 28 Apr 2014 practice. Local and global numerical errors are defined point-wise and relate to the asymptotic behaviour of the deterministic approximation of the model. This form of numerical error analysis is not well suited for quantifying the functional uncertainty in the solution for the purpose of fully probabilistic model inference. An additional challenge when working with differential equation models is that in some cases classical solution approximations provide potentially misleading trajectories. Long-time numerical solutions may be globally sensitive to truncation errors introduced at each discretisation point (e.g. Sauer et al., 1997) . Illconditioned models can give rise to another source of uncertainty in the form of solution multiplicity (Ascher et al., 1988; Beyn and Doedel, 1981) . This often cannot be readily verified analytically and clearly poses a problem for classical numerical integration methods that produce a single deterministic solution (Keller, 1968) . In this paper we address the modelling challenges that arise when using the following classes of deterministic ordinary and partial differential equation models, for which analytical solutions are generally not available. Ordinary Differential Equation (ODE) models implicitly represent the derivative of states u(t), with respect to time t through u t (t) := d dt u(t) = f (t, u(t), ΞΈ), where we make the standard assumption that f is continuous in the first argument and Lipschitz continuous in the second argument. Inputs and boundary constraints define several model variants. The Initial Value Problem (IVP) models the system states with fixed initial condition u * (a), evolving according to the ODE as follows, u t (t) = f t, u(t), ΞΈ , t β [a, b], u(a) = u * (a). The existence of a solution is guaranteed under mild conditions (see for example, Butcher, 2008; Coddington and Levinson, 1955) . Such models may be high dimensional and contain other complexities such as algebraic components, functional inputs, or higher order terms. While IVP models specify a fixed initial condition on the system states, the Mixed Boundary Value Problem (MBVP) may constrain different states at different time points. Typically these constraints are imposed at the ends of the time domain giving the general form for two state mixed boundary value problems, which can be straightforwardly generalised to higher dimensions and extrapolated beyond the final time point b. Whereas a unique IVP solution exists under relatively mild conditions, imposing mixed boundary constraints can result in multiple solutions (e.g. Keller, 1968) introducing severe problems for parameter estimation methods. The Delay Initial Function Problem (DIFP) generalises the initial constraint of IVPs to an initial function, Ο(t), thereby relating the derivative of a process to both present and past states at lags Ο j β [0, β), DIFPs are well suited to describing biological and physical dynamics that take time to propagate through systems. 
However, they pose challenges to numerical techniques due to potentially large and structured truncation error (Bellen and Zennaro, 2003) . Furthermore, the sensitivity of DIFP solutions to small changes in the initial function, such as those due to interpolation error, may push an otherwise well behaved system into a chaotic regime (Taylor and Campbell, 2007) . Partial Differential Equation (PDE) models represent the derivative of states with respect to multiple arguments, for example time and spatial variables. The main classes of PDE models are based on elliptic, parabolic and hyperbolic equations. Further adding to their complexity, functional boundary constraints and initial conditions make PDE models more challenging, while their underlying theory is less developed compared to ODE models (Polyanin and Zaitsev, 2004 Numerical discretisation error is often characterised in the form of an upper bound, referred to as verification error in the mathematics and engineering literature (Oberkampf and Roy, 2010) . In the many cases where the error is not negligible, for example in large scale Navier-Stokes equations used in modelling fluid flow, an accurate representation of the error propagation resulting from the approximation is required. An illustrative example of quantifying this numerical error in fluid dynamics is provided by Oliver et al. (2014) and a preliminary attempt at a Bayesian formalisation of the uncertainty induced is described by Oliver and Moser (2011) . The approach advocated in this paper is different in nature. We characterise discretisation uncertainty using a probabilistic representation of the solution conditional on fixed initial conditions and solver parameters. Our methodology challenges some of the assumptions used by numerical solvers and yields algorithms robust to many of the inferential issues that occur. Increasing attention is being paid to the challenges associated with quantifying uncertainty in system models, and in particular those based on mathematical descriptions of natural or physical processes using differential equations (Ghanem and Spanos, 2003; Huttunen and Kaipio, 2007; Kaipio et al., 2004; Marzouk and Najm, 2009; Marzouk et al., 2007; Stuart, 2010) . The important question of how uncertainty propagates through complex systems arises in a variety of contexts, and the focus is often on two main problems. The forward problem is concerned with how input uncertainty (i.e. in the parameters and initial conditions) propagates over the numerical trajectory. This type of problem has been investigated in the engineering and applied mathematics literature, making use of sampling based methods, perturbation of initial system states (Mosbach and Turner, 2009), moment closure approaches, and more recently polynomial chaos methods (see Ghanem and Spanos, 2003; Xiu, 2009 , for a complete overview). The inverse problem concerns the uncertainty in the model inputs given measurements with error of the model outputs, and much work in this area has been presented in the statistics literature (Bock, 1983; Brunel, 2008; Calderhead and Girolami, 2011; Calderhead et al., 2009; Campbell and Lele, 2013; Campbell and Steele, 2011; Campbell and Chkrebtii, 2013; Dowd, 2007; Gugushvili and Klaassen, 2012; Ionides et al., 2006; Liang and Wu, 2008; Ramsay et al., 2007; Xue et al., 2010; Xun et al., 2014). 
Additionally, the inverse problem itself may be ill-conditioned when the data does not lie in the solution space of the differential equation model (BrynjarsdΓ³ttir and O'Hagan, 2014; Kennedy and O'Hagan, 2001) . The issue of discretisation uncertainty is intrinsic to both the forward and inverse problems, and it seems natural to consider the unknown solution of the differential equation itself as part of the inferential framework. There are many examples suggesting that a probabilistic functional approach is well suited for modelling solution uncertainty for differential equation models; indeed solving PDEs involves estimation of a field, DIFPs require estimation of an unknown function, and MBVPs may have multiple solutions. All of these problems may be naturally framed in terms of measures over a function space (Stuart, 2010) . A Bayesian functional approach offers a way of defining and updating measures over possible trajectories of a system of differential equations and propagating the resulting uncertainty consistently through the inferential process. The mathematical framework for defining numerical problems on function spaces has been developed through the foundational work of Stuart (2010), Somersalo (2007), and Skilling (1991) , and this is the natural setting that we adopt in our contribution. We develop these ideas further and investigate general approaches to working with classes of differential equations from a Bayesian perspective. The use of probability in numerical analysis has a long history. In the 1950s, Monte Carlo methods were originally developed to estimate analytically intractable high dimensional integrals, and extensions of this methodology have since had an immense impact in a wide variety of areas. Markov chain Monte Carlo (MCMC) methods have allowed the rapid development and application of Bayesian statistical approaches to quantifying uncertainty in scientific and engineering problems. Theoretical analyses of Bayesian statistical approaches to classical numerical analysis problems appear to date back all the way to PoincarΓ©, as eloquently summarised by Diaconis (1988) . More recently, O'Hagan (1992); Skilling (1991) also provided practical motivations for developing and employing Bayesian techniques to characterise the uncertainty that results from computational restrictions in the case of integration, since we cannot evaluate a function at every point in its input space and must therefore account for discretisation uncertainty. Such work is closely related to the idea of computer emulation of large and complex computer models; the use of Gaussian processes to model functions that are computationally very expensive to evaluate has found application in many challenging areas of science, such as climate prediction and geophysics (Conti and O'Hagan, 2010; Kennedy 3 and O'Hagan, 2001; Tokmakian et al., 2012) . Other recent examples of applying Bayesian approaches to numerical integration include Hennig and Hauberg (2014) , who apply Skilling's approach to the calculation of Riemannian statistics, and Kennedy (1998); Osborne et al. (2012) , whose methods allow for a more informed probabilistic approach to the choice of integration points, based on the predicted variance of the surrogate model given limited function evaluations. 
However, such approaches assume no error on the function evaluation itself, and are therefore not applicable to differential equation models, where the function evaluation gives only an approximate derivative of the solution of the system at the next discrete time point, based on the current predictive distributions over the states. The aim of this paper is to make a case for probabilistically characterising discretisation error when solving general classes of differential equations, and for viewing this epistemic uncertainty as an important and integral part of the overall model inference framework. We first develop a probabilistic differential equation solver that characterises discretisation uncertainty in the solution, by formalising and substantially extending the ideas first presented by Skilling (1991) . The sequential sampling approach suggested is made computationally feasible through recursive Bayesian updating, and it closely follows the sequential construction of a proof of consistency that we present in the Appendix, ensuring the algorithm converges to the exact solution as the time-step tends to zero. In our approach, we treat the sequential model evaluations as auxiliary parameters, over which the solution can then be marginalised, and we further extend the basic method to the cases of mixed boundary value problems exhibiting multiplicity of solutions, delay initial function problems with partially observed initial functions, stiff and chaotic PDEs via spectral projection, as well as providing a framework for directly solving PDEs. We give an example of the scalability of our approach using a Navier-Stokes system involving the solution of over 16,000 coupled stiff ordinary differential equations. Finally, we adopt a flexible and general forward-simulation approach that allows us to embed the formal quantification of discretisation uncertainty for differential equations within the Kennedy and O'Hagan framework, hence defining it as part of the full inferential procedure for inverse problems. We provide code for our Probabilistic Differential Equation Solver (PODES), which allows replication of all results presented in this paper. This is available at http://web.warwick.ac.uk/PODES. We begin in Section 2 by discussing model discrepancy within the Bayesian approach to uncertainty quantification, as first presented by Kennedy and O'Hagan (2001) and subsequently adopted in practice throughout the statistics literature. We discuss how existing numerical approaches to differential equation integration ignore discretisation uncertainty when the solution is not available in closed form. We then develop a general Bayesian framework for modelling this epistemic uncertainty in differential equations, inspired by the work of Skilling (1991) and O'Hagan (1992) . We begin in Section 3 by defining functional priors on the states and derivatives for general ordinary and partial differential equations. Then, in Section 4, we develop a Bayesian approach for updating our prior beliefs given model information obtained sequentially and self-consistently over a given discretisation grid, thus probabilistically characterising the finite dimensional representation of an underlying infinite dimensional solution given model parameters and initial conditions. We show that the resulting posterior trajectory converges in probability to the exact solution under standard assumptions. We then incorporate this methodology into the inverse problem in Section 5, and provide examples in Section 6. 
Finally, we discuss our proposed approach to Bayesian uncertainty quantification for differential equation models and highlight open areas of research. The Appendix contains proofs and further mathematical and algorithmic details. The importance of formally quantifying sources of uncertainty in computer simulations when mathematically modelling complex natural phenomena, such as the weather, ocean currents, ice sheet flow, and cellular protein transport, is widely acknowledged. These model based simulations inform the reasoning process 4 when, for example, assessing financial risk in deciding on oil field bore configurations, or forming government policy in response to extreme weather events. As such, accounting for all sources of uncertainty and propagating them in a coherent manner throughout the entire inference and decision making process is of great importance. Consider data, y(t), observed at discrete time points t = [t 1 , t 2 , . . . , t T ] and a set of model parameters ΞΈ. Using the exact solution of a mathematical model represented by u * (t, ΞΈ), a simplified observation model based on some measurement error structure (t) is, In full generality, a nonlinear transformation G of u * (t, ΞΈ) may be observed, however for expositional clarity we assume this is an identity. In our setting, u * (t, ΞΈ) is the unique function satisfying a general system of differential equations. When performing statistical inference over such models, we are interested in the joint posterior distribution over all unknowns, possibly including the initial and boundary conditions. Throughout the paper, ΞΈ will be augmented to contain all the variables of interest. In the landmark paper of Kennedy and O'Hagan (2001) a number of sources of uncertainty were identified when modelling a natural or physical process. Their acknowledgement of uncertainty from incomplete knowledge of the process and the corresponding model inadequacy motivates a probabilistic view of a modelreality mismatch Ξ΄(t) also studied in recent papers by BrynjarsdΓ³ttir and O'Hagan (2014); Huttunen and Kaipio (2007); Kaipio and Somersalo (2007); and Stuart (2010) . In the engineering literature, this form of error is also know as validation error (see for example, Oberkampf and Roy, 2010). Following Kennedy and O'Hagan by defining Ξ΄(t) as a random function drawn from a Gaussian Process (GP), the observational model becomes, Due to the lack of an analytical solution for most nonlinear differential equations, the likelihood p y(t) | u * (t, ΞΈ), ΞΈ cannot be obtained in closed form. This issue is dealt with throughout the statistics literature by replacing the exact likelihood with a surrogate, p y(t) |Γ» N (t, ΞΈ), ΞΈ , based on an N -dimensional approximate solution,Γ» N (t, ΞΈ), obtained using numerical integration methods (for example, a Runge-Kutta solver with N time steps) whose accuracy has been well studied (see for example, Henrici, 1964). However, there are still many scenarios for which standard approaches for characterising upper bounds on numerical error, and subsequently propagating this uncertainty throughout the rest of the inferential process are unsatisfactory. Limited computation and coarse mesh size are contributors to the numerical error, as described in the comprehensive overview of Oberkampf and Roy (2010) . 
The seemingly innocuous assumption of negligible numerical integration error can subsequently lead to serious statistical bias and misleading inferences for certain classes of differential equation models, and we illustrate this point in Section 6.1 using a simple example. We may represent this additional uncertainty using the term ΞΆ(t, ΞΈ) = u Kennedy and O'Hagan (2001) suggest modelling the computer code using a Gaussian process that is agnostic to the specific form of the underlying mathematical model, which is therefore considered as a "black box". In the discussion of their paper, H. Wynn argues that the sensitivity equations and other underlying mathematical structures that govern the computer simulation could also be included in the overall uncertainty analysis. In the current paper, we "open the black box" by explicitly modelling the solution and associated discretisation uncertainty,Γ» N (t, ΞΈ) + ΞΆ(t, ΞΈ). This allows the Kennedy and O'Hagan framework to be further enriched by incorporating detailed knowledge of the mathematical model being employed. Noting that the posterior measure over the computer model is defined independently of the observed data suggests a further construction by probabilistically characterising the uncertainty due to system states being defined implicitly by differential equations. This leads to the development of a fully probabilistic scheme for solving differential equations and suggests a powerful new approach to modelling uncertainty in computer models. We model uncertainty in a finite dimensional representation of the infinite dimensional solution through a probability statement on a space of suitably smooth functions. Restricting ourselves to Hilbert spaces for 5 modelling our knowledge of u * (t, ΞΈ), we define a Gaussian prior measure on the function space (Stuart, 2010) . We then directly model our knowledge about the solution via the stochastic process u(t, ΞΈ), thus replacing (6) with y(t) = u(t, ΞΈ) + Ξ΄(t) + (t). The contribution in this paper is in our definition and use of u(t, ΞΈ), therefore for expositional clarity we focus our attention on the joint posterior measure over differential equation model states, parameters, and associated auxiliary parameters, Ξ¨, of our probabilistic model of uncertainty, Our Bayesian probabilistic integration framework for differential equation models fully explores the space of trajectories that approximately satisfy model dynamics under theoretical guarantees of consistency. The framework presented in this paper is demonstrated on models described using nonlinear ordinary, delay, mixed boundary value and partial differential equations, and we explicitly address multiplicity of solutions in Section 6.2. For notational simplicity we hereafter omit the dependence of u on ΞΈ. When considering uncertainty in the infinite dimensional solution of differential equations, the natural measure for spaces of a wide class of functions is Gaussian (Stuart, 2010). As such we define a Gaussian process (GP) prior measure jointly on the state u and its time derivative u t . We will now discuss the prior construction for ordinary differential equations, and then consider the case for partial differential equations. In the remainder of the paper, we will denote by R Ξ» a deterministic, square integrable kernel function with length-scale Ξ» β (0, β), and its integrated version by Q Ξ» (t 1 , t 2 ) = t1 a R Ξ» (s, t 2 )ds. Here a represents the lower boundary of our temporal domain. 
Our model for the derivative has covariance operator cov( , where Ξ± is a prior precision parameter. Therefore the covariance on the state is its integrated version, cov(u(t 1 ), u(t 2 )) = Ξ± The cross covariance terms are defined in a similar manner and denoted as RQ(t 1 , t 2 ) and QR(t 1 , t 2 ) respectively. We note that RQ(t 1 , t 2 ) = QR β (t 1 , t 2 ), where β represents the adjoint. We assume a joint Gaussian prior measure on the state and its derivative, The Gaussian conditional prior measures for e.g. p (u t (t) | u(t)) take the standard forms (Stuart, 2010) . The choice of prior means and covariance structure, as well as the impact and choice of auxiliary parameters, Ξ¨ = [Ξ±, Ξ»], are discussed in the Appendix (Section 8.2). For exposition, we take the auxiliary parameters Ξ¨ as fixed, although we note that propagating a distribution over Ξ¨ through the probabilistic solver is straightforward, and estimation of Ξ¨ is addressed in Section 6.4. This model straightforwardly generalises to ODE problems of order greater than one, by defining the prior jointly on the state, u, and any required derivatives. Alternatively, higher order ODE problems may be restated in the first order by introducing additional states. We have found that adding states to the model is computationally straightforward, so we adopt this approach in practice, working with the prior in equation (8). PDE problems require a prior measure over multivariate trajectories, as well as modelling single or higher order derivatives with respect to spatial inputs. Therefore the prior specification will depend on the PDE model. Consider as an illustrative example the parabolic heat equation, modelling the heat diffusion over time along a single spatial dimension by, 6 One modelling choice for incorporating the spatial component of the PDE into the covariance is to adopt a product structure. The covariance over time is defined as above, and over space may be defined using a similar construction. Let R Β΅ be a kernel function with length-scale Β΅. Q Β΅ (z, x 2 )dz, where c denotes a spatial lower boundary (in (9), this is c = 0). The spatial covariance structures are defined similarly to the temporal form in the ODE example as, where Ξ² is a spatial prior precision parameter. Cross covariances are defined analogously. The prior construction follows by defining a product structure for space and time, In the next section, we describe in detail the framework for sequentially linking the prior in equation (8) with the ODE model. We describe the corresponding algorithm for updating the prior in equation (10) under the heat equation PDE in the Appendix (Section 8.1). We provide examples illustrating its use experimentally in Section 6. The joint prior on the state and its derivative is defined in equation (8) in terms of mean functions m, m t and a covariance structure determined by the choice of kernel function R Ξ» , which is parameterised by the auxiliary variables Ξ± and Ξ». The choice of the covariance kernel should reflect our assumptions regarding the smoothness of the exact but unknown differential equation solution. In the Appendix (Section 8.3), we provide covariance structures based on two kernels: the infinitely differentiable squared exponential and the non-differentiable uniform kernel. Gaussian process models typically match the kernel for a given application to prior information about smoothness of the underlying function (see, for example, Rasmussen and Williams, 2006) . 
It is also important to be aware that imposing unrealistically strict smoothness assumptions on the state space by choice of covariance structure may introduce estimation bias if the exact solution is not at least as smooth. Therefore, in cases where the solution smoothness is not known a priori, one can err on the side of caution by using less regular kernels. In the examples reported in this paper we chose to work with stationary derivative covariance structures for simplicity, however we point out that there will be classes of problems where non-stationary kernels may be more appropriate and can likewise be incorporated into the Gaussian process framework. The mean function is chosen based on any prior knowledge about how the system evolves over time. Typically, however, such prior detailed knowledge is unavailable, in which case it is reasonable to use a constant mean function. Additionally, we condition on the known boundary values, enforcing them by choice of the prior mean. For example, for IVPs, we satisfy the boundary constraint exactly by choosing a differentiable prior mean function m such that m(a) := u * (a). Now that the form of the prior measure has been defined, the corresponding posterior is obtained in the following two sections in a sequential manner. This iterative updating closely follows the sequential structure of the proof of consistency, provided in the Appendix (Section 8.6). Here we point out that, although the amount and quality of prior information regarding the true solution will affect the efficiency of our probabilistic solvers, we can still expect convergence as the time step tends to zero, subject to some standard assumptions on the kernel function. Furthermore, in Section 6.1, we illustrate the extent of prior influence on the probabilistic solution for the heat equation PDE. We find that the use of a zero-mean prior on the spatial and temporal derivatives has minimal impact on the solution, even for reasonably rough discretisation grids. We now introduce a sequential framework to characterise the epistemic model uncertainty component in equation (7). We restrict our attention to the ODE initial value problem (1) and consider more general ODE and PDE problems in Section 6. We model discretisation uncertainty by updating the joint prior on the unknown state and derivative iteratively, by evaluating the deterministic ODE model at a finite number N of grid points over the temporal domain, which we denote by f 1:N . Algorithm 1 produces a sample from the joint distribution, [u, Let us consider the problem of modelling uncertainty in the exact but unknown solution of a nonlinear ODE initial value problem. We are given the differential equation model, u t (t) = f (t, u(t), ΞΈ), which implicitly defines the derivatives in terms of a Lipschitz continuous function f of the states given ΞΈ and a fixed initial value, u * (a). We model our uncertainty regarding the unique explicit state, u, and its derivative, u t , satisfying the initial value problem, via the joint GP prior measure in equation (8). The following procedure sequentially links the prior on the state with the ODE model defined by f . Consider a discretisation grid made up of N time points We begin by fixing the known initial value, u(s 1 ) := u * (a), and computing the exact derivative f 1 := f (s 1 , u(s 1 ), ΞΈ) at s 1 , via the deterministic ODE model. 
We then update our joint GP prior given the computed exact derivative f 1 , obtaining the conditional predictive distribution for the state at the subsequent grid location s 2 , with, This predictive distribution describes our current uncertainty about the solution at time s 2 . We now sample a realisation, u(s 2 ), of the predictive process, and again link our prior to the deterministic ODE model by computing f 2 := f (s 2 , u(s 2 ), ΞΈ). In contrast to the first time point s 1 , we can no longer guarantee that the realisation at the second time point, u(s 2 ), and its derivative, u t (s 2 ), exactly satisfy the ODE model, i.e. that u t (s 2 ) = f 2 . Therefore, at time s 2 we explicitly model the mismatch between the ODE evaluation f 2 and the process derivative, u t (s 2 ), as, where the magnitude of the mismatch may be described by the variance, C t (s 2 , s 2 ), of the predictive posterior over the derivative given by, For systems in which we believe this mismatch to be strongly non-Gaussian, we may appropriately modify the above model and associated sampling strategy, as described in the Discussion section. We may now update our joint posterior from the previous iteration by conditioning on the augmented vector f 1:2 := [f 1 , f (s 2 , u(s 2 ), ΞΈ)]. We therefore define the matrix Ξ 2Γ2 := diag{0, C t (s 2 , s 2 )} to describe the mismatch between the process derivative and the ODE function evaluations at s 1 and s 2 . The new predictive posterior, As in the previous two steps, we sample the state realisation u(s 3 ) from the above predictive posterior distribution. We then apply the deterministic transformation f , to obtain f 3 := f (s 3 , u(s 3 ), ΞΈ), whose mismatch with the realised derivative u t (s 3 ) is again modelled as, We next augment f 1:3 := [f 1:2 , f 3 ]. The corresponding mismatch matrix therefore has a diagonal structure, Ξ 3Γ3 := diag{Ξ 2Γ2 , C t (s 3 , s 3 )}, where the step-ahead predicted covariance in the derivative space is, The diagonal elements of Ξ, which are step-ahead predictive derivative variances, are non-decreasing, reflecting our growing uncertainty about how well the realised state obeys the ODE model as we take our sample further and further away from the known initial state. It is also shown in the Appendix (Section 8.7) that its elements tend to zero with the step size but at a much faster rate than the step size. The general scheme can be written according to Algorithm 1, and an illustration of this is provided in Figure 1 . Algorithm 1 Sample from the joint posterior distribution of u and f 1:N for an ODE initial value problem given ΞΈ, Ξ¨, N At time s 1 := a, initialise the derivative f 1 := f s 1 , u(s 1 ), ΞΈ for initial state u(s 1 ) := u * (a), and define associated model-derivative mismatch, Ξ 1Γ1 := 0; for n = 1 : N β 1 do Define the predictive state mean and variance, Sample step-ahead realisation u(s n+1 ) from the predictive distribution of the state, Evaluate the ODE model f n+1 := f (s n+1 , u(s n+1 ), ΞΈ) for realisation u(s n+1 ) at the subsequent grid point, s n+1 , and augment the vector f 1:n+1 := [f 1:n , f n+1 ]; Define the predictive derivative variance, and augment the matrix We emphasise that integration of the ODE model proceeds probabilistically via Gaussian process integration, without the use of numerical approximations. 
We obtain a posterior distribution over trajectories u that are governed by the differential equation model over a discrete number of grid points. (Figure 1 caption: Illustration of Algorithm 1 for generating a sample from the joint distribution of derivative observations and possible trajectories with density p(u(t), f_1:N | θ, Ψ). Given two derivative model realisations (red points), we obtain a posterior distribution over the derivative space (top left) and over the state space (bottom middle). A sample is then drawn from the predictive posterior over the states at the next time point s_n (bottom middle), and a model realisation is obtained by mapping u(s_n) to the derivative space via the function f (top middle, rightmost red point). Given these three model evaluations, this procedure may be repeated (bottom right, top right) in an analogous manner.) In the Appendix (Section 8.6), we provide a proof that under certain conditions, the posterior process u(t) obtained via Algorithm 1 tends to the unique solution u*(t) satisfying IVP (1) as the maximum length h between consecutive discretisation grid points tends to zero. We now present a framework to propagate the model discretisation uncertainty characterised in Section 4 through the inverse problem, where we wish to infer model parameters from measurement data. We describe one possible Markov chain Monte Carlo procedure that generates a sample from the joint posterior distribution of the state at the data locations t = [t_1, ..., t_T] and the unknown parameters θ conditional on noisy observations y(t) = G(u(t), θ) + ε(t) of the states. We assume the parameters θ also include any unknown initial conditions, or auxiliary parameters. Once again, for expositional simplicity, we focus on direct observations of states governed by an ODE initial value problem. However, extension to other ODE and PDE problems is straightforward given a probabilistic solution. We provide an application where states are indirectly observed through the nonlinear transformation of states G in Section 6.4. Algorithm 2 targets the posterior density in equation (7) by forward model proposals via conditional simulation from Algorithm 1. This proposal step avoids the need to explicitly calculate the intractable marginal density ∫ p(u(t), f_1:N | θ, Ψ, N) df_1:N, and can be implemented efficiently as described in the Appendix (Section 8.1). Such partially likelihood-free MCMC implementations (Marjoram et al., 2003) are widely used in the area of inference for stochastic differential equations (see, for example, Golightly and Wilkinson, 2011) for simulating sample paths within the inverse problem. Algorithm 2 Draw K samples from the posterior distribution with density p(θ, u(t) | y(t), Ψ): Initialise θ and conditionally sample a realisation of the state u(t) via Algorithm 1; for k = 1 : K do Propose θ' ∼ q(θ' | θ), where q is a proposal density; Sample a probabilistic realisation of the state u'(t) conditioned on θ' via Algorithm 1; Compute the acceptance probability and accept or reject (θ', u'(t)) accordingly. We now have all the components to take a fully Bayesian approach for quantifying uncertainty on differential equation models of natural and physical systems. In this section we use the framework developed in the previous two sections in applications to a wide range of systems, including ODE and PDE boundary value problems and delay initial function problems. As an illustrative example, we demonstrate the use of our probabilistic framework on the heat equation presented in (9). 
We model our uncertainty about the solution through the prior (10) defined over time and space using a product covariance structure as described in Section 3. We use our probabilistic framework to integrate the state over time at each of the spatial discretization grid points, employing a uniform covariance kernel for the time component and a squared exponential covariance kernel for the spatial component. We therefore characterise the discretisation uncertainty in both the time and spatial domains by conditioning on PDE model evaluations corresponding to approximate time derivatives at each discretisation grid location, dependent on sequentially sampled realisations from the predictive posterior distribution over the second order spatial derivatives. We describe the full algorithmic construction in the Appendix (Algorithm 3). In the following numerical simulations, we consider dynamics with ΞΊ = 1 and initial function u * (x, 0) = sin (xΟ) , x β [0, 1]. We firstly consider the probabilistic forward problem using a variety of discretisation grid sizes. In Figure 2 we compare the exact solution with the probabilistic solution on two different grids; a coarse discretisation of 15 points in the spatial domain and 50 points in the temporal domain, and a finer discretisation of 29 points in the spatial domain and 100 points in the temporal domain. We observe from the simulations that as the mesh size becomes finer, the uncertainty in the solution decreases, in agreement with the consistency result (Appendix, Section 8.6), where it is shown that the probabilistic solution should tend to the exact solution as the grid spacing tends to zero. Although this is an illustrative example using a simple toy system, the characterisation of spatial uncertainty is vital for more complex models, where there are computational constraints limiting the number of system evaluations that may be performed. We can see the effect of such discretisation uncertainty by performing posterior inference for the parameter ΞΊ given data simulated from an exact solution with ΞΊ = 1. Figure 3 shows the posterior distribution over ΞΊ obtained by using both a "forward in time, centred in space" (FTCS) finite difference solver and a probabilistic solver under a variety of discretisation grids. The use of a deterministic solver illustrates the problem of inferential bias and overconfident posterior variance that may occur if discretisation uncertainty is not taken into account and too coarse a grid is employed. In this illustrative setting, the use of a probabilistic solver propagates discretisation uncertainty in the solution through to the posterior distribution over the parameters. We obtain parameter estimates that assign positive probability mass to the true value of ΞΊ, even when using a coarsely discretised grid, avoiding the problem of overconfident parameter inferences that exclude the true value. MBVPs introduce challenges for many existing numerical solvers, which typically rely on optimisation and the theory of IVPs to estimate the unspecified initial state, u(a) in (2). The optimisation over u(a) is performed until the corresponding IVP solution satisfies the specified boundary condition u * (b) to within a user specified tolerance. For expositional simplicity, we consider a boundary value problem with one constraint located at each boundary of the domain, namely, v(a), u(b) = v * (a), u * (b) . 
We treat the boundary value, u * (b), as a data point and consider the solution as an inference problem over the unspecified initial value, u(a). The likelihood therefore defines the mismatch between the boundary value u(b), obtained from the realised probabilistic solution given some u(a), with the exact boundary value, u * (b), as follows, where m (u) (b) and C (u) (b, b) are the posterior mean and covariance for state u at time point b obtained from evaluating Algorithm 1. The posterior distribution of the states therefore has density, which exhibits multimodality over the states, as shown on the left side of Figure 4 . While deterministic numerical solvers rely on an ad hoc end point mismatch tolerance, the probabilistic framework naturally Figure 2: We illustrate the probabilistic output of the solution to the heat equation PDE, with ΞΊ = 1, integrated between t = 0 and t = 0.25 using two grid sizes; the coarser mesh (shown in blue) consists of 15 spatial discretisation points and 50 time discretisation points, the finer mesh consists of 29 spatial discretisation points and 100 time discretisation points. We show the spatial posterior predictions at three time points; t = 0.02 (top), t = 0.12 (middle) and t = 0.22 (bottom). The exact solution at each time point is represented by the green line. The error bars show the mean and 2 standard deviations for each of the probabilistic solutions calculated using 50 simulations. Figure 3: We illustrate the inverse problem by performing inference over the parameter ΞΊ in the heat equation, integrated between t = 0 and t = 0.25. We generate data over a grid of 8 spatial discretisation points and 25 time discretisation points by using the exact solution with ΞΊ = 1, then adding noise with standard deviation of 0.005. We firstly use the probabilistic differential equation solver (PODES) using three grid sizes; a coarse mesh consisting of 8 spatial discretisation points and 25 time discretisation points (far left), a finer mesh consisting of 15 spatial discretisation points and 50 time discretisation points (second from left), and a further finer mesh consisting of 29 spatial discretisation points and 100 time discretisation points (second from right). Note the change in scale as the posterior variance decreases with increasing resolution of the discretisation. As an illustrative comparison, we show the posterior distributions using a deterministic forward in time, centred in space (FTCS) integration scheme (far right). If the discretisation is not fine enough, we obtain an overconfident biased posterior that assigns negligible probability mass to the true value of ΞΊ. In contrast, use of the exact solution produces a perfectly unbiased posterior, as expected. Figure 4: 3,000 samples were drawn from the posterior probability density (13) and the trajectories are shown for both states (left, above and below). The marginal posterior probability density over the unknown initial condition u( 1 2 ) is also shown (right). defines that tolerance through the predictive distribution of (12). An important consideration for solving MBVPs probabilistically is to sample efficiently from potentially multimodal posteriors over the initial value. We therefore recommend the use of an appropriate MCMC scheme such as parallel tempering (Geyer, 1991) , which can quickly identify and explore disjoint regions of high posterior probability. We provide one such implementation of parallel tempering in the Appendix (Algorithm 5). 
As a demonstration of probabilistically solving a mixed boundary value problem, we consider a special case the Lane-Emden model, which is used to describe the density u of gaseous spherical objects, such as stars, as a function of its radius t, (Shampine, 2003) . We rewrite the canonical second order ODE as a system of a first order equations with a mixed boundary value, with boundary conditions, u * (1) = β 3/2 and v * ( 1 2 ) = β288/2197. The unknown initial state u * ( 1 2 ) is assigned a diffuse Gaussian prior with mean 1.5 and standard deviation 2 u * (1) β v * ( 1 2 ) , which reflects the possibility that multiple solutions may be present over a wide range of initial states. In this example, we chose the squared exponential covariance to model what we expect to be a very smooth solution. The discretisation grid consists of 100 equally spaced points. The length-scale is set to twice the discretisation grid step size and the prior precision is set to 1. Figure 4 shows a posterior sample from Equation 13 and identifies two high density regions in the posterior corresponding to distinct trajectories that approximately satisfy model dynamics given our discretisation grid. Figure 4 also shows multimodality through the marginal posterior over the unknown initial state u( 1 2 ). This example illustrates the need for accurately modelling solution uncertainty in a functional manner. A numerical approximation, even with corresponding numerical error bounds, will fail to detect the existence of a second solution, whereas a probabilistic approach allows us to determine the number and location of possible solutions. In this case, a probabilistic approach to solving the differential equation problem addresses model bias caused by multiplicity of solutions. 14 We present the following example as a proof of concept that the probabilistic solver may be applied reliably and straightforwardly to a very high-dimensional dynamical system. The Navier-Stokes system is a fundamental model of fluid dynamics, incorporating laws of conservation of mass, energy and linear momentum, as well as physical properties of an incompressible fluid over some domain given constraints imposed along the boundaries. It is an important component of complex models in oceanography, weather, atmospheric pollution, and glacier movement. Despite its extensive use, the dynamics of Navier-Stokes models are poorly understood even at small time scales, where they can give rise to turbulence. We consider the Navier-Stokes PDE model for the time evolution of 2 components of the velocity, u : D β R 2 , of an incompressible fluid on a torus, D := [0, 2Ο, ) Γ [0, 2Ο], expressed in spherical coordinates. The Navier-Stokes boundary value problem is defined by: where β := is the Laplacian operator such that βu = u x2x2 , and is the gradient operator such that βu = u x1 + u (15) is not known in closed form. Often, the quantity of interest is the local spinning motion of the incompressible fluid, called vorticity, which we define as, = ββ Γ u, where β Γ u represents the rotational curl defined as the cross product of β and u, with positive vorticity corresponding to clockwise rotation. This variable will be used to better visualise the probabilistic solution of the Navier-Stokes system by reducing the two components of velocity to a one dimensional function. We discretize the Navier-Stokes model (15) over a grid of size 128 in each spatial dimension. 
Therefore, a pseudo spectral projection in Fourier space yields 16,384 coupled, stiff ODEs with associated constraints. Full details of the pseudo spectral projection are provided to allow full replication of these results and are available on the accompanying website. Figure 6 .3 shows four forward simulated vorticity trajectories (along rows), obtained from two components of velocity governed by the Navier-Stokes equations (15) at four distinct time points (along columns). Slight differences in the state dynamics can be seen at the last time point, where the four trajectories visibly diverge from one another. These differences express the epistemic uncertainty resulting from discretising the exact but unknown infinite dimensional solution. The probabilistic approach for forward simulation of DIFPs is described in Algorithm 6 in the Appendix. In addition to modelling discretisation uncertainty in a structured, functional way, our framework allows us to straightforwardly incorporate uncertainty in the initial function through the forward simulation; indeed, the initial function Ο may iteself only be available at a finite number of nodes. The probabilistic approach quantifies the uncertainty associated with the estimation of Ο(t) and propagates it recursively through the states. Even when Ο(t) is fully specified in advance, the dependence of the current state on previously estimated states impacts the accuracy of the numerical solution using standard solvers, even over the short term. In the following example we account for the uncertainty associated with current and delayed estimates of system states, and fully address these potential sources of bias in the estimation of model parameters. 16 their initial state, returning to the cytoplasm to be used in the next activation cycle. This last stage is not well understood and is proxied in the model by the unknown time delay Ο . The model for this mechanism describes changes in 4 reaction states of STAT-5 through the nonlinear DIFP, The initial function components Ο (2) (t) = Ο (3) (t) = Ο (4) (t) are everywhere zero, while the constant initial function Ο (1) (t) is unknown. The states for this system cannot be measured directly, but are observed through a nonlinear transformation, and parameterised by the unknown scaling factors k 5 and k 6 . The indirect measurements of the states are assumed contaminated with additive zero-mean Gaussian noise, Ξ΅(t), with experimentally determined standard deviations, Our analysis is based on experimental data measured at the locations, t, from Swameye et al. (2003) , which consists of 16 measurements for the first two states of the observation process. Raue et al. (2009) further utilise an additional artificial data point for each of the third and fourth observation process states to deal with lack of parameter identifiability for this system; we therefore adopt this assumption in our analysis. The forcing function, EpoR A : [0, 60] β R + , is not known, but measured at 16 discrete time points t EP O . As per Raue et al. (2009) , we assume that observations are measured without error. We further assume that this function shares the same smoothness as the solution state (piecewise linear first derivative). The full conditional distribution of the forcing function is given by a GP interpolation of the observations. Forward inference for this model will be used within the statistical inverse problem of recovering unknown parameters and first initial state, ΞΈ = [k 1 , . . . 
, k 6 , Ο, u (1) (0)], from experimental data. We demonstrate fully probabilistic inference for state trajectories and parameters of the challenging 4 state delay initial function model (16) describing the dynamics of the JAK-STAT cellular signal transduction pathway (Raue et al., 2009 ). There have been several analyses of the JAK-STAT pathway mechanism based on this data (e.g., Campbell and Chkrebtii, 2013; Raue et al., 2009; Schmidl et al., 2003; Swameye et al., 2003) . Despite the variety of modelling assumptions considered by different authors, as well as distinct inference approaches, some interesting common features have been identified that motivate the explicit modelling of discretisation uncertainty for this application. Firstly, the inaccuracy and computational constraints of numerical techniques required to solve the system equations have led some authors to resort to coarse ODE approximations or even indirectly bypassing numerical solution via Generalised Smoothing. This motivates a formal analysis of the structure and propagation of discretisation error through the inverse problem. A further issue is that the model (16) and its variants suffer from model misspecification. The above studies 17 suggest the model is not flexible enough to fit the available data, however it is not clear how much of this misfit is due to model discrepancy, and how much may be attributed to discretisation error. Our analysis proceeds by defining prior distributions on the unknown parameters as follows, We obtained samples from the posterior distribution of the model parameters, ΞΈ = [k 1 , . . . , k 6 , Ο, u (1) (0)], solution states, u(t, ΞΈ), and auxiliary variables, Ξ¨ given the data y(t) using MCMC. In order to construct a Markov chain that efficiently traverses the parameter space of this multimodal posterior distribution, we employed a parallel tempering sampler (Geyer, 1991) with 10 parallel chains along a uniformly spaced temperature profile over the interval [0.5, 1]. Each probabilistic DIFP simulation was generated using Algorithm 6 under an equally spaced discretisation grid of size N = 500. Full algorithmic details are provided in the Appendix (Algorithm 8). We obtained two groups of posterior samples, each of size 50,000. Within chain convergence was assessed by testing for equality of means between disjoint iteration intervals of the chain (Geweke 1992). Between chain convergence was similarly assessed. Additionally we ensured that the acceptance rate fell roughly within the accepted range of 18%-28% for each of the two parameter blocks, and that the total acceptance rate for moves between any two chains remained roughly within 5%-15%. Correlation plots and kernel density estimates with priors for the marginal parameter posteriors are shown in Figure 6 . All parameters with the exception of the prior precision Ξ± are identified by the data, including the rate parameter k 2 which appears to be only weakly identified. We observe strong correlations between parameters, consistent with previous studies on this system. For example, there is strong correlation among the scaling parameters k 5 , k 6 and the initial first state u (1) (0). Interestingly, there appears to be a correlation between the probabilistic solver's length-scale Ξ» and the first, third and fourth reaction rates. Furthermore, this correlation has a nonlinear structure, where the highest parameter density region seems to change with length scale implying strong sensitivity to the solver specifications. 
The length-scale is the probabilistic analogue, under a bounded covariance, of the step number in a numerical method. However, in analyses based on numerical integration, the choice of a numerical technique effectively fixes this parameter at an arbitrary value that is chosen a priori. Our result here suggests that for this problem, the inferred parameter values are highly and nonlinearly dependent on the choice of the numerical method used. We speculate that this effect may become quite serious for more sensitive systems, such that ignoring discretisation uncertainty within the inverse problem may result in inferential bias. A sample from the marginal posterior of state trajectories and the corresponding observation process are shown in Figure 7. The error bars on the data points show two standard deviations of the measurement error from Swameye et al. (2003). It is immediately clear that our model, which incorporates discretisation uncertainty in the forward problem, still does not fully capture the dynamics of the observed data. This systematic lack of fit suggests the existence of model discrepancy beyond that described by discretisation uncertainty. This paper has presented a probabilistic formalism to describe the structure of approximate solution uncertainty for general systems of differential equations. Rather than providing a single set of discrete function values that approximately satisfy the constraints imposed by the system of differential equations, our approach yields a probability measure over the space of such infinite dimensional functions. (Figure 6 caption: Marginal posterior distribution of the model parameters using probabilistic forward simulation based on a sample of size 50,000, generated using a parallel tempering algorithm with ten chains. Prior probability densities are shown in black.) This is a departure from the existing accepted practice of employing a deterministic numerical integration code to obtain an approximate statistical error model as part of the inference process. This enables adoption of probability calculus for coherently propagating functional uncertainty when solving the statistical inverse problem. We have demonstrated that our probabilistic framework can already be applied to complex and high dimensional systems using the Navier-Stokes example. However, further work is needed to improve computational efficiency to a point where it is comparable with existing implementations of numerical integration codes. In principle, as the probabilistic integration method we have proposed relies on solutions of linear systems, it shares the same algorithmic scaling as most implicit numerical integration codes (e.g. Crank-Nicolson), and it is anticipated that, over time, algorithmic and code development will ensure our methodology attains similar levels of performance, becoming part of the standard toolbox for climate researchers, geoscientists, and engineers. Algorithmic advances and code development will be an exciting and fruitful area of ongoing investigation with high impact. Extension of the methodology to systems that exhibit directional derivative errors is immediate, for example in fluid dynamics applications. Indeed, in Section 4 we have modelled the mismatch between the prior on the derivative and the ODE model as Gaussian. Relaxing this assumption no longer guarantees a closed form representation of the updated prior on the state derivative, however this can be overcome by an additional layer of Monte Carlo sampling within Algorithm 1. 
Such questions would additionally motivate the development of efficient MCMC algorithms in this context. In this work we have been able to directly exploit the structure of the mathematical model within the inference process in a more informed manner than by treating the intractable system of differential equations and associated numerical solver code as a "black box". We may also exploit the intrinsic manifold structure induced by the probabilistic model posterior to increase efficiency of a variety of MCMC methods (Girolami and Calderhead, 2011). As we have already noted, controlling the auxiliary parameters Ξ¨ associated with smoothness of the space of possible trajectories can also add flexibility to MCMC sampling methods. This feature of our probabilistic method allows one to define intermediate target densities for population MCMC methods such as Smooth Functional Tempering (Campbell and Steele, 2011), thereby increasing efficiency when exploring the complex posterior densities arising in applications described by systems of nonlinear ordinary or partial differential equations. The proposed probabilistic approach takes the form of a linear functional projection, allowing direct computation of sensitivities of the system states with respect to parameters. Local sensitivity analysis can be useful in engineering applications, inference, or in aiding optimisation. This would also allow, for example, our probabilistic framework to be incorporated into a classical nonlinear least squares framework (Bates and Watts, 2007) . Fundamentally, the probabilistic approach we advocate allows one to define a formal tradeoff between uncertainty induced by numerical accuracy and the size of the inverse problem, by choice of discretisation grid. This problem is of deep interest in the uncertainty quantification community (see, for example, Arridge et al., 2006; Kaipio et al., 2004) . We may now quantify and compare the individual contributions to overall uncertainty from discretisation, model misspecification, and Monte Carlo error within the probabilistic framework. Our probabilistic model of discretisation uncertainty may be used to guide mesh refinement and indeed mesh design for complex models, in that a predictive distribution is now available, which forms the basis of experiment design approaches to mesh development. Numerical solutions of ODEs and PDEs for complex models also rely on adaptive mesh selection. The probabilistic formalism presented here can now inform mesh selection in a probabilistic way, by optimising a chosen design criterion. Some work in this direction has already been conducted in Chkrebtii (2013) , where the natural Kullback-Leibler divergence criterion is successfully used to adaptively choose the discretisation grid probabilistically. Such an approach was also suggested by Skilling (1991) using the cross-entropy. Inference and prediction for computer experiments (Sacks et al., 1989 ) relies on numerical solutions of large-scale system models. Currently, numerical uncertainty in the model is largely ignored, although it is informally incorporated through a covariance nugget in the emulator (see, for example, Gramacy and Lee, 2012) . Adopting a probabilistic approach on a large scale will have practical implications in this area by permitting relaxation of the error-free assumption adopted when modelling computer code output, leading to more realistic and flexible emulators. 
In modelling uncertainty about an exact but unknown solution, the probabilistic integration formalism may be viewed as providing both an estimate of the exact solution and its associated functional error analysis. Indeed, many existing numerical methods can be interpreted in the context of this general probabilistic framework. This suggests the possibility to both generalise existing numerical solvers and to develop new sampling schemes to complement the probabilistic solutions we describe in this contribution. Chaotic systems arise in modelling a large variety of physical systems, including laser cavities, chemical reactions, fluid motion, crystal growth, weather prediction, and earthquake dynamics (Baker and Gollub, 1996, chapter 7). Extreme sensitivity to small perturbations characterises chaotic dynamics, where the effect of discretisation uncertainty becomes an important contribution to global solution uncertainty. Although the long term behaviour of the system is entrenched in the initial states, the presence of numerical discretisation error results in exponential divergence from the exact solution. Consequently, information about initial states rapidly decays as system solution evolves in time. This insight, demonstrated and explained by (Berliner, 1991) , is showcased in the following example. We consider the classical Lorenz initial value problem (Lorenz, 1963) , a deceptively simple three-state ODE model of convective fluid motion induced by a temperature difference between an upper and lower surface. A sample from the probabilistic posterior of this system is shown in Figure 8 given model parameters in the chaotic regime and a fixed initial state. There is a short time window within which there is negligible uncertainty in the solution, but the accumulation of uncertainty quickly results in divergent, yet highly structured, solutions as the flow is restricted around a chaotic attractor. This example highlights the need for a functional model of discretization uncertainty to replace numerical pointwise error bounds. Within the inverse problem, a probabilistic approach also allows us to quantify the loss of model information as we move away from the initial state. The study of chaotic systems is clearly a very important area of research, and it is anticipated that the probabilistic approach proposed may help to accelerate progress. We now return to an important discussion regarding the class of problems for which probabilistic integration is well suited. As we have seen with inference for the JAK-STAT model, even relatively small discretisation uncertainty can become amplified through the nonlinear forward model and may introduce bias in the estimation of parameters to which the trajectory is highly sensitive. Nevertheless, it may be conjectured that inference for low dimensional, stable systems, without any topological restrictions on the solution may not suffer from significant discretisation effects. However, the benefits of our formulation undoubtedly lie at the frontier of research in uncertainty quantification, dealing with massive nonlinear models, exhibiting complex or even chaotic dynamics, and strong spatial-geometric effects (e.g. subsurface flow models). Indeed, solving such systems approximately is a problem that is still at the edge of current research in numerical analysis. We suggest that a probabilistic approach would provide an important additional set of tools for the study of such systems. 
In conclusion, we hope that our paper will encourage much more research at this exciting interface between mathematical analysis and statistical methodology. | A similar approach is followed in REF , yielding a nonparametric posterior rather than a Gaussian approximation. | 14077995 | Bayesian Solution Uncertainty Quantification for Differential Equations | {
"venue": null,
"journal": "arXiv: Methodology",
"mag_field_of_study": [
"Mathematics"
]
} |
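The row above walks through Algorithm 1 of the cited paper: a joint GP prior on the state and its derivative, sequential conditioning on ODE-model evaluations, and a growing mismatch matrix Lambda. The sketch below is an illustrative reimplementation of that idea, not the authors' code: it places a squared-exponential GP prior directly on a scalar state (rather than the paper's integrated R/Q kernel construction), and the function names, length-scale `ell`, prior variance `var` and jitter are assumptions chosen for the demo.

```python
import numpy as np

def se_blocks(a, b, ell, var):
    """Squared-exponential kernel blocks on grids a x b:
    cov(u(a),u(b)), cov(u(a),u_t(b)), cov(u_t(a),u(b)), cov(u_t(a),u_t(b))."""
    d = a[:, None] - b[None, :]
    k = var * np.exp(-0.5 * d**2 / ell**2)
    return k, k * d / ell**2, -k * d / ell**2, k * (1.0 / ell**2 - d**2 / ell**4)

def sample_trajectory(f, u0, grid, ell=0.3, var=1.0, jitter=1e-6, seed=None):
    """One draw from a GP-based probabilistic solver for u'(t) = f(t, u), u(grid[0]) = u0."""
    rng = np.random.default_rng(seed)
    s = np.asarray(grid, float)
    u = np.zeros_like(s)
    u[0] = u0
    fs = [f(s[0], u0)]          # ODE-model evaluations f_1:n
    lam = [0.0]                 # derivative/model mismatch variances (diagonal of Lambda)
    for n in range(len(s) - 1):
        t1, tn, ts = s[:1], s[:n + 1], s[n + 1:n + 2]
        # joint covariance of the conditioning set z = [u(s_1), f_1:n] (zero prior mean)
        C = np.block([
            [se_blocks(t1, t1, ell, var)[0], se_blocks(t1, tn, ell, var)[1]],
            [se_blocks(tn, t1, ell, var)[2], se_blocks(tn, tn, ell, var)[3] + np.diag(lam)],
        ]) + jitter * np.eye(n + 2)
        z = np.concatenate(([u0], fs))
        # cross-covariances of u(ts) and u_t(ts) with the conditioning set
        cu = np.concatenate([se_blocks(ts, t1, ell, var)[0].ravel(),
                             se_blocks(ts, tn, ell, var)[1].ravel()])
        cd = np.concatenate([se_blocks(ts, t1, ell, var)[2].ravel(),
                             se_blocks(ts, tn, ell, var)[3].ravel()])
        w = np.linalg.solve(C, z)
        m_u = cu @ w                                        # predictive state mean at ts
        v_u = var - cu @ np.linalg.solve(C, cu)             # predictive state variance
        v_d = var / ell**2 - cd @ np.linalg.solve(C, cd)    # predictive derivative variance
        u[n + 1] = m_u + np.sqrt(max(v_u, 0.0)) * rng.standard_normal()
        fs.append(f(s[n + 1], u[n + 1]))   # link the sampled state back to the ODE model
        lam.append(max(v_d, 0.0))          # growing mismatch, as in the Lambda matrix above
    return u

# Example: u'(t) = -2 u with u(0) = 1; repeated draws show discretisation uncertainty.
grid = np.linspace(0.0, 2.0, 21)
draws = np.stack([sample_trajectory(lambda t, x: -2.0 * x, 1.0, grid, seed=k) for k in range(30)])
print(draws.mean(axis=0)[-1], draws.std(axis=0)[-1], np.exp(-2 * grid[-1]))
```

Repeated calls with different seeds give a spread of trajectories whose variability shrinks as the grid is refined, which is the qualitative behaviour the consistency result in the row describes.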
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as "soft targets") achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications. * Alternative | More recently, this idea has been made more efficient by integrating the generative model into the training procedure REF . | 52880246 | Generative replay with feedback connections as a general strategy for continual learning | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
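The abstract above combines generative replay with distillation (class probabilities as soft targets). The snippet below sketches roughly how one such training step could be wired up in PyTorch; it is not the paper's Replay-through-Feedback model, and `generator.sample`, the temperature `T` and the mixing weight `alpha` are assumed placeholder APIs and hyperparameters.

```python
import torch
import torch.nn.functional as F

def replay_distillation_step(model, prev_model, generator, x_new, y_new,
                             optimizer, n_replay=64, T=2.0, alpha=0.5):
    """One step mixing (i) hard-target loss on current-task data with (ii) a
    distillation loss on replayed inputs, using the previous model's softened
    class probabilities as targets. A sketch, not the paper's exact model."""
    model.train()
    optimizer.zero_grad()

    # (i) ordinary cross-entropy on the new task's mini-batch
    loss_new = F.cross_entropy(model(x_new), y_new)

    # (ii) replay: sample pseudo-inputs from a generator trained on past tasks
    with torch.no_grad():
        x_replay = generator.sample(n_replay)            # assumed generator API
        soft_targets = F.softmax(prev_model(x_replay) / T, dim=1)
    log_probs = F.log_softmax(model(x_replay) / T, dim=1)
    loss_replay = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T

    loss = alpha * loss_new + (1.0 - alpha) * loss_replay
    loss.backward()
    optimizer.step()
    return loss.item()
```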
The production environment for analytical data management applications is rapidly changing. Many enterprises are shifting away from deploying their analytical databases on high-end proprietary machines, and moving towards cheaper, lower-end, commodity hardware, typically arranged in a shared-nothing MPP architecture, often in a virtualized environment inside public or private "clouds". At the same time, the amount of data that needs to be analyzed is exploding, requiring hundreds to thousands of machines to work in parallel to perform the analysis. There tend to be two schools of thought regarding what technology to use for data analysis in such an environment. Proponents of parallel databases argue that the strong emphasis on performance and efficiency of parallel databases makes them wellsuited to perform such analysis. On the other hand, others argue that MapReduce-based systems are better suited due to their superior scalability, fault tolerance, and flexibility to handle unstructured data. In this paper, we explore the feasibility of building a hybrid system that takes the best features from both technologies; the prototype we built approaches parallel databases in performance and efficiency, yet still yields the scalability, fault tolerance, and flexibility of MapReduce-based systems. | HadoopDB REF proposes a hybrid system combining a Hadoop deployment and a parallel database system to simultaneously leverage the resilience and scalability of MapReduce together with the performance and efficiency of parallel databases. | 2717398 | HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads | {
"venue": "PVLDB",
"journal": "PVLDB",
"mag_field_of_study": [
"Computer Science"
]
} |
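HadoopDB's core idea is to push relational work down to single-node database engines and combine the partial results MapReduce-style. The toy below mimics that split with in-memory SQLite shards; it is only a conceptual illustration and uses none of HadoopDB's or Hadoop's actual interfaces.

```python
import sqlite3

def make_partition(rows):
    """Each 'node' holds its shard in a local relational engine (SQLite here)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE visits (url TEXT, hits INTEGER)")
    con.executemany("INSERT INTO visits VALUES (?, ?)", rows)
    return con

def map_phase(con):
    # The "map" task is pushed into the local database as SQL, as in the hybrid design.
    return con.execute("SELECT url, SUM(hits) FROM visits GROUP BY url").fetchall()

def reduce_phase(partials):
    totals = {}
    for part in partials:
        for url, hits in part:
            totals[url] = totals.get(url, 0) + hits
    return totals

partitions = [
    make_partition([("/a", 3), ("/b", 1), ("/a", 2)]),
    make_partition([("/b", 5), ("/c", 7)]),
]
print(reduce_phase([map_phase(p) for p in partitions]))  # url -> total hits across shards
```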
In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs off-line and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of the complexity of finding exact solutions to POMDPs and of some possibilities for finding approximate solutions. Consider the problem of a robot navigating in a large office building. The robot can move from hallway intersection to intersection and can make local observations of its world. Its actions are not completely reliable, however. Sometimes, when it intends to move, it stays where it is or goes too far; sometimes, when it intends to turn, it overshoots. It has similar problems with observation. Sometimes a corridor looks like a corner; sometimes a T-junction looks like an L-junction. How can such an error-plagued robot navigate, even given a map of the corridors? In general, the robot will have to remember something about its history of actions and observations and use this information, together with its knowledge of the underlying dynamics of the world (the map and other information), to maintain an estimate of its location. Many engineering applications follow this approach, using methods like the Kalman filter [10] to maintain a running estimate of the robot's spatial uncertainty, expressed as an ellipsoid or normal distribution in Cartesian space. This approach will not do for our robot, though. Its uncertainty may be discrete: it might be almost certain that it is in the north-east corner of either the fourth or the seventh floors, though it admits a chance that it is on the fifth floor, as well. Then, given an uncertain estimate of its location, the robot has to decide what actions to take. In some cases, it might be sufficient to ignore its uncertainty and take actions that would be appropriate for the most likely location. In other cases, it might be better for the robot to take actions for the purpose of gathering information, such as searching for a landmark or reading signs on the wall. In general, it will take actions that fulfill both purposes simultaneously. | A partially observable Markov decision process (POMDP) REF defines the optimal policy for a sequential decision making problem while taking into account uncertain state transitions and partially observable states. | 5613003 | Planning and Acting in Partially Observable Stochastic Domains | {
"venue": "Artif. Intell.",
"journal": null,
"mag_field_of_study": [
"Mathematics",
"Computer Science"
]
} |
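A POMDP agent acts on a belief state that is updated by a Bayes filter after every action/observation pair, which is the mechanism the abstract's robot example appeals to. The snippet shows that standard update on a toy two-state problem; the transition and observation matrices are made-up numbers, not taken from the paper.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Bayes filter for a discrete POMDP.
    b: current belief over states, T[a][s, s']: transition probabilities,
    Z[a][s', o]: observation probabilities. Returns the belief after doing a and seeing o."""
    predicted = b @ T[a]                 # predict: sum_s b(s) P(s' | s, a)
    updated = predicted * Z[a][:, o]     # correct: weight by P(o | s', a)
    return updated / updated.sum()

# Toy 2-state example: the robot is in the "left" or "right" corridor.
T = {"move": np.array([[0.1, 0.9], [0.9, 0.1]]),
     "stay": np.eye(2)}
Z = {"move": np.array([[0.8, 0.2], [0.3, 0.7]]),
     "stay": np.array([[0.8, 0.2], [0.3, 0.7]])}
b = np.array([0.5, 0.5])
b = belief_update(b, "move", 0, T, Z)    # took "move", observed reading 0
print(b)                                 # posterior belief over the two locations
```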
Private information retrieval (PIR) enables a user to retrieve a data item from a database, replicated among one or more servers, while hiding the identity of the retrieved item. This problem was suggested by Chor, Goldreich, Kushilevitz, and Sudan in 1995, and since then efficient protocols with sub-linear communication were suggested. However, in all these protocols the servers' computation for each retrieval is at least linear in the size of the entire database, even if the user requires only a single bit. In this paper, we study the computational complexity of PIR. We show that in the standard PIR model, where the servers hold only the database, linear computation cannot be avoided. To overcome this problem we propose the model of PIR with preprocessing: Before the execution of the protocol each server may compute and store polynomially-many information bits regarding the database; later on, this information should enable the servers to answer each query of the user with more efficient computation. We demonstrate that preprocessing can significantly save work. In particular, we construct for any constants k ≥ 2 and ε > 0: (1) a k-server protocol with O(n^(1/(2k-1))) communication, O(n / log^(2k-2) n) work, and O(n^(1+ε)) storage; (2) a k-server protocol with O(n^(1/k+ε)) communication and work and n^O(1) storage; (3) a computationally-private k-server protocol with O(n^ε) communication, O(n^(1/k+ε)) work, and n^O(1) storage; and (4) a protocol with a polylogarithmic number of servers, polylogarithmic communication and work, and O(n^(1+ε)) storage. On the lower bounds front, we prove that the product of the extra storage used by the servers (i.e., in addition to the length of the database) and the expected amount of work is at least linear in n. Finally, we suggest two alternative models for saving computation, by batching queries and by allowing a separate off-line interaction per future query. | Theoretical work proves that the collective work required by the servers to answer a client's request scales at least linearly with the size of the database REF . | 12258286 | Reducing the servers' computation in private information retrieval: Pir with preprocessing | {
"venue": "In CRYPTO 2000",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
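For contrast with the preprocessing model discussed above, here is the classic two-server, information-theoretic PIR toy in which each server XORs a random subset of database bits: the client learns bit i, neither server on its own learns i, and the linear per-query server work is exactly the cost the cited paper's preprocessing is designed to reduce. This is a generic textbook scheme, not one of the paper's protocols.

```python
import secrets

def query(n, i):
    """Client: build the two query vectors; neither reveals i on its own."""
    s1 = [secrets.randbelow(2) for _ in range(n)]   # uniformly random subset of positions
    s2 = s1.copy()
    s2[i] ^= 1                                      # flip only position i
    return s1, s2

def answer(db_bits, q):
    """Server: XOR of the selected bits; Theta(n) work, matching the lower-bound intuition."""
    acc = 0
    for bit, sel in zip(db_bits, q):
        acc ^= bit & sel
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = query(len(db), i)
print(answer(db, q1) ^ answer(db, q2) == db[i])     # True: client recovers bit i
```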
Abstract We prove that with high probability a skip graph contains a 4-regular expander as a subgraph and estimate the quality of the expansion via simulations. As a consequence, skip graphs contain a large connected component even after an adversarial deletion of nodes. We show how the expansion property can be used to sample a node in the skip graph in a highly efficient manner. We also show that the expansion property can be used to load balance the skip graph quickly. Finally, it is shown that the skip graph could serve as an unstructured P2P system, making it a good candidate for a hybrid P2P system. | It was shown in REF that skip graphs contain expanders as subgraphs w.h.p., which can be used as a randomized expander construction. | 5986866 | The expansion and mixing time of skip graphs with applications | {
"venue": "Distributed Computing",
"journal": "Distributed Computing",
"mag_field_of_study": [
"Computer Science",
"Engineering"
]
} |
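One consequence highlighted in the abstract is that expansion makes random-walk-based node sampling efficient. The sketch below runs short random walks on a stand-in overlay (a ring with random chords); it is not a skip-graph implementation, and the graph, walk length and sample counts are arbitrary choices for illustration.

```python
import random

def random_walk_sample(adj, start, steps, rng):
    """Sample a node by walking `steps` hops; on an expander a short walk mixes quickly,
    and on a (near-)regular graph the stationary distribution is close to uniform."""
    node = start
    for _ in range(steps):
        node = rng.choice(adj[node])
    return node

# Stand-in overlay: a ring with random long-range chords (not an actual skip graph).
n = 64
rng = random.Random(0)
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
for v in range(n):
    w = rng.randrange(n)
    if w != v and w not in adj[v]:
        adj[v].append(w)
        adj[w].append(v)

counts = {}
for _ in range(5000):
    v = random_walk_sample(adj, start=0, steps=20, rng=rng)
    counts[v] = counts.get(v, 0) + 1
print(len(counts), min(counts.values()), max(counts.values()))  # spread of sampled nodes
```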
Abstract. Honest but curious cloud servers can make inferences about the stored encrypted documents and the profile of a user once it knows the keywords queried by her and the keywords contained in the documents. We propose two progressively refined privacy-preserving conjunctive symmetric searchable encryption (PCSSE) schemes that allow cloud servers to perform conjunctive keyword searches on encrypted documents with different privacy assurances. Our scheme generates randomized search queries that prevent the server from detecting if the same set of keywords are being searched by different queries. It is also able to hide the number of keywords in a query as well as the number of keywords contained in an encrypted document. Our searchable encryption scheme is efficient and at the same time it is secure against the adaptive chosen keywords attack. | Later, Moataz et al. REF proposed the Conjunctive Symmetric Searchable Encryption scheme that allows conjunctive keyword search on encrypted documents with different privacy assurances. | 6510158 | Privacy-Preserving Multiple Keyword Search on Outsourced Data in the Clouds | {
"venue": "DBSec",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
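The following toy shows only the interface of an inverted index queried with keyword trapdoors for conjunctive search. It is deliberately simplified and not private: the deterministic HMAC tags leak repeated keywords and result patterns, which is precisely what the paper's PCSSE schemes are designed to hide. All names and the key handling are illustrative assumptions.

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)   # client's secret key (illustrative)

def trapdoor(word):
    """Deterministic keyword tag; real schemes randomize queries to hide repeats."""
    return hmac.new(KEY, word.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    """Server-side index: keyword tag -> set of (opaque) document ids."""
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            index.setdefault(trapdoor(w), set()).add(doc_id)
    return index

def conjunctive_search(index, words):
    result = None
    for t in (trapdoor(w) for w in words):
        ids = index.get(t, set())
        result = ids if result is None else result & ids
    return result or set()

docs = {"d1": ["cloud", "privacy", "search"], "d2": ["cloud", "storage"], "d3": ["privacy", "search"]}
index = build_index(docs)
print(conjunctive_search(index, ["cloud", "search"]))   # {'d1'}
```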
Abstract-For robots to be a part of our daily life, they need to be able to navigate among crowds not only safely but also in a socially compliant fashion. This is a challenging problem because humans tend to navigate by implicitly cooperating with one another to avoid collisions, while heading toward their respective destinations. Previous approaches have used handcrafted functions based on proximity to model human-human and human-robot interactions. However, these approaches can only model simple interactions and fail to generalize for complex crowded settings. In this paper, we develop an approach that models the joint distribution over future trajectories of all interacting agents in the crowd, through a local interaction model that we train using real human trajectory data. The interaction model infers the velocity of each agent based on the spatial orientation of other agents in his vicinity. During prediction, our approach infers the goal of the agent from its past trajectory and uses the learned model to predict its future trajectory. We demonstrate the performance of our method against a state-of-the-art approach on a public dataset and show that our model outperforms when predicting future trajectories for longer horizons. | In REF future trajectories of all interacting agents are modeled by learning social interactions from real data using a Gaussian Process model. | 1491148 | Modeling cooperative navigation in dense human crowds | {
"venue": "2017 IEEE International Conference on Robotics and Automation (ICRA)",
"journal": "2017 IEEE International Conference on Robotics and Automation (ICRA)",
"mag_field_of_study": [
"Engineering",
"Computer Science"
]
} |
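As a baseline contrast to the paper's interacting-Gaussian-process model over all agents, the snippet below fits an ordinary single-agent GP to one pedestrian's past positions and extrapolates with uncertainty; it ignores interactions entirely, and the toy trajectory and kernel choices are assumptions for the demo.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Past observed positions of one pedestrian at times t (toy numbers).
t = np.arange(8, dtype=float).reshape(-1, 1) * 0.4
xy = np.column_stack([0.5 * t.ravel(), 0.2 * t.ravel() + 0.05 * np.sin(t.ravel())])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4))
gp.fit(t, xy)

t_future = np.arange(8, 13, dtype=float).reshape(-1, 1) * 0.4
mean, std = gp.predict(t_future, return_std=True)
print(mean)   # predicted future positions
print(std)    # predictive uncertainty, which a planner could reason over
```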
Many significant challenges exist for the mental health field, but one in particular is a lack of data available to guide research. Language provides a natural lens for studying mental health -much existing work and therapy have strong linguistic components, so the creation of a large, varied, language-centric dataset could provide significant grist for the field of mental health research. We examine a broad range of mental health conditions in Twitter data by identifying self-reported statements of diagnosis. We systematically explore language differences between ten conditions with respect to the general population, and to each other. Our aim is to provide guidance and a roadmap for where deeper exploration is likely to be fruitful. | Coppersmith et al. REF examined a broad range of mental health conditions in Twitter data by identifying self-reported statements of diagnosis, and they took the experimental results as evidence that examining mental health through the lens of language was fertile ground for advances in mental health. | 96824 | From ADHD to SAD: Analyzing the Language of Mental Health on Twitter through Self-Reported Diagnoses | {
"venue": "CLPsych@HLT-NAACL",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
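Identifying self-reported statements of diagnosis is the data-collection step described above. The regex below gives a rough flavour of such a filter; the condition list and phrasing patterns are illustrative guesses, not the rules used in the paper.

```python
import re

# Conditions and phrasings here are illustrative, not the paper's actual lists.
CONDITIONS = ["depression", "anxiety", "ptsd", "bipolar disorder", "ocd", "adhd", "schizophrenia"]
PATTERN = re.compile(
    r"\bi\s+(?:was|am|have\s+been|'ve\s+been|got)\s+(?:just\s+)?diagnosed\s+with\s+(?P<cond>"
    + "|".join(re.escape(c) for c in CONDITIONS) + r")\b",
    re.IGNORECASE,
)

def self_reported_diagnosis(tweet):
    m = PATTERN.search(tweet)
    return m.group("cond").lower() if m else None

print(self_reported_diagnosis("Today I was diagnosed with PTSD, wish me luck"))   # 'ptsd'
print(self_reported_diagnosis("My friend was diagnosed with depression"))         # None
```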
Abstract-We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) if nonlinear correlation subspace are desirable. Such techniques not only makes KCCA computationally more efficient, potential over-fitting problems can be alleviated as well. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate the exploitation of domain transfer ability in this subspace, in which each dimension has a unique capability in associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates the domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that our proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance. | Inspired by the Canonical Correlation Analysis (CCA), the authors of REF utilize a correlation subspace as a joint representation for associating the data across different domains. | 13828880 | Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace | {
"venue": "IEEE Transactions on Image Processing",
"journal": "IEEE Transactions on Image Processing",
"mag_field_of_study": [
"Mathematics",
"Medicine",
"Computer Science"
]
} |
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2×, while keeping the accuracy within 1% of the original model. | Inspired by this technique, a low rank approximation for the convolution layers achieves twice the speed while staying within 1% of the original model in terms of accuracy REF . | 7340116 | Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
We study the problem of verifiable delegation of computation over outsourced data, whereby a powerful worker maintains a large data structure for a weak client in a verifiable way. Compared to the well-studied problem of verifiable computation, this setting imposes additional difficulties since the verifier also needs to check the consistency of updates succinctly and without maintaining large state. We present a scheme for verifiable evaluation of hierarchical set operations (unions, intersections and set-differences) applied to a collection of dynamically changing sets of elements from a given domain. The verification cost incurred is proportional only to the size of the final outcome set and to the size of the query, and is independent of the cardinalities of the involved sets. The cost of updates is optimal (involving O(1) modular operations per update). Our construction extends that of [Papamanthou et al., CRYPTO 2011] and relies on a modified version of the extractable collision-resistant hash function (ECRH) construction, introduced in [Bitansky et al., ITCS 2012] that can be used to succinctly hash univariate polynomials. | Canetti et al. extended it by allowing data updates and monolithic verification of hierarchical set operations REF . | 8750990 | Verifiable Set Operations over Outsourced Databases | {
"venue": "Public Key Cryptography",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract. We introduce a shape detection framework called Contour Context Selection for detecting objects in cluttered images using only one exemplar. Shape based detection is invariant to changes of object appearance, and can reason with geometrical abstraction of the object. Our approach uses salient contours as integral tokens for shape matching. We seek a maximal, holistic matching of shapes, which checks shape features from a large spatial extent, as well as long-range contextual relationships among object parts. This amounts to finding the correct figure/ground contour labeling, and optimal correspondences between control points on/around contours. This removes accidental alignments and does not hallucinate objects in background clutter, without negative training examples. We formulate this task as a set-to-set contour matching problem. Naive methods would require searching over 'exponentially' many figure/ground contour labelings. We simplify this task by encoding the shape descriptor algebraically in a linear form of contour figure/ground variables. This allows us to use the reliable optimization technique of Linear Programming. We demonstrate our approach on the challenging task of detecting bottles, swans and other objects in cluttered images. | In REF , Zhu et al. formulate the shape matching of contour in clutter as a set to set matching problem, and present an approximate solution to the hard combinatorial problem by using a voting scheme. | 9878393 | Contour Context Selection for Object Detection : A Set-to-Set Contour Matching Approach | {
"venue": "ICPR (3)",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Researchers studying daily life mobility patterns have recently shown that humans are typically highly predictable in their movements. However, no existing work has examined the boundaries of this predictability, where human behaviour transitions temporarily from routine patterns to highly unpredictable states. To address this shortcoming, we tackle two interrelated challenges. First, we develop a novel information-theoretic metric, called instantaneous entropy, to analyse an individual's mobility patterns and identify temporary departures from routine. Second, to predict such departures in the future, we propose the first Bayesian framework that explicitly models breaks from routine, showing that it outperforms current state-of-the-art predictors. | A recent study REF proposes methods to identify and predict departures from routine in individual mobility using information-theoretic metrics, such as the instantaneous entropy, and developing a Bayesian framework that explicitly models the tendency of individuals to break from routine. | 624605 | Breaking the habit: Measuring and predicting departures from routine in individual human mobility | {
"venue": "Pervasive Mob. Comput.",
"journal": "Pervasive Mob. Comput.",
"mag_field_of_study": [
"Computer Science"
]
} |
Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4, 262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28] . In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1, 000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. | In REF , the authors analysed the entire Twitter graph in order to assess its topological characteristics. | 207178765 | What is Twitter, a social network or a news media? | {
"venue": "WWW '10",
"journal": null,
"mag_field_of_study": [
"Computer Science",
"Political Science"
]
} |
We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3% on a test set, which indicates a great potential for practical use. | These representation learning based methods do not rely on any linguistic tools and can be applied to different languages or domains REF . | 3477924 | Applying deep learning to answer selection: A study and an open task | {
"venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"journal": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract -As the web is expanding day by day and people generally rely on the web for communication, e-mails are the fastest way to send information from one place to another. Nowadays all transactions and all communication, whether general or business, take place through e-mails. E-mail is an effective tool for communication as it saves a lot of time and cost. But emails are also affected by attacks, which include spam mails. Spam is the use of electronic messaging systems to send bulk data. Spam is flooding the Internet with many copies of the same message, in an attempt to force the message on people who would not otherwise choose to receive it. In this study, we analyze various data mining approaches on a spam dataset in order to find out the best classifier for email classification. In this paper we analyze the performance of various classifiers with and without a feature selection algorithm. Initially we experiment with the entire dataset without selecting features, apply classifiers one by one and check the results. Then we apply the Best-First feature selection algorithm in order to select the desired features and then apply various classifiers for classification. In this study it has been found that results are improved in terms of accuracy when we embed the feature selection process in the experiment. Finally we found Random Tree to be the best classifier for spam mail classification with accuracy = 99.72%. Still, none of the algorithms achieves 100% accuracy in classifying spam emails, but Random Tree comes very close to that. | Megha Rathi and Vikas Pareek (2013) performed an analysis of spam email detection through Data Mining, evaluating classifiers with and without feature selection REF . | 35670502 | Spam Mail Detection through Data Mining - A Comparative Performance Analysis | {
"venue": null,
"journal": "International Journal of Modern Education and Computer Science",
"mag_field_of_study": [
"Computer Science"
]
} |
In this paper, we consider a two-user mobile-edge computing (MEC) network, where each wireless device (WD) has a sequence of tasks to execute. In particular, we consider task dependency between the two WDs, where the input of a task at one WD requires the final task output at the other WD. Under the considered task-dependency model, we study the optimal task offloading policy and resource allocation (on offloading transmit power and local CPU frequencies) that minimize the weighted sum of the WDs' energy consumption and execution time. The problem is challenging due to the combinatorial nature of the offloading decision among all tasks and the strong coupling with resource allocation among subsequent tasks. When the offloading decision is given, we obtain the closed-form expressions of the offloading transmit power and local CPU frequencies and propose an efficient method to obtain the optimal solutions. Furthermore, we prove that the optimal offloading decision follows an one-climb policy, based on which a reduced-complexity algorithm is proposed to obtain the optimal offloading decision in polynomial time. Numerical results validate the effectiveness of our proposed methods. | When the computing tasks at different MUs have inputoutput dependency, REF studies the optimal binary offloading strategy and resource allocation that minimizes the computation delay and energy consumption. | 53082484 | Optimal Offloading and Resource Allocation in Mobile-Edge Computing with Inter-User Task Dependency | {
"venue": "2018 IEEE Global Communications Conference (GLOBECOM)",
"journal": "2018 IEEE Global Communications Conference (GLOBECOM)",
"mag_field_of_study": [
"Computer Science"
]
} |
We introduce a model for information spreading among a population of N agents diffusing on a square L × L lattice, starting from an informed agent (Source). Information passing from informed to unaware agents occurs whenever the relative distance is ≤ 1. Numerical simulations show that the time required for the information to reach all agents scales as N^{-α} L^{β}, where α and β are noninteger. A decay factor z takes into account the degeneration of information as it passes from one agent to another; the final average degree of information of the population, Iav(z), is thus history-dependent. We find that the behavior of Iav(z) is non-monotonic with respect to N and L and displays a set of minima. Part of the results are recovered with analytical approximations. | Agliari et al. studied information spreading in a population of diffusing agents REF . | 17928479 | Efficiency of information spreading in a population of diffusing agents | {
"venue": "Physical review. E, Statistical, nonlinear, and soft matter physics",
"journal": "Physical review. E, Statistical, nonlinear, and soft matter physics",
"mag_field_of_study": [
"Medicine",
"Physics"
]
} |
The combined advances of open mobile platforms and online social networking applications (SNAs) are driving pervasive computing to the real-world users, as the mobile SNAs are expected to revolutionize wireless application industry. While sharing location through mobile SNAs is useful for information access and user interactions, privacy issues must be addressed at the design levels of mobile SNAs. In this paper, we survey mobile SNAs available today and we analyze their privacy designs using feedback and control framework on information capture, construction, accessibility, and purposes. Our analysis results suggest that today's mobile SNAs need better privacy protection on construction and accessibility, to handle increasingly popular mashups between different SNA sites. We also identify two unexpected privacy breaches and suggest three potential location misuse scenarios using mobile SNAs. | Research in REF studies the privacy of mobile social network application design. | 2250582 | Analyzing Privacy Designs of Mobile Social Networking Applications | {
"venue": "2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing",
"journal": "2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing",
"mag_field_of_study": [
"Computer Science"
]
} |
ABSTRACT In this paper, we study radio frequency energy harvesting (EH) in a wireless sensor network in the presence of multiple eavesdroppers (EAVs). Specifically, the sensor source and multiple sensor relays harvest energy from multiple power transfer stations (PTSs), and then, the source uses this harvested energy to transmit information to the base station (BS) with the help of the relays. During the transmission of information, the BS typically faces a risk of losing information due to the EAVs. Thus, to enhance the secrecy of the considered system, one of the relays acts as a jammer, using harvested energy to generate interference with the EAVs. We propose a best-relay-and-best-jammer scheme for this purpose and compare this scheme with other previous schemes. The exact closed-form expression for the secrecy outage probability (SOP) is obtained and is validated through Monte Carlo simulations. A near-optimal EH time algorithm is also proposed. In addition, the effects on the SOP of key system parameters such as the EH efficiency coefficient, the EH time, the distance between the relay and BS, the number of PTSs, the number of relays, and the number of EAVs are investigated. The results indicate that the proposed scheme generally outperforms both the best-relay-and-random-jammer scheme and the random-relay-and-best-jammer scheme in terms of the secrecy capacity. INDEX TERMS Energy harvesting, wireless sensor networks, relay networks, friendly jammer, physical layer security. | The results indicated that the proposed scheme outperformed both the best-relayand-random-jammer scheme and the random-relay-and-bestjammer scheme in terms of secrecy performance REF . | 21709243 | Secrecy Outage Performance Analysis for Energy Harvesting Sensor Networks With a Jammer Using Relay Selection Strategy | {
"venue": "IEEE Access",
"journal": "IEEE Access",
"mag_field_of_study": [
"Computer Science"
]
} |
QoS-aware dynamic binding of composite services provides the capability of binding each service invocation in a composition to a service chosen among a set of functionally equivalent ones to achieve a QoS goal, for example minimizing the response time while limiting the price under a maximum value. This paper proposes a QoS-aware binding approach based on Genetic Algorithms. The approach includes a feature for early run-time re-binding whenever the actual QoS deviates from initial estimates, or when a service is not available. The approach has been implemented in a framework and empirically assessed through two different service compositions. | REF have proposed an approach based on genetic algorithms. | 181777 | A Framework for QoS-Aware Binding and Re-Binding of Composite Web Services | {
"venue": "J. Syst. Softw.",
"journal": "J. Syst. Softw.",
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-Fog computing is seen as a promising approach to perform distributed, low-latency computation for supporting Internet of Things applications. However, due to the unpredictable arrival of available neighboring fog nodes, the dynamic formation of a fog network can be challenging. In essence, a given fog node must smartly select the set of neighboring fog nodes that can provide low-latency computations. In this paper, this problem of fog network formation and task distribution is studied considering a hybrid cloud-fog architecture. The goal of the proposed framework is to minimize the maximum computational latency by enabling a given fog node to form a suitable fog network, under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel approach based on the online secretary framework is proposed. To find the desired set of neighboring fog nodes, an online algorithm is developed to enable a task initiating fog node to decide on which other nodes can be used as part of its fog network, to offload computational tasks, without knowing any prior information on the future arrivals of those other nodes. Simulation results show that the proposed online algorithm can successfully select an optimal set of neighboring fog nodes while achieving a latency that is as small as the one resulting from an ideal, offline scheme that has complete knowledge of the system. The results also show how, using the proposed approach, the computational tasks can be properly distributed between the fog network and a remote cloud server. | To provide fog computing services, the formation of fog networks is studied in REF . | 15363899 | An online secretary framework for fog network formation with minimal latency | {
"venue": "2017 IEEE International Conference on Communications (ICC)",
"journal": "2017 IEEE International Conference on Communications (ICC)",
"mag_field_of_study": [
"Mathematics",
"Computer Science"
]
} |
Abstract-With the introduction of network function virtualization technology, migrating entire enterprise data centers into the cloud has become a possibility. However, for a cloud service provider (CSP) to offer such services, several research problems still need to be addressed. In previous work, we have introduced a platform, called network function center (NFC), to study research issues related to virtualized network functions (VNFs). In an NFC, we assume VNFs to be implemented on virtual machines that can be deployed in any server in the CSP network. We have proposed a resource allocation algorithm for VNFs based on genetic algorithms (GAs). In this paper, we present a comprehensive analysis of two resource allocation algorithms based on GA for: 1) the initial placement of VNFs and 2) the scaling of VNFs to support traffic changes. We compare the performance of the proposed algorithms with a traditional integer linear programming resource allocation technique. We then combine data from previous empirical analyses to generate realistic VNF chains and traffic patterns, and evaluate the resource allocation decision making algorithms. We assume different architectures for the data center, implement different fitness functions with GA, and compare their performance when scaling over time. | Rankothge et al. REF have presented a resource allocation algorithm for VNFs based on genetic algorithms (GAs). | 28867567 | Optimizing Resource Allocation for Virtualized Network Functions in a Cloud Center Using Genetic Algorithms | {
"venue": "IEEE Transactions on Network and Service Management",
"journal": "IEEE Transactions on Network and Service Management",
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the humanobject interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2] , and a new human-object interaction data set. | A humancentric approach is proposed by REF that works by first localizing a human and then finding an object and its relationship to it. | 1819788 | Weakly Supervised Learning of Interactions between Humans and Objects | {
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"mag_field_of_study": [
"Computer Science",
"Medicine"
]
} |
Users organize themselves into communities on web platforms. These communities can interact with one another, often leading to conflicts and toxic interactions. However, little is known about the mechanisms of interactions between communities and how they impact users. Here we study intercommunity interactions across 36,000 communities on Reddit, examining cases where users of one community are mobilized by negative sentiment to comment in another community. We show that such conflicts tend to be initiated by a handful of communities-less than 1% of communities start 74% of conflicts. While conflicts tend to be initiated by highly active community members, they are carried out by significantly less active members. We find that conflicts are marked by formation of echo chambers, where users primarily talk to other users from their own community. In the long-term, conflicts have adverse effects and reduce the overall activity of users in the targeted communities. Our analysis of user interactions also suggests strategies for mitigating the negative impact of conflicts-such as increasing direct engagement between attackers and defenders. Further, we accurately predict whether a conflict will occur by creating a novel LSTM model that combines graph embeddings, user, community, and text features. This model can be used to create early-warning systems for community moderators to prevent conflicts. Altogether, this work presents a data-driven view of community interactions and conflict, and paves the way towards healthier online communities. | Kumar et al. REF introduced a method that employs graph embeddings, where the graph captures user interactions on Reddit 1 , in conjunction with user, community, and text features to predict potential conflict and subsequent community mobilization. | 3854959 | Community Interaction and Conflict on the Web | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
This paper continues the rather recent line of research on the dynamics of non-monotonic formalisms. In particular, we consider semantic changes in Dung's abstract argumentation formalism. One of the most studied problems in this context is the so-called enforcing problem which is concerned with manipulating argumentation frameworks (AFs) such that a certain desired set of arguments becomes an extension. Here we study the inverse problem, namely the extension removal problem: is it possible -and if so how -to modify a given argumentation framework in such a way that certain undesired extensions are no longer generated? Analogously to the well known AGM paradigm we develop an axiomatic approach to the removal problem, i.e. a certain set of axioms will determine suitable manipulations. Although contraction (that is, the elimination of a particular belief) is conceptually quite different from extension removal, there are surprisingly deep connections between the two: it turns out that postulates for removal can be directly obtained as reformulations of the AGM contraction postulates. We prove a series of formal results including conditional and unconditional existence and semantical uniqueness of removal operators as well as various impossibility results -and show possible ways out. | Quite recently, in REF ) the so-called extension removal problem was studied. | 54053953 | Extension Removal in Abstract Argumentation -- An Axiomatic Approach | {
"venue": "AAAI",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparable to state-of-art feature based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast. The deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation. | REF introduced an end-to-end memory network for aspect level sentiment classification, which employs an attention mechanism over an external memory to capture the importance of each context word. | 359042 | Aspect Level Sentiment Classification with Deep Memory Network | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables. | Borchani et al. REF further propose a Markov blanket-based approach to train the MBCs from multioutput data. | 205713058 | Markov blanket-based approach for learning multi-dimensional Bayesian network classifiers: An application to predict the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) | {
"venue": "Journal of biomedical informatics",
"journal": "Journal of biomedical informatics",
"mag_field_of_study": [
"Computer Science",
"Medicine"
]
} |
Background: Cloud computing is a unique paradigm that is aggregating resources available from cloud service providers for use by customers on an on-demand and pay-per-use basis. There is a Cloud federation that integrates the four primary Cloud models and the Cloud aggregator that integrates multiple computing services. A systematic mapping study provides an overview of work done in a particular field of interest and identifies gaps for further research. Objectives: The objective of this paper was to conduct a study of deployment and design models for the Cloud using a systematic mapping process. The methodology involves examining core aspects of the field of study using the research, contribution and topic facets. The results obtained indicated that there were more publications on solution proposals, which constituted 41.98% of papers relating to design and deployment models on the Cloud. Out of this, 5.34% was on security, 1.5% on privacy, 6.11% on configuration, 7.63% on implementation, 11.45% on service deployment, and 9.92% of the solution proposals were on design. The results obtained will be useful for further studies by academia and industry in this broad topic that was examined. | REF is a systematic mapping study of cloud designs and deployment models. | 198134639 | Cloud designs and deployment models: a systematic mapping study | {
"venue": "BMC Research Notes",
"journal": "BMC Research Notes",
"mag_field_of_study": [
"Medicine"
]
} |
To accurately match records it is often necessary to utilize the semantics of the data. Functional dependencies (FDs) have proven useful in identifying tuples in a clean relation, based on the semantics of the data. For all the reasons that FDs and their inference are needed, it is also important to develop dependencies and their reasoning techniques for matching tuples from unreliable data sources. This paper investigates dependencies and their reasoning for record matching. (a) We introduce a class of matching dependencies (MDs) for specifying the semantics of data in unreliable relations, defined in terms of similarity metrics and a dynamic semantics. (b) We identify a special case of MDs, referred to as relative candidate keys (RCKs), to determine what attributes to compare and how to compare them when matching records across possibly different relations. (c) We propose a mechanism for inferring MDs, a departure from traditional implication analysis, such that when we cannot match records by comparing attributes that contain errors, we may still find matches by using other, more reliable attributes. (d) We provide an O(n 2 ) time algorithm for inferring MDs, and an effective algorithm for deducing a set of RCKs from MDs. (e) We experimentally verify that the algorithms help matching tools efficiently identify keys at compile time for matching, blocking or windowing, and that the techniques effectively improve both the quality and efficiency of various record matching methods. | Reasoning mechanism for deducing MDs from a set of given MDs is studied in REF . | 2690556 | Reasoning about Record Matching Rules | {
"venue": "PVLDB",
"journal": "PVLDB",
"mag_field_of_study": [
"Computer Science"
]
} |
The existing watermarking schemes were designed to embed the watermark information into the original images, which are vulnerable to unauthorized access. In this paper, we have proposed a novel and feasible watermarking algorithm in the encrypted domain. First, we encrypted both original medical image and watermark image by using DCT and Logistic map; Then we embedded watermark into the encrypted medical image. In watermarking embedding and extraction phase, zerowatermarking technique has been utilized to ensure the integrity of medical images. At the end of the paper, we compared the robustness of watermarking algorithm between the unencrypted and encrypted approaches. Results demonstrate that in encrypted domain the proposed algorithm is not only robust against common image process such as Gaussian noise, JPEG compression, median filtering, but can also withstand levels of geometric distortions, which can be utilized in information protection of both original medical images and the watermark images. | Dong et al. proposed a novel and feasible watermarking algorithm in the encrypted domain by using DCT REF . | 27562386 | A Robust Watermarking Algorithm for Encrypted Medical Images Based on DCT Encrypted Domain | {
"venue": "Proceedings of the 2015 International Conference on Electronic Science and Automation Control",
"journal": "Proceedings of the 2015 International Conference on Electronic Science and Automation Control",
"mag_field_of_study": [
"Computer Science"
]
} |
We describe an algorithm, IsoRank, for global alignment of two protein-protein interaction (PPI) networks. IsoRank aims to maximize the overall match between the two networks; in contrast, much of previous work has focused on the local alignment problem-identifying many possible alignments, each corresponding to a local region of similarity. IsoRank is guided by the intuition that a protein should be matched with a protein in the other network if and only if the neighbors of the two proteins can also be well matched. We encode this intuition as an eigenvalue problem, in a manner analogous to Google's PageRank method. We use IsoRank to compute the first known global alignment between the S. cerevisiae and D. melanogaster PPI networks. The common subgraph has 1420 edges and describes conserved functional components between the two species. Comparisons of our results with those of a well-known algorithm for local network alignment indicate that the globally optimized alignment resolves ambiguity introduced by multiple local alignments. Finally, we interpret the results of global alignment to identify functional orthologs between yeast and fly; our functional ortholog prediction method is much simpler than a recently proposed approach and yet provides results that are more comprehensive. Computational analyses of these networks have already yielded valuable insights: the scale-free character of these networks and the disproportionate importance of "hub" proteins [30]; the combination of these networks with gene expression data to discern some of the dynamic character of the cell [8]; the use of PPI networks for inferring biological function [20], etc. As more PPI data becomes available, comparative analysis of PPI networks (across species) is proving to be a valuable tool. Such analysis is similar in spirit to traditional sequence-based comparative genomic analyses; it also promises commensurate insights. Such an analysis can identify conserved functional components across species [15]. As a phylogenetic tool, it offers a function-oriented perspective that complements traditional sequence-based methods. It also facilitates annotation transfer between species. Indeed, Bandyopadhyay et al. [3] have demonstrated that the use of PPI networks in computing orthologs produces orthology mappings that better conserve protein function across species. In this paper, we explore a new approach to comparative analysis of PPI networks. Specifically, we consider the problem of finding the optimal global alignment between two PPI networks, aiming to find a correspondence between nodes and edges of the input networks that maximizes the overall match between the two networks. For this problem, we propose a novel pairwise global alignment algorithm, IsoRank. | In REF , Singh et al. present IsoRank, an algorithm for pairwise global alignment of PPI networks aiming at finding a correspondence between nodes and edges of the input networks that maximizes the overall match between the two networks. | 6304597 | Pairwise global alignment of protein interaction networks by matching neighborhood topology | {
"venue": "RECOMB",
"journal": null,
"mag_field_of_study": [
"Mathematics",
"Computer Science"
]
} |
Research has shown that computers are notoriously bad at supporting the management of parallel activities and interruptions, and that mobility increases the severity and scope of these problems. This paper presents activity-based computing (ABC) which supplements the prevalent data-and application-oriented computing paradigm with technologies for handling multiple, parallel and mobile work activities. We present the design and implementation of ABC support embedded in the Windows XP operating system. This includes replacing the Windows Taskbar with an Activity Bar, support for handling Windows applications, a zoomable user interface, and support for moving activities across different computers. We report an evaluation of this Windows XP ABC system which is based on a multi-method approach, where perceived ease-of-use and usefulness was evaluated together with rich interview material. This evaluation showed that users found the ABC XP extension easy to use and likely to be useful in their own work. | Another group, based at the University of Aarhus in Denmark REF , has trialled an extension to the Windows XP operating system for Activity Based Computing (ABC). | 9648288 | Support for activity-based computing in a personal computing operating system | {
"venue": "CHI",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-In this paper, fault tolerance in a multi-radio network is discussed. Fault tolerance is achieved using the BeeHive routing algorithm. The paper discusses faults added to the system as random fluctuations in hardware radio operation. The multi-radio nodes are designed using WiMAX and WiFi Radios that work in conjunction using traffic splitting to transfer data across a multi-hop network. During the operation of this network random faults are introduced by turning off certain radios in nodes. The paper discusses fault tolerance as applied to multi radio nodes that use traffic splitting in the transmission of data. We also propose a method to handle random faults in hardware radios by using traffic splitting and combining it with the BeeHive routing algorithm. | Kiran K et al (2014) REF discusses fault tolerance in a multi-radio network. | 22260439 | Fault tolerant BeeHive routing in mobile ad-hoc multi-radio network | {
"venue": "2014 IEEE REGION 10 SYMPOSIUM",
"journal": "2014 IEEE REGION 10 SYMPOSIUM",
"mag_field_of_study": [
"Engineering"
]
} |
Current approaches for semantic parsing take a supervised approach requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% of its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task. | It is also possible to self-train a semantic parser without any labeled data REF . | 9111381 | Confidence Driven Unsupervised Semantic Parsing | {
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract. We present a novel method to generate accurate and realistic clothing deformation from real data capture. Previous methods for realistic cloth modeling mainly rely on intensive computation of physicsbased simulation (with numerous heuristic parameters), while models reconstructed from visual observations typically suffer from lack of geometric details. Here, we propose an original framework consisting of two modules that work jointly to represent global shape deformation as well as surface details with high fidelity. Global shape deformations are recovered from a subspace model learned from 3D data of clothed people in motion, while high frequency details are added to normal maps created using a conditional Generative Adversarial Network whose architecture is designed to enforce realism and temporal consistency. This leads to unprecedented high-quality rendering of clothing deformation sequences, where fine wrinkles from (real) high resolution observations can be recovered. In addition, as the model is learned independently from body shape and pose, the framework is suitable for applications that require retargeting (e.g., body animation). Our experiments show original high quality results with a flexible model. We claim an entirely data-driven approach to realistic cloth wrinkle generation is possible. | REF takes a hybrid approach combining a statistical model for pose-based global deformation with a conditional generative adversarial network for adding details on normal maps to produce finer wrinkles. | 51967305 | DeepWrinkles: Accurate and Realistic Clothing Modeling | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
Patient genomes are interpretable only in the context of other genomes; however, genome sharing enables discrimination. Thousands of monogenic diseases have yielded definitive genomic diagnoses and potential gene therapy targets. Here we show how to provide such diagnoses while preserving participant privacy through the use of secure multiparty computation. In multiple real scenarios (small patient cohorts, trio analysis, two-hospital collaboration), we used our methods to identify the causal variant and discover previously unrecognized disease genes and variants while keeping up to 99.7% of all participants' most sensitive genomic information private. | Another protocol was proposed to provide genomic diagnoses while preserving participant privacy REF . | 206658219 | Deriving genomic diagnoses without revealing patient genomes | {
"venue": "Science",
"journal": "Science",
"mag_field_of_study": [
"Medicine",
"Biology"
]
} |
Abstract-Recently, a number of existing blockchain systems have witnessed major bugs and vulnerabilities within smart contracts. Although the literature features a number of proposals for securing smart contracts, these proposals mostly focus on proving the correctness or absence of a certain type of vulnerability within a contract, but cannot protect deployed (legacy) contracts from being exploited. In this paper, we address this problem in the context of re-entrancy exploits and propose a novel smart contract security technology, dubbed Sereum (Secure Ethereum), which protects existing, deployed contracts against re-entrancy attacks in a backwards compatible way based on run-time monitoring and validation. Sereum does neither require any modification nor any semantic knowledge of existing contracts. By means of implementation and evaluation using the Ethereum blockchain, we show that Sereum covers the actual execution flow of a smart contract to accurately detect and prevent attacks with a false positive rate as small as 0.06% and with negligible runtime overhead. As a by-product, we develop three advanced reentrancy attacks to demonstrate the limitations of existing offline vulnerability analysis tools. | Another interesting example is the Sereum REF architecture, a hardened EVM which is able to protect deployed contracts against re-entrancy attacks in a backward compatible way by leveraging taint tracking to monitor runtime behaviors of smart contracts. | 55701989 | Sereum: Protecting Existing Smart Contracts Against Re-Entrancy Attacks | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract. Motivated by the problem of avoiding duplication in storage systems, Bellare, Keelveedhi, and Ristenpart have recently put forward the notion of Message-Locked Encryption (MLE) schemes which subsumes convergent encryption and its variants. Such schemes do not rely on permanent secret keys, but rather encrypt messages using keys derived from the messages themselves. We strengthen the notions of security proposed by Bellare et al. by considering plaintext distributions that may depend on the public parameters of the schemes. We refer to such inputs as lock-dependent messages. We construct two schemes that satisfy our new notions of security for message-locked encryption with lock-dependent messages. Our main construction deviates from the approach of Bellare et al. by avoiding the use of ciphertext components derived deterministically from the messages. We design a fully randomized scheme that supports an equality-testing algorithm defined on the ciphertexts. Our second construction has a deterministic ciphertext component that enables more efficient equality testing. Security for lock-dependent messages still holds under computational assumptions on the message distributions produced by the attacker. In both of our schemes the overhead in the length of the ciphertext is only additive and independent of the message length. | Then Abadi et al. REF designed a randomized scheme to avoid the ciphertext derived from the message. | 1315646 | Message-locked encryption for lock-dependent messages | {
"venue": "In Advances in CryptologyβCRYPTO 2013",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Virtual humans need to be persuasive in order to promote behaviour change in human users. While several studies have focused on understanding the numerous aspects that influence the degree of persuasion, most of them are limited to dyadic interactions. In this paper, we present an evaluation study focused on understanding the effects of multiple agents on user's persuasion. Along with gender and status (authoritative & peer), we also look at type of focus employed by the agent i.e., user-directed where the agent aims to persuade by addressing the user directly and vicarious where the agent aims to persuade the user, who is an observer, indirectly by engaging another agent in the discussion. Participants were randomly assigned to one of the 12 conditions and presented with a persuasive message by one or several virtual agents. A questionnaire was used to measure perceived interpersonal attitude, credibility and persuasion. Results indicate that credibility positively affects persuasion. In general, multiple agent setting, irrespective of the focus, was more persuasive than single agent setting. Although, participants favored user-directed setting and reported it to be persuasive and had an increased level of trust in the agents, the actual change in persuasion score reflects that vicarious setting was the most effective in inducing behaviour change. In addition to this, the study also revealed that authoritative agents were the most persuasive. | An early evaluation study REF was conducted to observe the influence of gender, status (authoritative and peer), and focus employed by the agent i.e., (i) user-directed: persuading by addressing the user directly, and (ii) vicarious: persuading the user indirectly by engaging another agent in the discussion. | 53244506 | Is Two Better than One?: Effects of Multiple Agents on User Persuasion | {
"venue": "IVA '18",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-Radio spectrum resource is of fundamental importance for wireless communication. Recent reports show that most available spectrum has been allocated. While some of the spectrum bands (e.g., unlicensed band, GSM band) have seen increasingly crowded usage, most of the other spectrum resources are underutilized. This drives the emergence of open spectrum and dynamic spectrum access concepts, which allow unlicensed users equipped with cognitive radios to opportunistically access the spectrum not used by primary users. Cognitive radio has many advanced features, such as agilely sensing the existence of primary users and utilizing multiple spectrum bands simultaneously. However, in practice such capabilities are constrained by hardware cost. In this paper, we discuss how to conduct efficient spectrum management in ad hoc cognitive radio networks while taking the hardware constraints (e.g., single radio, partial spectrum sensing and spectrum aggregation limit) into consideration. A hardware-constrained cognitive MAC, HC-MAC, is proposed to conduct efficient spectrum sensing and spectrum access decision. We identify the issue of optimal spectrum sensing decision for a single secondary transmission pair, and formulate it as an optimal stopping problem. A decentralized MAC protocol is then proposed for the ad hoc cognitive radio networks. Simulation results are presented to demonstrate the effectiveness of our proposed protocol. Index Terms-Cognitive MAC, open spectrum, optimal spectrum sensing, spectrum aggregation. | In REF , the tradeoff between the spectrum access opportunity and spectrum sensing overhead in cognitive radio systems is formulated as a finite-horizon optimal stopping problem, which is solved using backward induction. | 14939569 | HC-MAC: A Hardware-Constrained Cognitive MAC for Efficient Spectrum Management | {
"venue": "IEEE Journal on Selected Areas in Communications",
"journal": "IEEE Journal on Selected Areas in Communications",
"mag_field_of_study": [
"Computer Science"
]
} |
Figure 1: Two agents in Gibson Environment for real-world perception. The agent is active, embodied, and subject to constraints of physics and space (a,b). It receives a constant stream of visual observations as if it had an on-board camera (c). It can also receive additional modalities, e.g. depth, semantic labels, or normals (d,e,f). The visual observations are from real-world rather than an artificially designed space. Developing visual perception models for active agents and sensorimotor control are cumbersome to be done in the physical world, as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we are concerned with the problem of developing real-world perception for active agents, propose Gibson Virtual Environment 1 for this purpose, and showcase sample perceptual tasks learned therein. Gibson is based on virtualizing real spaces, rather than using artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism, "Goggles", enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space. | Gibson REF ) is a learning environment in which an agent is embodied and made subject to constraints of space and physics (using Bullet physics) and spawned in a virtualized real space (coming from real-world datasets such as Matterport or Stanford 2D-3D). | 49358881 | Gibson Env: Real-World Perception for Embodied Agents | {
"venue": "CVPR 2018",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
The ubiquity of Online Social Networks (OSNs) is creating new sources for healthcare information, particularly in the context of pharmaceutical drugs. We aimed to examine the impact of a given OSN's characteristics on the content of pharmaceutical drug discussions from that OSN. We compared the effect of four distinguishing characteristics from ten different OSNs on the content of their pharmaceutical drug discussions: (1) General versus Health OSN; (2) OSN moderation; (3) OSN registration requirements; and (4) OSNs with a question and answer format. The effects of these characteristics were measured both quantitatively and qualitatively. Our results show that an OSN's characteristics indeed affect the content of its discussions. Based on their information needs, healthcare providers may use our findings to pick the right OSNs or to advise patients regarding their needs. Our results may also guide the creation of new and more effective domain-specific health OSNs. Further, future researchers of online healthcare content in OSNs may find our results informative while choosing OSNs as data sources. We reported several findings about the impact of OSN characteristics on the content of pharmaceutical drug discussion, and synthesized these findings into actionable items for both healthcare providers and future researchers of healthcare discussions on OSNs. Future research on the impact of OSN characteristics could include user demographics, quality and safety of information, and efficacy of OSN usage. | REF examined the characteristics of ten different online social networking sites to find impacts of these characteristics on the discussions of pharmaceutical drugs among the users and DailyStrength was one of these websites. | 7022593 | Pharmaceutical drugs chatter on Online Social Networks | {
"venue": "Journal of biomedical informatics",
"journal": "Journal of biomedical informatics",
"mag_field_of_study": [
"Computer Science",
"Medicine"
]
} |
Abstract Exploiting the full computational power of current hierarchical multiprocessor machines requires a very careful distribution of threads and data among the underlying non-uniform architecture so as to avoid remote memory access penalties. Directive-based programming languages such as OpenMP, can greatly help to perform such a distribution by providing programmers with an easy way to structure the parallelism of their application and to transmit this information to the runtime system. Our runtime, which is based on a multi-level thread scheduler combined with a NUMA-aware memory manager, converts this information into scheduling hints related to thread-memory affinity issues. These hints enable dynamic load distribution guided by application structure and hardware topology, thus helping to achieve performance portability. Several experiments show that mixed solutions (migrating both threads and data) outperform work-stealing based balancing strategies and next-touchbased data distribution policies. These techniques provide insights about additional optimizations. | Broquedis et al. REF combined NUMA-aware memory manager with their runtime system to enable dynamic load distribution, utilizing the information from the application structure and hardware topology. | 10528126 | ForestGOMP: An Efficient OpenMP Environment for NUMA Architectures | {
"venue": "International Journal of Parallel Programming",
"journal": "International Journal of Parallel Programming",
"mag_field_of_study": [
"Computer Science"
]
} |
Many natural language processing tasks solely rely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies by soft probabilities between every two tokens, but they are not effective and efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", for the mutual benefit of each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention called "reinforced sequence sampling (RSS)", selecting tokens in parallel and trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", solely based on ReSA. It achieves state-of-the-art performance on both Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets. | REF proposed reinforced selfattention network (ReSAN), which integrate both soft and hard attention into one context fusion with reinforced learning. | 27764139 | Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
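To make the soft/hard attention combination in the abstract above concrete, here is a minimal NumPy sketch (not the authors' implementation) of the two-step idea: a hard selector keeps a subset of token positions, and a soft self-attention is then computed only over the selected tokens. The selector here is a simple random keep/drop decision standing in for the paper's reinforced sequence sampling; all names and the toy sequence are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hard_select(token_states, keep_prob=0.5, rng=None):
    """Stand-in for reinforced sequence sampling: decide keep/drop per
    token in parallel (here from a fixed Bernoulli, not a learned policy)."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(len(token_states)) < keep_prob
    keep[0] = True                      # always keep at least one token
    return np.flatnonzero(keep)

def soft_self_attention(selected_states):
    """Plain scaled dot-product self-attention over the selected tokens."""
    d = selected_states.shape[-1]
    scores = selected_states @ selected_states.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ selected_states

# toy sequence: 10 tokens with 16-dimensional states
states = np.random.default_rng(1).normal(size=(10, 16))
idx = hard_select(states)                 # hard attention trims the sequence
fused = soft_self_attention(states[idx])  # soft attention runs on the subset
print(idx, fused.shape)
```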
Abstract-In this work, we propose a Programmable Vector Memory Controller (PVMC), which boosts noncontiguous vector data accesses by integrating descriptors of memory patterns, a specialized local memory, a memory manager in hardware, and multiple DRAM controllers. We implemented and validated the proposed system on an Altera DE4 FPGA board. We compare the performance of our proposal with a vector system without PVMC as well as a scalar only system. When compared with a baseline vector system, the results show that the PVMC system transfers data sets up to 2.2x to 14.9x faster, achieves between 2.16x to 3.18x of speedup for 5 applications and consumes 2.56 to 4.04 times less energy. | Hussain et al. proposed a vector architecture called programmable vector memory controller (PVMC) REF and its implementation on an Altera Stratix IV 230 FPGA device. | 15805857 | PVMC: Programmable Vector Memory Controller | {
"venue": "2014 IEEE 25th International Conference on Application-Specific Systems, Architectures and Processors",
"journal": "2014 IEEE 25th International Conference on Application-Specific Systems, Architectures and Processors",
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-Monitoring network traffic and classifying applications are essential functions for network administrators. In this paper, we consider the use of Traffic Dispersion Graphs (TDGs) to classify network traffic. Given a set of flows, a TDG is a graph with an edge between any two IP addresses that communicate; thus TDGs capture network-wide interactions. Using TDGs, we develop an application classification framework dubbed Graption (Graph-based classification). Our framework provides a systematic way to harness the power of network-wide behavior, flow-level characteristics, and data mining techniques. As a proof of concept, we instantiate our framework to detect P2P applications, and show that it can identify P2P traffic with recall and precision greater than 90% in backbone traces, which are particularly challenging for other methods. | REF developed a graph-based framework for classifying P2P traffic. | 6109492 | Graph-Based P2P Traffic Classification at the Internet Backbone | {
"venue": "IEEE INFOCOM Workshops 2009",
"journal": "IEEE INFOCOM Workshops 2009",
"mag_field_of_study": [
"Computer Science"
]
} |
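As an illustration of the traffic dispersion graph (TDG) idea described above, the sketch below (not the Graption code) builds a directed graph with an edge between any pair of IP addresses that exchanged a flow, and computes two simple graph-wide statistics of the kind such a framework could feed into a classifier. The flow tuples are made up for the example.

```python
import networkx as nx

# (src_ip, dst_ip) pairs observed in flow records -- toy data
flows = [("10.0.0.1", "10.0.0.2"), ("10.0.0.2", "10.0.0.3"),
         ("10.0.0.1", "10.0.0.3"), ("10.0.0.4", "10.0.0.1")]

def build_tdg(flows):
    """Traffic Dispersion Graph: one node per IP address, one directed
    edge per observed communicating pair."""
    g = nx.DiGraph()
    g.add_edges_from(flows)
    return g

tdg = build_tdg(flows)
# fraction of nodes that both send and receive, and average degree
in_out_ratio = sum(1 for n in tdg
                   if tdg.in_degree(n) > 0 and tdg.out_degree(n) > 0) / tdg.number_of_nodes()
avg_degree = sum(dict(tdg.degree()).values()) / tdg.number_of_nodes()
print(f"nodes={tdg.number_of_nodes()} edges={tdg.number_of_edges()} "
      f"in-out-ratio={in_out_ratio:.2f} avg-degree={avg_degree:.2f}")
```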
Background: Since Swanson proposed the Undiscovered Public Knowledge (UPK) model, there have been many approaches to uncover UPK by mining the biomedical literature. These earlier works, however, required substantial manual intervention to reduce the number of possible connections and are mainly applied to disease-effect relation. With the advancement in biomedical science, it has become imperative to extract and combine information from multiple disjoint researches, studies and articles to infer new hypotheses and expand knowledge. Methods: We propose MKEM, a Multi-level Knowledge Emergence Model, to discover implicit relationships using Natural Language Processing techniques such as Link Grammar and Ontologies such as Unified Medical Language System (UMLS) MetaMap. The contribution of MKEM is as follows: First, we propose a flexible knowledge emergence model to extract implicit relationships across different levels such as molecular level for gene and protein and Phenomic level for disease and treatment. Second, we employ MetaMap for tagging biological concepts. Third, we provide an empirical and systematic approach to discover novel relationships. Results: We applied our system on 5000 abstracts downloaded from PubMed database. We performed the performance evaluation as a gold standard is not yet available. Our system performed with a good precision and recall and we generated 24 hypotheses. Conclusions: Our experiments show that MKEM is a powerful tool to discover hidden relationships residing in extracted entities that were represented by our Substance-Effect-Process-Disease-Body Part (SEPDB) model. | Ijaz, Song, and Lee REF proposed the MKE (Multi-Level Knowledge Emergence) model. | 9720218 | MKEM: a Multi-level Knowledge Emergence Model for mining undiscovered public knowledge | {
"venue": "BMC Bioinformatics",
"journal": "BMC Bioinformatics",
"mag_field_of_study": [
"Medicine",
"Computer Science"
]
} |
Abstract-Human face detection and tracking is an important research area having wide application in human machine interface, content-based image retrieval, video coding, gesture recognition, crowd surveillance and face recognition. Human face detection is extremely important and simultaneously a difficult problem in computer vision, mainly due to the dynamics and high degree of variability of the head. A large number of effective algorithms have been proposed for face detection in grey scale images ranging from simple edge-based methods to composite high-level approaches using modern and advanced pattern recognition approaches. The aim of the paper is to compare Gradient vector flow and silhouettes, two of the most widely used algorithms in the area of face detection. Both the algorithms were applied on a common database and the results were compared. This is the first paper which evaluates the runtime analysis of the Gradient vector field methodology and compares it with the silhouette segmentation technique. The paper also explains the factors affecting the performance and error incurred by both the algorithms. Finally, results are explained which prove the superiority of the silhouette segmentation method over the Gradient vector flow method. Index Terms-Face detection, gradient vector flow (GVF), active contour flow, silhouette. | Shivesh Bajpai et al. REF discussed the gradient vector flow and silhouettes algorithms. | 56205479 | An Experimental Comparison of Face Detection Algorithms | {
"venue": null,
"journal": "International Journal of Computer Theory and Engineering",
"mag_field_of_study": [
"Computer Science"
]
} |
Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015 Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers. | Spatially Regularized Discriminative Correlation Filter (SRDCF) REF utilizes spatial regularization by introducing a spatial regularization component, which can penalize the correlation filter coefficients during learning and lead to not only alleviating the unwanted boundary effects but also allowing the CF to be learned on larger regions. | 206770621 | Learning Spatially Regularized Correlation Filters for Visual Tracking | {
"venue": "2015 IEEE International Conference on Computer Vision (ICCV)",
"journal": "2015 IEEE International Conference on Computer Vision (ICCV)",
"mag_field_of_study": [
"Computer Science"
]
} |
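For reference, the spatial regularization idea summarized above can be written as the following learning objective (a paraphrase in standard correlation-filter notation, not copied verbatim from the paper): a spatial weight function $w$ penalizes filter coefficients far from the target center,

$$\varepsilon(f)=\sum_{k=1}^{t}\alpha_{k}\,\Bigl\lVert\,\sum_{l=1}^{d} x_{k}^{l}\star f^{l}-y_{k}\Bigr\rVert^{2}+\sum_{l=1}^{d}\bigl\lVert\, w\cdot f^{l}\bigr\rVert^{2},$$

where $x_k$ are training samples with $d$ feature channels, $y_k$ the desired correlation outputs, $\alpha_k$ per-sample weights, and $\star$ denotes circular correlation. Large values of $w$ near the sample boundary suppress the corresponding filter coefficients, which is how the formulation mitigates the periodic boundary effects while allowing larger training regions.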
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN-based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains. | A differentiable approximation of the value-iteration algorithm was introduced that is capable of predicting outcomes that involve planning-based reasoning REF . | 11374605 | Value Iteration Networks | {
"venue": "Advances in Neural Information Processing Systems 29 pages 2154--2162, 2016",
"journal": null,
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
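The core of the value iteration network summarized above is the observation that, on a grid, one Bellman backup looks like a convolution followed by a max over actions. Below is a minimal NumPy/SciPy sketch of that planning module; it is illustrative only, with hand-fixed transition kernels, whereas a real VIN learns the reward and transition convolutions end-to-end.

```python
import numpy as np
from scipy.signal import convolve2d

def vi_module(reward, kernels, gamma=0.9, iters=40):
    """Approximate value iteration on a 2-D grid.
    reward  : (H, W) reward map
    kernels : (A, 3, 3) per-action transition kernels (fixed here; learned
              convolution weights in a trained VIN)
    """
    value = np.zeros_like(reward)
    for _ in range(iters):
        q = np.stack([reward + gamma * convolve2d(value, k, mode="same")
                      for k in kernels])          # Q-values, one map per action
        value = q.max(axis=0)                     # max over actions
    return value

H = W = 8
reward = -0.1 * np.ones((H, W)); reward[7, 7] = 1.0   # goal in the corner
# four "move" kernels: shift value mass up/down/left/right
kernels = np.zeros((4, 3, 3))
kernels[0, 0, 1] = kernels[1, 2, 1] = kernels[2, 1, 0] = kernels[3, 1, 2] = 1.0
print(vi_module(reward, kernels).round(2))
```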
The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-todoor study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains βΌ1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time. computer vision | deep learning | social analysis | demography F or thousands of years, rulers and policymakers have surveyed national populations to collect demographic statistics. In the United States, the most detailed such study is the American Community Survey (ACS), which is performed by the US Census Bureau at a cost of $250 million per year (1). Each year, ACS reports demographic results for all cities and counties with a population of 65,000 or more (2). However, due to the labor-intensive data-gathering process, smaller regions are interrogated less frequently, and data for geographical areas with less than 65,000 inhabitants are typically presented with a lag of βΌ 2.5 y. Although the ACS represents a vast improvement over the earlier, decennial census (3), this lag can nonetheless impede effective policymaking. Thus, the development of complementary approaches would be desirable. In recent years, computational methods have emerged as a promising tool for tackling difficult problems in social science. (6) used mobile phone metadata to predict poverty rates in Rwanda. These results suggest that socioeconomic studies, too, might be facilitated by computational methods, with the ultimate potential of analyzing demographic trends in great detail, in real time, and at a fraction of the cost. Recently, Naik et al. (7) used publicly available imagery to quantify people's subjective perceptions of a neighborhood's physical appearance. They then showed that changes in these perceptions correlate with changes in socioeconomic variables (8). Our work explores a related theme: whether socioeconomic statistics can be inferred from objective characteristics of images from a neighborhood. 
Here, we show that it is possible to determine socioeconomic statistics and political preferences in the US population by combining publicly available data with machine-learning methods. Our procedure, designed to build upon and complement the ACS, uses labor-intensive survey data for a handful of cities to train a model that can create nationwide demographic estimates. This approach allows for estimation of demographic variables with high spatial resolution and reduced lag time. Specifically, we analyze 50 million images taken by Google Street View cars as they drove through 200 cities, neighborhoodby-neighborhood and street-by-street. In Google Street View images, only the exteriors of houses, landscaping, and vehicles on the street can be observed. Of these objects, vehicles are among the most personalized expressions of American culture: Over 90% of American households own a motor vehicle (9), and their choice of automobile is influenced by disparate demographic factors including household needs, personal preferences, and economic wherewithal (10). (Note that, in principle, other factors such as spacing between houses, number of stories, and extent of shrubbery could also be integrated into such models.) Such street scenes are a natural data type to explore: They already cover We show that socioeconomic attributes such as income, race, education, and voting patterns can be inferred from cars detected in Google Street View images using deep learning. Our model works by discovering associations between cars and people. For example, if the number of sedans in a city is higher than the number of pickup trucks, that city is likely to vote for a Democrat in the next presidential election (88% chance); if not, then the city is likely to vote for a Republican (82% chance). | Gebru et al. REF extracted car types, years, and make from 50 million Google Street View images to correlate with socio-economic factors such as income and geographic demographic types across different cities in the United States. | 37969305 | Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States | {
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"journal": "Proceedings of the National Academy of Sciences of the United States of America",
"mag_field_of_study": [
"Computer Science",
"Medicine",
"Geography"
]
} |
Abstract-We propose a system that is capable of detailed analysis of eye region images in terms of the position of the iris, degree of eyelid opening, and the shape, complexity, and texture of the eyelids. The system uses a generative eye region model that parameterizes the fine structure and motion of an eye. The structure parameters represent structural individuality of the eye, including the size and color of the iris, the width, boldness, and complexity of the eyelids, the width of the bulge below the eye, and the width of the illumination reflection on the bulge. The motion parameters represent movement of the eye, including the up-down position of the upper and lower eyelids and the 2D position of the iris. The system first registers the eye model to the input in a particular frame and individualizes it by adjusting the structure parameters. The system then tracks motion of the eye by estimating the motion parameters across the entire image sequence. Combined with image stabilization to compensate for appearance changes due to head motion, the system achieves accurate registration and motion recovery of eyes. | In REF , a system was proposed that is capable of a detailed analysis of the eye region images in terms of the position of the iris, degree of the eyelid opening, and the shape, the complexity, and the texture of the eyelids. | 9732534 | Meticulously detailed eye region model and its application to analysis of facial images | {
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"mag_field_of_study": [
"Computer Science",
"Medicine"
]
} |
Abstract. Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. The reason for their immediate success is the fact that no specific skills are needed for participating. At the moment, however, the information retrieval support is limited. We present a formal model and a new search algorithm for folksonomies, called FolkRank, that exploits the structure of the folksonomy. The proposed algorithm is also applied to find communities within the folksonomy and is used to structure search results. All findings are demonstrated on a large scale dataset. | FolkRank REF leverages the structure of folksonomy and finds the communities to restructure search results. | 2204253 | Information retrieval in folksonomies: Search and ranking | {
"venue": "The Semantic Web: Research and Applications, volume 4011 of LNAI",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
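FolkRank's ranking, as described above, is a weight-spreading computation on the undirected user–tag–resource graph, closely related to personalized PageRank: weight propagates along the folksonomy's edges while a preference vector keeps pulling weight towards the query elements, and the topic-specific ranking is the difference to an unbiased run. A small sketch of such an iteration follows; it is not the authors' code, and the adjacency matrix, damping value, and preference vector are toy inputs.

```python
import numpy as np

def weight_spreading(adj, pref, d=0.7, iters=100):
    """Iterate w <- d * A_norm w + (1 - d) * pref, the kind of
    preference-biased spreading FolkRank builds on."""
    col_sums = adj.sum(axis=0)                       # column-normalize the graph
    a_norm = adj / np.where(col_sums == 0, 1, col_sums)
    w = np.full(adj.shape[0], 1.0 / adj.shape[0])
    for _ in range(iters):
        w = d * a_norm @ w + (1 - d) * pref
    return w

# toy folksonomy graph over {user0, user1, tag0, tag1, resource0}
adj = np.array([[0, 0, 1, 1, 1],
                [0, 0, 1, 0, 1],
                [1, 1, 0, 0, 1],
                [1, 0, 0, 0, 1],
                [1, 1, 1, 1, 0]], dtype=float)
pref = np.zeros(5); pref[2] = 1.0                    # prefer tag0
baseline = weight_spreading(adj, np.full(5, 0.2))    # unbiased run
ranking = weight_spreading(adj, pref) - baseline     # FolkRank-style differential
print(ranking.round(3))
```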
Abstract-The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets [1][2][3]. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum sized packet pairs is not optimal. We then study the dispersion of long packet trains. Increasing the length of the packet train reduces the measurement variance, but the estimates converge to a value, referred to as Asymptotic Dispersion Rate (ADR), that is lower than the capacity. We derive the effect of the cross traffic in the dispersion of long packet trains, showing that the ADR is not the available bandwidth in the path, as was assumed in previous work. Putting all the pieces together, we present a capacity estimation methodology that has been implemented in a tool called pathrate. | The work in REF mentions a technique for estimating the available bandwidth based on the asymptotic dispersion rate (ADR) method. | 352687 | What do packet dispersion techniques measure? | {
"venue": "Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)",
"journal": "Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)",
"mag_field_of_study": [
"Computer Science"
]
} |
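The basic packet-pair relation behind the measurements discussed above is simple enough to state directly: two back-to-back packets of size L leaving a bottleneck link of capacity C are dispersed by at least L/C, so an idealized, cross-traffic-free estimate is C = L / Δ. The toy calculation below is purely illustrative; as the paper stresses, with cross traffic the dispersion distribution becomes multimodal, which is why pathrate relies on many probes and mode analysis rather than a single pair.

```python
def capacity_from_dispersion(packet_size_bytes, dispersion_seconds):
    """Idealized packet-pair estimate: capacity = L / dispersion (bits/s)."""
    return 8 * packet_size_bytes / dispersion_seconds

# 1500-byte packets arriving 120 microseconds apart at the receiver
print(capacity_from_dispersion(1500, 120e-6) / 1e6, "Mbps")  # -> 100.0 Mbps
```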
We show for the first time that incorporating the predictions of a word sense disambiguation system within a typical phrase-based statistical machine translation (SMT) model consistently improves translation quality across all three different IWSLT ChineseEnglish test sets, as well as producing statistically significant improvements on the larger NIST Chinese-English MT taskand moreover never hurts performance on any test set, according not only to BLEU but to all eight most commonly used automatic evaluation metrics. Recent work has challenged the assumption that word sense disambiguation (WSD) systems are useful for SMT. Yet SMT translation quality still obviously suffers from inaccurate lexical choice. In this paper, we address this problem by investigating a new strategy for integrating WSD into an SMT system, that performs fully phrasal multi-word disambiguation. Instead of directly incorporating a Senseval-style WSD system, we redefine the WSD task to match the exact same phrasal translation disambiguation task faced by phrase-based SMT systems. Our results provide the first known empirical evidence that lexical semantics are indeed useful for SMT, despite claims to the contrary. | But two years later the same authors REF come to the conclusion that the incorporation of WSD within a typical SMT system "consistently improves translation quality" for Chinese-English. | 135295 | Improving Statistical Machine Translation Using Word Sense Disambiguation | {
"venue": "2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Nowadays there is an overall adoption of parallel programming, caused by the wide use of multicore processors, clusters and graphics processing units for various problems solving. Usually, programs are writ ten in imperative programming languages. The risk of making an error in a parallel program is higher in comparison to the sequential one, as parallel programs have their specific errors. Since the majority of system fails are the result of software malfunction, it is extremely important to develop reliable software. Formal verification could be used to increase software reliability. Formal verifi cation is a proof of programs correctness by finding the correspondence between the program and its spec ification, which describes the aim of the development [1] . The main advantage of formal verification is the capability to prove the absence of errors in the program, while testing only allows to detect errors. In con trast to other methods formal verification requires analytical treatment of source code properties, that is why the aim of formal verification could be achieved only by rigorous mathematical proof of program to specification correspondence. This requires formalization of objects used in verification. One method of formal verification was introduced by Hoare [2] . It utilises an axiomatic approach based on Hoare logic. Hoare logic is an extension of a formal system with certain formulas, containing the source code in the verified programming language, that are called Hoare triples. A Hoare triple is an annotated program, namely the source code and two formulas of the theory , that describe restrictions on input variables and conditions of the program execution result correctness. These formulas are named precondition and postcondition, respectively. A Hoare triple is usually of the form {Ο}Prog{Ο}, where Prog is a program, Ο is a precondition and Ο is a postcondition for Prog. The extended formal system is distinguished from by additional axioms and inference rules, that allow to deduce assertions of program properties, particularly of the program correctness. Then the program is correct if its Hoare triple is iden tically true. So the main idea of this approach is to derive a formula of formal system from the Hoare triple applying rules of inference and using axioms as premises, and then prove the truth of this formula within the formal system . There are certain achievements in practical application of such an approach for imperative program ming languages [3] . However formal verification complexity for parallel imperative programs increases rapidly for systems with both shared and distributed memory. In general, the main problem is the system resource conflicts. The examples are improper shared memory use for shared memory systems, and dead locks for distributed memory systems. An alternative to imperative programming is the functional data flow paradigm for parallel program ming that represents a program as a directed data flow graph. One implementation of the functional data flow paradigm is the Pifagor language [4] . The basis of this model is computation control based on the data 1 The article is published in the original. Abstract-The article is devoted to the methods of proving parallel programs correctness, that are based on the axiomatic approach. Formal system for functional data flow parallel programming lan guage Pifagor is described. On the basis of this system programs correctness could be proved. 
Formal Verification of Programs in the Functional | In REF , the verification of parallel program correctness is based on the axiomatic approach. | 18808313 | Formal verification of programs in the functional data-flow parallel language | {
"venue": "Automatic Control and Computer Sciences",
"journal": "Automatic Control and Computer Sciences",
"mag_field_of_study": [
"Computer Science"
]
} |
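As a reminder of the notation used in the abstract above, a Hoare triple annotates a program with a precondition and a postcondition; a standard textbook instance (not taken from the Pifagor paper) is

$$\{\,x \ge 0\,\}\;\; y := x + 1\;\;\{\,y > 0\,\},$$

which asserts that if $x \ge 0$ holds before the assignment, then $y > 0$ holds afterwards. Verification then consists of deriving such triples in the extended formal system from axioms and inference rules, for example the assignment axiom $\{\varphi[e/x]\}\; x := e\; \{\varphi\}$.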
Abstract A new clusters labelling strategy, which combines the computation of the Davies-Bouldin index of the clustering and the centroid diameters of the clusters, is proposed for application in anomaly-based intrusion detection systems (IDS). The aim of such a strategy is to detect compact clusters containing very similar vectors and these are highly likely to be attack vectors. Experimental results comparing the effectiveness of a multiple classifier IDS with such a labelling strategy and that of the classical cardinality labelling based IDS show that the proposed strategy behaves much better in a heavily | Petrovic et al. REF introduced a new cluster labelling technique for attack identification based on a combination of the Davies-Bouldin index of the clustering and centroid diameter evaluation. | 8492172 | Labelling Clusters in an Intrusion Detection System Using a Combination of Clustering Evaluation Techniques | {
"venue": "Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06)",
"journal": "Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06)",
"mag_field_of_study": [
"Computer Science"
]
} |
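A minimal sketch of the labelling idea summarized above (illustrative, not the authors' system): cluster the traffic vectors, compute the Davies-Bouldin index of the clustering and the centroid diameter of each cluster, and flag the most compact cluster as a candidate attack cluster instead of simply labelling the largest cluster as normal. The synthetic data and thresholds below are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
# toy traffic features: a broad "normal" cloud and a tight "attack" clump
X = np.vstack([rng.normal(0, 1.0, size=(300, 4)),
               rng.normal(4, 0.05, size=(40, 4))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin index of the clustering:",
      round(davies_bouldin_score(X, labels), 3))

def centroid_diameter(points):
    """Twice the average distance of cluster members to the cluster centroid."""
    c = points.mean(axis=0)
    return 2 * np.linalg.norm(points - c, axis=1).mean()

diameters = {k: centroid_diameter(X[labels == k]) for k in set(labels)}
attack_cluster = min(diameters, key=diameters.get)   # most compact cluster
print("centroid diameters:", {k: round(v, 3) for k, v in diameters.items()},
      "-> flag cluster", attack_cluster, "as likely attack traffic")
```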
Strong intelligent machines powered by deep neural networks are increasingly deployed as black boxes to make decisions in risksensitive domains, such as finance and medical. To reduce potential risk and build trust with users, it is critical to interpret how such machines make their decisions. Existing works interpret a pretrained neural network by analyzing hidden neurons, mimicking pre-trained models or approximating local predictions. However, these methods do not provide a guarantee on the exactness and consistency of their interpretations. In this paper, we propose an elegant closed form solution named OpenBox to compute exact and consistent interpretations for the family of Piecewise Linear Neural Networks (PLNN). The major idea is to first transform a PLNN into a mathematically equivalent set of linear classifiers, then interpret each linear classifier by the features that dominate its prediction. We further apply OpenBox to demonstrate the effectiveness of nonnegative and sparse constraints on improving the interpretability of PLNNs. The extensive experiments on both synthetic and real world data sets clearly demonstrate the exactness and consistency of our interpretation. | Chu et al. REF transformed a piecewise linear neural network into a set of locally linear classifiers, and interpreted the prediction on an input instance by analyzing the gradients of all neurons with respect to the instance. | 3338933 | Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution | {
"venue": null,
"journal": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"mag_field_of_study": [
"Computer Science"
]
} |
In this paper we investigate the use of surface text patterns for a Maximum Entropy based Question Answering (QA) system. These text patterns are collected automatically in an unsupervised fashion using a collection of trivia question and answer pairs as seeds. These patterns are used to generate features for a statistical question answering system. We report our results on the TREC-10 question set. | E.g., REF collect surface patterns automatically in an unsupervised fashion using a collection of trivia question and answer pairs as seeds. | 1963108 | Automatic Derivation Of Surface Text Patterns For A Maximum Entropy Based Question Answering System | {
"venue": "Human Language Technology Conference And Meeting Of The North American Association For Computational Linguistics - Short Papers",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4] , one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change-computing proposals with a deep convolutional neural network-leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2] . By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task for generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. 
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates | Ren et al. REF introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network. | 10328909 | Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks | {
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"mag_field_of_study": [
"Computer Science",
"Medicine"
]
} |
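The "anchor" idea described above is easy to make concrete: at every position of the convolutional feature map, a fixed set of reference boxes at several scales and aspect ratios is laid out, and the RPN regresses offsets relative to them. The NumPy sketch below generates such anchors with commonly cited default settings (3 scales x 3 ratios, stride 16); it is an illustration, not the released implementation.

```python
import numpy as np

def make_anchors(base=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Return (len(scales)*len(ratios), 4) anchors as (x1, y1, x2, y2)
    centered at the origin of one feature-map cell of stride `base`."""
    anchors = []
    for s in scales:
        for r in ratios:
            area = (base * s) ** 2
            w = np.sqrt(area / r)        # width and height with aspect ratio r
            h = w * r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

def shift_anchors(anchors, feat_h, feat_w, stride=16):
    """Replicate the base anchors over every feature-map position."""
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx, cy, cx, cy], axis=-1).reshape(-1, 1, 4)
    return (anchors[None, :, :] + shifts).reshape(-1, 4)

base_anchors = make_anchors()
all_anchors = shift_anchors(base_anchors, feat_h=38, feat_w=50)
print(base_anchors.shape, all_anchors.shape)   # (9, 4) (17100, 4)
```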
In this paper we evaluate the security of a two-factor Graphical Password scheme proposed in [1] . As in the original paper, we model the attack of a passive adversary as a boolean formula whose truth assignment corresponds to the user secret. We show that there exist a small number of secrets that a passive adversary cannot extract, independently from the amount information she manages to eavesdrop. We then experimentally evaluate the security of the scheme. Our tests show that the number of sessions the adversary needs to gather in order to be able to extract the users secret is relatively small. However, the amount of time needed to actually extract the user secret from the collected information grows exponentially in the system parameters, making the secret extraction unfeasible. Finally we observe that the graphical password scheme can be easily restated in as a device-device authentication mechanism. | Recently, with a random permutation function, Catuogno and Galdi proposed and evaluated the security of a two-factor graphical password scheme REF . | 6337399 | On the security of a two-factor authentication scheme | {
"venue": "WISTP",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
The Platform for Privacy Preferences (P3P), developed by the World Wide Web Consortium (W3C), provides a standard computer-readable format for privacy policies and a protocol that enables web browsers to read and process privacy policies automatically. P3P enables machine-readable privacy policies that can be retrieved automatically by web browsers and other user agent tools that can display symbols, prompt users, or take other appropriate actions. We developed the AT&T Privacy Bird as a P3P user agent that can compare P3P policies against a user's privacy preferences. Since P3P was adopted as a W3C recommendation in April 2002, little work has been done to study how it is being used and, especially, its impact on users. Many questions have been raised about whether and how Internet users will make use of P3P, and how to build P3P user agents that will prove most useful to end users. In this paper we first provide a brief introduction to P3P and the AT&T Privacy Bird. Then we discuss a survey of AT&T Privacy Bird users that we conducted in August 2002. We found that a large proportion of AT&T Privacy Bird users began reading privacy policies more often and being more proactive about protecting their privacy as a result of using this software. Unfortunately, the usefulness of P3P user agents is severely limited by the number of web sites that have implemented P3P. Our survey results also suggest that if it becomes easier to compare privacy policy across e-commerce web sites, a significant group of consumers would likely use this information in their purchase decisions. | A survey of Privacy Bird users showed strong interest in being able to do comparison shopping on the basis of privacy policies REF . | 18459548 | Use of a P3P user agent by early adopters | {
"venue": "WPES '02",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-High performance tracking control can only be achieved if a good model of the dynamics is available. However, such a model is often difficult to obtain from first order physics only. In this paper, we develop a data-driven control law that ensures closed loop stability of Lagrangian systems. For this purpose, we use Gaussian Process regression for the feedforward compensation of the unknown dynamics of the system. The gains of the feedback part are adapted based on the uncertainty of the learned model. Thus, the feedback gains are kept low as long as the learned model describes the true system sufficiently precisely. We show how to select a suitable gain adaption law that incorporates the uncertainty of the model to guarantee a globally bounded tracking error. A simulation with a robot manipulator demonstrates the efficacy of the proposed control law. | The work in REF considers the control of Lagrangian systems and shows boundedness of the tracking error. | 40809224 | Stable Gaussian Process based Tracking Control of Lagrangian Systems | {
"venue": "2017 IEEE 56th Annual Conference on Decision and Control (CDC)",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the 4-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local 4-profiles centered at each vertex of the graph. The local 4-profile embeds every vertex in an 11-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local 4-profiles compared to those that are only part of one dense cluster. Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local 4-profiles in parallel. We rely on two novel theoretical contributions: we show that local 4-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global 4-profile. We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to 640 cores. We show that our algorithm can compute global and local 4-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. | Elenberg et al. REF present a distributed algorithm for counting subgraphs of size 4. | 3626580 | Distributed Estimation of Graph 4-Profiles | {
"venue": "ArXiv",
"journal": "ArXiv",
"mag_field_of_study": [
"Computer Science"
]
} |
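The global 4-profile described above is the vector of counts of the 11 non-isomorphic four-node induced subgraphs. For graphs on four vertices, the sorted degree sequence already identifies the isomorphism class, which allows a very small brute-force counter; the paper's distributed, compressed two-hop algorithm is of course what makes this tractable at scale. A toy sketch:

```python
from itertools import combinations
import networkx as nx

def global_4_profile(g):
    """Count induced 4-node subgraphs, keyed by their sorted degree sequence
    (which uniquely identifies the 11 classes on 4 vertices).
    O(n^4) brute force -- only meant for tiny graphs."""
    profile = {}
    for quad in combinations(g.nodes(), 4):
        sub = g.subgraph(quad)
        key = tuple(sorted(d for _, d in sub.degree()))
        profile[key] = profile.get(key, 0) + 1
    return profile

g = nx.karate_club_graph()
for degseq, count in sorted(global_4_profile(g).items()):
    print(degseq, count)
```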
Abstract Users of a Web site usually perform their interest-oriented actions by clicking or visiting Web pages, which are traced in access log files. Clustering Web user access patterns may capture common user interests to a Web site, and in turn, build user profiles for advanced Web applications, such as Web caching and prefetching. The conventional Web usage mining techniques for clustering Web user sessions can discover usage patterns directly, but cannot identify the latent factors or hidden relationships among users' navigational behaviour. In this paper, we propose an approach based on a vector space model, called Random Indexing, to discover such intrinsic characteristics of Web users' activities. The underlying factors are then utilised for clustering individual user navigational patterns and creating common user profiles. The clustering results will be used to predict and prefetch Web requests for grouped users. We demonstrate the usability and superiority of the proposed Web user clustering approach through experiments on a real Web log file. The clustering and prefetching tasks are evaluated by comparison with previous studies demonstrating better clustering performance and higher prefetching accuracy. | Reference REF proposed an approach based on a vector space model, called Random Indexing, to discover the latent factors or hidden relationship among users' navigational behaviour, and the clustering results are used to predict and prefetch Web requests for grouped users. | 9184559 | Web user clustering and Web prefetching using Random Indexing with weight functions | {
"venue": "Knowledge and Information Systems",
"journal": "Knowledge and Information Systems",
"mag_field_of_study": [
"Computer Science"
]
} |
This survey provides a structured and comprehensive overview of research on security and privacy in computer and communication networks that use game-theoretic approaches. We present a selected set of works to highlight the application of game theory in addressing different forms of security and privacy problems in computer networks and mobile applications. We organize the presented works in six main categories: security of the physical and MAC layers, security of self-organizing networks, intrusion detection systems, anonymity and privacy, economics of network security, and cryptography. In each category, we identify security problems, players, and game models. We summarize the main results of selected works, such as equilibrium analysis and security mechanism designs. In addition, we provide a discussion on the advantages, drawbacks, and future direction of using game theory in this field. In this survey, our goal is to instill in the reader an enhanced understanding of different research approaches in applying gametheoretic methods to network security. This survey can also help researchers from various fields develop game-theoretic solutions to current and emerging security problems in computer networking. | A number of game-theoretic approaches are present in the literature (see REF ). | 207203993 | Game theory meets network security and privacy | {
"venue": "CSUR",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Abstract-In many networks, it is less costly to transmit a packet to any node in a set of neighbors than to one specific neighbor. This observation was previously exploited by opportunistic routing protocols by using single-path routing metrics to assign to each node a group of candidate relays for a particular destination. This paper addresses the least-cost anypath routing (LCAR) problem: how to assign a set of candidate relays at each node for a given destination such that the expected cost of forwarding a packet to the destination is minimized. The key is the following tradeoff: On one hand, increasing the number of candidate relays decreases the forwarding cost, but on the other, it increases the likelihood of "veering" away from the shortest-path route. Prior proposals based on single-path routing metrics or geographic coordinates do not explicitly consider this tradeoff and, as a result, do not always make optimal choices. The LCAR algorithm and its framework are general and can be applied to a variety of networks and cost models. We show how LCAR can incorporate different aspects of underlying coordination protocols, for example a link-layer protocol that randomly selects which receiving node will forward a packet, or the possibility that multiple nodes mistakenly forward a packet. In either case, the LCAR algorithm finds the optimal choice of candidate relays that takes into account these properties of the link layer. Finally, we apply LCAR to low-power, low-rate wireless communication and introduce a new wireless link-layer technique to decrease energy transmission costs in conjunction with anypath routing. Simulations show significant reductions in transmission cost to opportunistic routing using single-path metrics. Furthermore, LCAR routes are more robust and stable than those based on single-path distances due to the integrative nature of the LCAR's route cost metric. Index Terms-Cross-layer design, routing protocols, wireless mesh networks. NOTATION AND ACRONYMS Neighbors of node . Packet reception probability from to . | Dubois-Ferriere et al. proposed a protocol called the least-cost anypath routing (LCAR) REF , which selects anypath but not the shortest path, in order to reduce retransmissions. | 12657194 | Valuable Detours: Least-Cost Anypath Routing | {
"venue": "IEEE/ACM Transactions on Networking",
"journal": "IEEE/ACM Transactions on Networking",
"mag_field_of_study": [
"Computer Science"
]
} |
Advanced meter infrastructures (AMIs) are systems that measure, collect, and analyze utilities distribution and consumption, and communicate with metering devices either on a schedule or on request. AMIs are becoming a vital part of utilities distribution network and allow the development of Smart Cities. In this article we propose an integrated Internet of Things architecture for smart meter networks to be deployed in smart cities. We discuss the communication protocol, the data format, the data gathering procedure, and the decision system based on big data treatment. The architecture includes electricity, water, and gas smart meters. Real measurements show the benefits of the proposed IoT architecture for both the customers and the utilities. The integration of intelligent measuring devices in a city using the Internet of Things (IoT) allows the collection of all the data necessary to become a smart city. They are a fundamental part of keeping the city connected and informed, and ensure that each subsystem performs its function. The integration of information technology helps to control the subsystems that form the smart city. The installation of cutting-edge technologies regarding measurement, communications and systems, network automation, and distributed generation, among others, facilitates the development of the city. The goal is to achieve better management of the electric energy, water, and gas providing networks, and an efficient balance between demand and consumption. A key technological element in this context is the smart meter, which can be a thing inside the IoT. A smart metering system allows the water, electric, and gas utilities continuous consumption reading and recording in time intervals, or at least daily reporting, monitoring, and billing. Smart meters enable two-way real-time communication between the meter and the utility central system. This allows the utility to gather interval data, time-based demand data, outage management, service interruption, service restoration, quality of service monitoring, distribution network analysis, distribution planning, peak demand, demand reduction, customer billing, and work management. In recent years, advances in the information and communication tech- | A smart metering system can take electricity, water, and gas measurements REF . | 33950880 | An Integrated IoT Architecture for Smart Metering | {
"venue": "IEEE Communications Magazine",
"journal": "IEEE Communications Magazine",
"mag_field_of_study": [
"Computer Science"
]
} |
Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balance objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. Some of the pitfalls of naive penalty methods are discussed in these terms. The new ranking method is tested using a (Β΅, Ξ») evolution strategy on 13 benchmark problems. Our results show that suitable ranking alone (i.e., selection), without the introduction of complicated and specialized variation operators, is capable of improving the search performance significantly. | In 2000, Runarsson and Yao REF introduced a stochastic ranking approach as a new constraint-handling technique to balance objective and penalty functions stochastically, and reported that the stochastic ranking approach is capable of improving the search performance significantly. | 13579058 | Stochastic Ranking for Constrained Evolutionary Optimization | {
"venue": "IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION",
"journal": null,
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
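The stochastic ranking procedure referenced above can be stated very compactly: a bubble-sort-like pass in which adjacent individuals are compared by objective value when both are feasible or with probability P_f, and by constraint violation otherwise. The sketch below follows that published procedure in spirit; the parameter names, the stopping rule, and the toy constrained problem are illustrative choices.

```python
import random

def stochastic_rank(pop, f, phi, p_f=0.45, rng=random.Random(0)):
    """pop : list of candidate solutions
    f     : objective function (minimized)
    phi   : total constraint violation; phi(x) == 0 means feasible
    Repeated stochastic bubble-sort sweeps; stop when a sweep makes no swap."""
    idx = list(range(len(pop)))
    for _ in range(len(pop)):
        swapped = False
        for j in range(len(pop) - 1):
            a, b = pop[idx[j]], pop[idx[j + 1]]
            both_feasible = phi(a) == 0 and phi(b) == 0
            use_objective = both_feasible or rng.random() < p_f
            worse_first = (f(a) > f(b)) if use_objective else (phi(a) > phi(b))
            if worse_first:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                swapped = True
        if not swapped:
            break
    return [pop[i] for i in idx]

# toy constrained problem: minimize x^2 subject to x >= 1
pop = [-2.0, 0.5, 1.2, 3.0, 1.01]
ranked = stochastic_rank(pop, f=lambda x: x * x, phi=lambda x: max(0.0, 1.0 - x))
print(ranked)
```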
Abstract. Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and require intuitive 3D interaction techniques therefore. Indeed, virtual reality (VR) systems provide sophisticated 3D user interface, but then lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation user interface for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations for multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context. | In their more recent work REF , they present an interscopic multitouch surfaces (iMUTS) application to support intuitive interaction with either 2D content and 3D content. | 18447355 | Bimanual Interaction with Interscopic Multi-Touch Surfaces | {
"venue": "INTERACT",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
In this paper, we consider the problem of locating the information source with sparse observations. We assume that a piece of information spreads in a network following a heterogeneous susceptible-infected-recovered (SIR) model, where a node is said to be infected when it receives the information and recovered when it removes or hides the information. We further assume that a small subset of infected nodes are reported, from which we need to find the source of the information. We adopt the sample path based estimator developed in [1] , and prove that on infinite trees, the sample path based estimator is a Jordan infection center with respect to the set of observed infected nodes. In other words, the sample path based estimator minimizes the maximum distance to observed infected nodes. We further prove that the distance between the estimator and the actual source is upper bounded by a constant independent of the number of infected nodes with a high probability on infinite trees. Our simulations on tree networks and real world networks show that the sample path based estimator is closer to the actual source than several other algorithms. | In REF , it is shown that the Jordan center is still within a bounded hop distance from the true source with high probability, independent of the number of infected nodes. | 7886671 | A robust information source estimator with sparse observations | {
"venue": "IEEE INFOCOM 2014 - IEEE Conference on Computer Communications",
"journal": "IEEE INFOCOM 2014 - IEEE Conference on Computer Communications",
"mag_field_of_study": [
"Computer Science"
]
} |
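The sample-path estimator discussed above coincides, on trees, with a Jordan infection center: a node minimizing the maximum distance to the observed infected nodes. That characterization gives a direct, if naive, way to compute the estimator on small graphs; the sketch below is illustrative (the observed node set is made up) and is not the paper's algorithm for large networks.

```python
import networkx as nx

def jordan_infection_center(g, observed_infected):
    """Return the node minimizing the maximum shortest-path distance
    to the set of observed infected nodes, plus that distance."""
    best_node, best_ecc = None, float("inf")
    for v in g.nodes():
        lengths = nx.single_source_shortest_path_length(g, v)
        ecc = max(lengths[u] for u in observed_infected)
        if ecc < best_ecc:
            best_node, best_ecc = v, ecc
    return best_node, best_ecc

g = nx.balanced_tree(r=2, h=4)            # a small binary tree (31 nodes)
observed = [7, 9, 22, 28]                 # sparsely reported infected nodes
print(jordan_infection_center(g, observed))
```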
We study the question of setting and testing reserve prices in single item auctions when the bidders are not identical. At a high level, there are two generalizations of the standard second price auction: in the lazy version we first determine the winner, and then apply reserve prices; in the eager version we first discard the bidders not meeting their reserves, and then determine the winner among the rest. We show that the two versions have dramatically different properties: lazy reserves are easy to optimize, and A/B test in production, whereas eager reserves always lead to higher welfare, but their optimization is NP-complete, and naive A/B testing will lead to incorrect conclusions. Despite their different characteristics, we show that the overall revenue for the two scenarios is always within a factor of 2 of each other, even in the presence of correlated bids. Moreover, we prove that the eager auction dominates the lazy auction on revenue whenever the bidders are independent or symmetric. We complement our theoretical results with simulations on real world data that show that even suboptimally set eager reserve prices are preferred from a revenue standpoint. | Paes REF consider second price auctions and study the question of computing the optimal personalized reserve prices in a correlated distribution setting, and they show that the problem is NP-complete. | 16885191 | A Field Guide to Personalized Reserve Prices | {
"venue": "Proceedings of the 25th International Conference on World Wide Web - WWW '16",
"journal": "Proceedings of the 25th International Conference on World Wide Web - WWW '16",
"mag_field_of_study": [
"Mathematics",
"Computer Science"
]
} |
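The lazy/eager distinction above is easy to pin down in code: in the lazy auction the highest bidder wins first and is then held to her personal reserve, while in the eager auction bidders below their reserves are discarded before the winner is chosen among the rest. The toy simulation below (made-up bids and reserves, not tied to the paper's experiments) shows the two rules can produce different outcomes and revenue.

```python
def lazy_second_price(bids, reserves):
    """Pick the overall highest bidder first, then apply her reserve."""
    winner = max(bids, key=bids.get)
    others = [b for i, b in bids.items() if i != winner]
    price = max(reserves[winner], max(others, default=0.0))
    return (winner, price) if bids[winner] >= price else (None, 0.0)

def eager_second_price(bids, reserves):
    """Discard bidders below their reserves, then run second price."""
    alive = {i: b for i, b in bids.items() if b >= reserves[i]}
    if not alive:
        return None, 0.0
    winner = max(alive, key=alive.get)
    others = [b for i, b in alive.items() if i != winner]
    price = max(reserves[winner], max(others, default=0.0))
    return winner, price

bids = {"a": 10.0, "b": 8.0, "c": 6.0}
reserves = {"a": 12.0, "b": 5.0, "c": 1.0}   # a's reserve exceeds her own bid
print("lazy :", lazy_second_price(bids, reserves))    # no sale
print("eager:", eager_second_price(bids, reserves))   # b wins at price 6
```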
We present an extension of the linear time, time-bounded, Signal Temporal Logic to describe spatio-temporal properties. We consider a discrete location/ patch-based representation of space, with a population of interacting agents evolving in each location and with agents migrating from one patch to another one. We provide both a boolean and a quantitative semantics to this logic. We then present monitoring algorithms to check the validity of a formula, or to compute its satisfaction (robustness) score, over a spatiotemporal trace, exploiting these routines to do statistical model checking of stochastic models. We illustrate the logic at work on an epidemic example, looking at the diffusion of a cholera infection among communities living along a river. ological ones [24] , and in the design of new systems [5] . This is due to their ability to easily express complex temporal behavioural patterns, and the availability of efficient model checking and monitoring tools, for many classes of mathematical models, ranging from ODEs [28] to stochastic processes [4, 5] . | In REF , an extension of Signal Temporal Logic is proposed to verify properties in continuous-time, discrete-space systems. | 3154105 | Specifying and Monitoring Properties of Stochastic Spatio-Temporal Systems in Signal Temporal Logic | {
"venue": "VALUETOOLS",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
Revolutions often spawn counterrevolutions and the efficient market hypothesis in finance is no exception. The intellectual dominance of the efficient-market revolution has more been challenged by economists who stress psychological and behavioral elements of stock-price determination and by econometricians who argue that stock returns are, to a considerable extent, predictable. This survey examines the attacks on the efficient-market hypothesis and the relationship between predictability and efficiency. I conclude that our stock markets are more efficient and less predictable than many recent academic papers would have us believe. 2 A generation ago, the efficient market hypothesis was widely accepted by academic financial economists; for example, see Eugene Fama's (1970) influential survey article, "Efficient Capital Markets." It was generally believed that securities markets were extremely efficient in reflecting information about individual stocks and about the stock market as a whole. The accepted view was that when information arises, the news spreads very quickly and is incorporated into the prices of securities without delay. Thus, neither technical analysis, which is the study of past stock prices in an attempt to predict future prices, nor even fundamental analysis, which is the analysis of financial information such as company earnings, asset values, etc., to help investors select "undervalued" stocks, would enable an investor to achieve returns greater than those that could be obtained by holding a randomly selected portfolio of individual stocks with comparable risk. The efficient market hypothesis is associated with the idea of a "random walk," which is a term loosely used in the finance literature to characterize a price series where all subsequent price changes represent random departures from previous prices. The logic of the random walk idea is that if the flow of information is unimpeded and information is immediately reflected in stock prices, then tomorrow's price change will reflect only tomorrow's news and will be independent of the price changes today. But news is by definition unpredictable and, thus, resulting price changes must be unpredictable and random. As a result, prices fully reflect all known information, and even uninformed investors buying a diversified portfolio at the tableau of prices given by the market will obtain a rate of return as generous as that achieved by the experts. The way I put it in my book, A Random Walk Down Wall Street, first published in 1973, a blindfolded chimpanzee throwing darts at the Wall Street Journal could select a portfolio that would do as well as the experts. Of course, the advice was not literally to throw darts but instead to throw a towel over the stock pages -that is, to buy a broadbased index fund that bought and held all the stocks in the market and that charged very low expenses. By the start of the twenty-first century, the intellectual dominance of the efficient market hypothesis had become far less universal. Many financial economists and statisticians began to believe that stock prices are at least partially predictable. A new breed of economists emphasized psychological and behavioral elements of stock-price determination, and came to believe that future stock prices are somewhat predictable on the basis of past stock price patterns as well as certain "fundamental" valuation metrics. 
Moreover, many of these economists were even making the far more controversial claim that these predictable patterns enable investors to earn excess risk-adjusted rates of return. This paper examines the attacks on the efficient market hypothesis and the belief that stock prices are partially predictable. While I make no attempt to present a complete survey of the purported regularities or anomalies in the stock market, I will describe the major statistical findings as well as their behavioral underpinnings, where relevant, and also examine the relationship between predictability and efficiency. I will also describe the major arguments of those who believe that markets are often irrational by analyzing the "crash of 1987," the "Internet bubble" of the fin de siecle, and other specific irrationalities often mentioned by critics of efficiency. I conclude that our stock markets 4 are far more efficient and far less predictable than some recent academic papers would have us believe. Moreover, the evidence is overwhelming that whatever anomalous behavior of stock prices may exist, it does not create a portfolio trading opportunity that enables investors to earn extraordinary risk adjusted returns. At the outset, it is important to make clear what I mean by the term "efficiency". I will use as a definition of efficient financial markets that they do not allow investors to earn above-average returns without accepting above-average risks. A well-known story tells of a finance professor and a student who come across a $100 bill lying on the ground. As the student stops to pick it up, the professor says, "Don't bother-if it were really a $100 bill, it wouldn't be there." The story well illustrates what financial economists usually mean when they say markets are efficient. Markets can be efficient in this sense even if they sometimes make errors in valuation, as was certainly true during the 1999-early 2000 internet bubble. Markets can be efficient even if many market participants are quite irrational. Markets can be efficient even if stock prices exhibit greater volatility than can apparently be explained by fundamentals such as earnings and dividends. Many of us economists who believe in efficiency do so because we view markets as amazingly successful devices for reflecting new information rapidly and, for the most part, accurately. Above all, we believe that financial markets are efficient because they don't allow investors to earn above-average risk-adjusted returns. In short, we believe that $100 bills are not lying around for the taking, either by the professional or the amateur investor. What I do not argue is that the market pricing is always perfect. After the fact, we know that markets have made egregious mistakes as I think occurred during the recent 5 Internet bubble. Nor do I deny that psychological factors influence securities prices. But I am convinced that Benjamin Graham (1965) was correct in suggesting that while the stock market in the short run may be a voting mechanism, in the long run it is a weighing mechanism. True value will win out in the end. And before the fact, there is no way in which investors can reliably exploit any anomalies or patterns that might exist. I am skeptical that any of the "predictable patterns" that have been documented in the literature were ever sufficiently robust so as to have created profitable investment opportunities and after they have been discovered and publicized, they will certainly not allow investors to earn excess returns. 
In this section, I review some of the patterns of possible predictability suggested by studies of the behavior of past stock prices. The original empirical work supporting the notion of randomness in stock prices looked at such measures of short-run serial correlations between successive stock-price changes. In general, this work supported the view that the stock market has no memorythe way a stock price behaved in the past is not useful in divining how it will behave in the future; for example, see the survey of articles contained in Cootner (1964) . More recent work by Lo and MacKinlay (1999) finds that short-run serial correlations are not zero and that the existence of "too many" successive moves in the same direction enable 6 them to reject the hypothesis that stock prices behave as random walks. There does seem to be some momentum in short-run stock prices. Moreover, Lo, Mamaysky and Wang (2000) also find, through the use of sophisticated nonparametric statistical techniques that can recognize patterns, some of the stock-price signals used by "technical analysts" such as "head and shoulders" formations and "double bottoms", may actually have some modest predictive power. Economists and psychologists in the field of behavioral finance find such shortrun momentum to be consistent with psychological feedback mechanisms. Individuals see a stock price rising and are drawn into the market in a kind of "bandwagon effect." For example, Shiller (2000) describes the rise in the U.S. stock market during the late 1990s as the result of psychological contagion leading to irrational exuberance. The behavioralists offered another explanation for patterns of short-run momentum -a tendency for investors to underreact to new information. If the full impact of an important news announcement is only grasped over a period of time, stock prices will exhibit the positive serial correlation found by investigators. As behavioral finance became more prominent as a branch of the study of financial markets, momentum, as opposed to randomness, seemed reasonable to many investigators. However, there are several factors that should prevent us from interpreting the empirical results reported above as an indication that markets are inefficient. First, while the stock market may not be a mathematically perfect random walk, it is important to distinguish statistical significance from economic significance. The statistical dependencies giving rise to momentum are extremely small and are not likely to permit investors to realize excess returns. Anyone who pays transactions costs is unlikely to 7 fashion a trading strategy based on the kinds of momentum found in these studies that will beat a buy-and-hold strategy. Indeed, Odean (1999) suggests that momentum investors do not realize excess returns. Quite the opposite -a sample of such investors suggests that such traders did far worse than buy-and-hold investors even during a period where there was clear statistical evidence of positive momentum. This is so because of the large transactions costs involved in attempting to exploit whatever momentum exists. Similarly, David Lesmond, Michael Schill, and Chunsheng Zhou (2001) find that the transactions costs involved in undertaking standard "relative strength" strategies are not profitable because of the trading costs involved in their execution. 
Second, while behavioural hypotheses about bandwagon effects and underreaction to new information may sound plausible enough, the evidence that such effects occur systematically in the stock market is often rather thin. For example, Eugene Fama (1998) surveys the considerable body of empirical work on "event studies" that seeks to determine if stock prices respond efficiently to information. The "events" include such announcements as earnings surprises, stock splits, dividend actions, mergers, new exchange listings, and initial public offerings. Fama finds that apparent underreaction to information is about as common as overreaction, and post-event continuation of abnormal returns is as frequent as post-event reversals. He also shows that many of the return "anomalies" arise only in the context of some very particular model, and that the results tend to disappear when exposed to different models for expected "normal" returns, different methods to adjust for risk, and when different statistical approaches are used to measure them. For example, a study, which gives equal-weight to post-announcement returns of many stocks, can produce different results 8 from a study that weight the stocks according to their value. Certainly, whatever momentum displayed by stock prices does not appear to offer investors a dependable way to earn abnormal returns. The key factor is whether any patterns of serial correlation are consistent over time. Momentum strategies, which refer to buying stocks that display positive serial correlation and/or positive relative strength, appeared to produce positive relative returns during some periods of the late 1990s but highly negative relative returns during 2000. It is far from clear that any stock-price patterns are useful for investors in fashioning an investment strategy that will dependably earn excess returns. Many predictable patterns seem to disappear after they are published in the finance literature. As Schwert (2001) points out, there are two possible explanations for such a pattern. One explanation may be that researchers are always sifting through mountains of financial data. Their normal tendency is to focus on results that challenge perceived wisdom, and every now and again, a combination of a certain sample and a certain technique will produce a statistically significant result that seems to challenge the efficient markets hypothesis. Alternatively, perhaps practitioners learn quickly about any true predictable pattern and exploit it to the extent that it becomes no longer profitable. My own view is that such apparent patterns were never sufficiently large or stable to guarantee consistently superior investment results and certainly such patterns will never be useful for investors after they have received considerable publicity. The so-called January effect, for example, seems to have disappeared soon after it was discovered. 9 Long-run Return Reversals In the short-run, when stock returns are measured over periods of days or weeks, the usual argument against market efficiency is that some positive serial correlation exists. But many studies have shown evidence of negative serial correlation -that is, return reversals --over longer holding periods. For example, Fama and French (1988) found that 25 to 40 percent of the variation in long holding period returns can be predicted in terms of a negative correlation with past returns. Similarly, Poterba and Summers (1988) found substantial mean reversion in stock market returns at longer horizons. 
Some studies have attributed this forecastability to the tendency of stock market prices to "overreact." DeBondt and Thaler (1995) , for example, argue that investors are subject to waves of optimism and pessimism that cause prices to deviate systematically from their fundamental values and later to exhibit mean reversion. They suggest that such overreaction to past events is consistent with the behavioral decision theory of Kahneman and Tversky (1982), where investors are systematically overconfident in their ability to forecast either future stock prices or future corporate earnings. These findings give some support to investment techniques that rest on a "contrarian" strategy, that is, buying the stocks, or groups of stocks, that have been out of favor for long periods of time and avoiding those stocks that have had large run-ups over the last several years. There is indeed considerable support for long-run negative serial correlation in stock returns. However, the finding of mean reversion is not uniform across studies and is quite a bit weaker in some periods than it is for other periods. Indeed, the strongest empirical results come from periods including the Great Depression -which may be a 10 time with patterns that do not generalize well. Moreover, such return reversals for the market as a whole may be quite consistent with the efficient functioning of the market since they could result, in part, from the volatility of interest rates and the tendency of interest rates to be mean reverting. Since stock returns must rise or fall to be competitive with bond returns, there is a tendency when interest rates go up for prices of both bond and stocks to go down, and as interest rates go down for prices of bonds and stocks to go up. If interest rates mean revert over time, this pattern will tend to generate return reversals, or mean reversion, in a way that is quite consistent with the efficient functioning of markets. Moreover, it may not be possible to profit from the tendency for individual stocks to exhibit patterns of return reversals. Fluck, Malkiel and Quandt (1997) simulated a strategy of buying stocks over a 13-year period during the 1980s and early 1990s that had particularly poor returns over the past three to five years. They found that stocks with very low returns over the past three to five years had higher returns in the next period, and that stocks with very high returns over the past three to five years had lower returns in the next period. Thus, they confirmed the very strong statistical evidence of return reversals. However, they also found that returns in the next period were similar for both groups, so they could not confirm that a contrarian approach would yield higher-thanaverage returns. There was a statistically strong pattern of return reversal, but not one that implied an inefficiency in the market that would enable investors to make excess returns. Seasonal and Day-of-the-Week Patterns | Empirical research suggested that market prices can be partially predicted REF . | 18707992 | The Efficient Market Hypothesis and Its Critics | {
"venue": null,
"journal": "Journal of Economic Perspectives",
"mag_field_of_study": [
"Economics"
]
} |
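The survey above repeatedly appeals to short-run serial correlation in returns as the statistical basis for momentum claims. Purely as a toy illustration (synthetic data, no transaction costs, no risk adjustment), the sketch below estimates the lag-1 autocorrelation of simulated daily returns that contain a small momentum component.

```python
# Toy check of short-run return predictability: lag-1 autocorrelation of
# synthetic daily returns. Real studies use far longer samples, variance
# ratios, and careful adjustment for trading costs and risk.
import random

def lag1_autocorr(returns):
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns)
    cov = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, n))
    return cov / var

if __name__ == "__main__":
    random.seed(0)
    # Simulated returns with a small momentum component (0.1 * yesterday).
    returns, prev = [], 0.0
    for _ in range(2000):
        r = 0.1 * prev + random.gauss(0, 0.01)
        returns.append(r)
        prev = r
    print("estimated lag-1 autocorrelation:", round(lag1_autocorr(returns), 3))
```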
Binary code reuse is the process of automatically identifying the interface and extracting the instructions and data dependencies of a code fragment from an executable program, so that it is self-contained and can be reused by external code. Binary code reuse is useful for a number of security applications, including reusing the proprietary cryptographic or unpacking functions from a malware sample and for rewriting a network dialog. In this paper we conduct the first systematic study of automated binary code reuse and its security applications. The main challenge in binary code reuse is understanding the code fragment's interface. We propose a novel technique to identify the prototype of an undocumented code fragment directly from the program's binary, without access to source code or symbol information. Further, we must also extract the code itself from the binary so that it is self-contained and can be easily reused in another program. We design and implement a tool that uses a combination of dynamic and static analysis to automatically identify the prototype and extract the instructions of an assembly function into a form that can be reused by other C code. The extracted function can be run independently of the rest of the program's functionality and shared with other users. We apply our approach to scenarios that include extracting the encryption and decryption routines from malware samples, and show that these routines can be reused by a network proxy to decrypt encrypted traffic on the network. This allows the network proxy to rewrite the malware's encrypted traffic by combining the extracted encryption and decryption functions with the session keys and the protocol grammar. We also show that we can reuse a code fragment from an unpacking function for the unpacking routine for a different sample of the same family, even if the code fragment is not a complete function. | Caballero et al. REF performed the first systematic study of automatic binary code reuse and implemented BCR, which can extract binary functions and wrap it with a C interface. | 13177007 | Binary Code Extraction and Interface Identification for Security Applications | {
"venue": "NDSS",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
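A central step in the row above is identifying the prototype of an undocumented code fragment. The toy sketch below illustrates one intuition behind dynamic interface identification, namely that locations read before being written inside the fragment are parameter candidates while written locations are output candidates; the trace format is invented for the example and this is not the paper's tool.

```python
# Toy interface identification from a dynamic trace: any location the
# fragment reads before writing is treated as an input (parameter candidate);
# any location it writes is an output candidate.

def identify_interface(trace):
    inputs, written, outputs = set(), set(), set()
    for op, loc in trace:          # op is "R" (read) or "W" (write)
        if op == "R" and loc not in written:
            inputs.add(loc)        # read-before-write => supplied by the caller
        elif op == "W":
            written.add(loc)
            outputs.add(loc)
    return sorted(inputs), sorted(outputs)

if __name__ == "__main__":
    # Hypothetical execution trace of an extracted decryption routine:
    # it reads a buffer pointer and a key, then overwrites the buffer.
    trace = [
        ("R", "esp+4"),    # pointer to ciphertext buffer
        ("R", "esp+8"),    # pointer to key
        ("R", "mem:key"),
        ("R", "mem:buf"),
        ("W", "mem:buf"),  # decrypts in place
        ("W", "eax"),      # return value
    ]
    ins, outs = identify_interface(trace)
    print("inputs :", ins)
    print("outputs:", outs)
```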
The broadcast throughput in a network is defined as the average number of messages that can be transmitted per unit time from a given source to all other nodes when time goes to infinity. Classical broadcast algorithms treat messages as atomic tokens and route them from the source to the receivers by making intermediate nodes store and forward messages. The more recent network coding approach, in contrast, prompts intermediate nodes to mix and code together messages. It has been shown that certain wired networks have an asymptotic network coding gap, that is, they have asymptotically higher broadcast throughput when using network coding compared to routing. Whether such a gap exists for wireless networks has been an open question of great interest. We approach this question by studying the broadcast throughput of the radio network model which has been a standard mathematical model to study wireless communication. We show that there is a family of radio networks with a tight Θ(log log n) network coding gap, that is, networks in which the asymptotic throughput achievable via routing messages is a Θ(log log n) factor smaller than that of the optimal network coding algorithm. We also provide new tight upper and lower bounds that show that the asymptotic worst-case broadcast throughput over all networks with n nodes is Θ(1/log n) messages-per-round for both routing and network coding. | A network coding gap of Ω(log log n) on certain topologies for the radio network model was shown in REF, as was a Θ(1) worst-case gap. | 7112970 | Broadcast Throughput in Radio Networks: Routing vs. Network Coding | {
"venue": null,
"journal": null,
"mag_field_of_study": [
"Computer Science",
"Mathematics"
]
} |
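The abstract above contrasts routing and network coding broadcast throughput in the radio network model. The sketch below is only a toy in that direction: it simulates store-and-forward flooding of a single message under the usual collision rule (a node receives only when exactly one neighbor transmits) and turns the round count into a crude routing-throughput estimate. The random graph, probabilities, and parameters are assumptions, and nothing here reproduces the paper's constructions or bounds.

```python
# Toy routing (store-and-forward) broadcast of one message in the radio
# network model: each round, every informed node transmits with probability p,
# and an uninformed node receives only if exactly one neighbor transmits.
import random

def broadcast_rounds(adj, source, p=0.3, seed=1):
    random.seed(seed)
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        transmitting = {v for v in informed if random.random() < p}
        newly = set()
        for u in set(adj) - informed:
            hits = [v for v in adj[u] if v in transmitting]
            if len(hits) == 1:          # success only without collision
                newly.add(u)
        informed |= newly
    return rounds

if __name__ == "__main__":
    # Small random tree on n nodes (connected by construction).
    n = 60
    random.seed(7)
    adj = {v: set() for v in range(n)}
    for v in range(1, n):
        u = random.randrange(v)          # attach to a random earlier node
        adj[u].add(v)
        adj[v].add(u)
    r = broadcast_rounds(adj, source=0)
    print(f"{r} rounds for 1 message => routing throughput ~ {1.0 / r:.3f} messages/round")
```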
Investigations of transcript levels on a genomic scale using hybridization-based arrays led to formidable advances in our understanding of the biology of many human illnesses. At the same time, these investigations have generated controversy, because of the probabilistic nature of the conclusions, and the surfacing of noticeable discrepancies between the results of studies addressing the same biological question. In this article we present simple and effective data analysis and visualization tools for gauging the degree to which the finding of one study are reproduced by others, and for integrating multiple studies in a single analysis. We describe these approaches in the context of studies of breast cancer, and illustrate that it is possible to identify a substantial, biologically relevant subset of the human genome within which hybridization results are reproducible. The subset generally varies with the platforms used, the tissues studied, and the populations being sampled. Despite important differences, it is also possible to develop simple expression measures that allow comparison across platforms, studies, labs and populations. Important biological signal is often preserved or enhanced. Cross-study validation and combination of microarray results requires careful, but not overly complex, statistical thinking, and can become a routine component of genomic analysis. | However, as emphasized in REF , the investigations of gene expression levels have also generated controversy because of the probabilistic nature of the conclusions and the discrepancies between the results of the studies addressing the same biological question. | 39601211 | Cross-study validation and combined analysis of gene expression microarray data | {
"venue": "Biostatistics",
"journal": "Biostatistics",
"mag_field_of_study": [
"Biology",
"Medicine"
]
} |
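The row above is about gauging whether hybridization results reproduce across studies. The sketch below shows one simple reproducibility gauge in that spirit: for each gene, it compares the gene's profile of correlations with all other genes in one study against the same profile in a second study, on synthetic data. It illustrates the general idea rather than the paper's exact procedure.

```python
# Per-gene cross-study reproducibility sketch: compare each gene's vector of
# correlations with all other genes in study A to the corresponding vector in
# study B. Genes whose correlation profiles agree across studies score high.
import numpy as np

def gene_gene_correlations(expr):
    """expr: genes x samples matrix -> genes x genes correlation matrix."""
    return np.corrcoef(expr)

def reproducibility_scores(expr_a, expr_b):
    ca, cb = gene_gene_correlations(expr_a), gene_gene_correlations(expr_b)
    g = ca.shape[0]
    scores = np.empty(g)
    for i in range(g):
        mask = np.arange(g) != i          # drop the trivial self-correlation
        scores[i] = np.corrcoef(ca[i, mask], cb[i, mask])[0, 1]
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genes, na, nb = 50, 30, 40
    latent = rng.normal(size=(5, genes))  # shared co-expression structure
    expr_a = latent.T @ rng.normal(size=(5, na)) + 0.5 * rng.normal(size=(genes, na))
    expr_b = latent.T @ rng.normal(size=(5, nb)) + 0.5 * rng.normal(size=(genes, nb))
    s = reproducibility_scores(expr_a, expr_b)
    print("median reproducibility score:", round(float(np.median(s)), 2))
```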
As developers work on a software product they accumulate expertise, including expertise about the code base of the software product. We call this type of expertise 'implementation expertise'. Knowing the set of developers who have implementation expertise for a software product has many important uses. This paper presents an empirical evaluation of two approaches to determining implementation expertise from the data in source and bug repositories. The expertise sets created by the approaches are compared to those provided by experts and evaluated using the measures of precision and recall. We found that both approaches are good at finding all of the appropriate developers, although they vary in how many false positives are returned. | Anvik and Murphy REF did an empirical evaluation of two approaches to locate expertise. | 8998218 | Determining Implementation Expertise from Bug Reports | {
"venue": "MSR '07",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
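The row above evaluates approaches that mine source and bug repositories for implementation expertise. A minimal sketch of one such heuristic, ranking developers for a file by how often they have changed it, is shown below over an invented change log; in the paper the resulting expertise sets are compared against expert-provided ones using precision and recall.

```python
# Toy expertise heuristic: developers who have changed a file are candidates
# for implementation expertise on it, ranked by their number of changes.
# The change log below is invented for the example.
from collections import Counter, defaultdict

def build_expertise(change_log):
    """change_log: iterable of (developer, file) pairs from a source repository."""
    per_file = defaultdict(Counter)
    for dev, path in change_log:
        per_file[path][dev] += 1
    return per_file

def experts_for(per_file, path, k=3):
    return [dev for dev, _ in per_file[path].most_common(k)]

if __name__ == "__main__":
    log = [
        ("alice", "ui/editor.c"), ("alice", "ui/editor.c"),
        ("bob",   "ui/editor.c"), ("carol", "core/parser.c"),
        ("carol", "core/parser.c"), ("alice", "core/parser.c"),
    ]
    per_file = build_expertise(log)
    print("editor.c experts:", experts_for(per_file, "ui/editor.c"))
    print("parser.c experts:", experts_for(per_file, "core/parser.c"))
```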
Many concurrent programming models enable both transactional memory and message passing. For such models, researchers have built increasingly efficient implementations and defined reasonable correctness criteria, while it remains an open problem to obtain the best of both worlds. We present a programming model that is the first to have opaque transactions, safe asynchronous message passing, and an efficient implementation. Our semantics uses tentative message passing and keeps track of dependencies to enable undo of message passing in case a transaction aborts. We can program communication idioms such as barrier and rendezvous that do not deadlock when used in an atomic block. Our experiments show that our model adds little overhead to pure transactions, and that it is significantly more efficient than Transactional Events. We use a novel definition of safe message passing that may be of independent interest. | Lesani and Palsberg REF describe a semantics that combines STM with message passing through tentative message passing, which keeps track of dependencies between transactions to enable undo of message passing in case a transaction aborts. | 2080040 | Communicating memory transactions | {
"venue": "PPoPP '11",
"journal": null,
"mag_field_of_study": [
"Computer Science"
]
} |
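The key mechanism in the row above is tentative message passing: sends performed inside a transaction only become visible when the transaction commits and are discarded when it aborts. The single-threaded toy below illustrates just that buffering behaviour; the class names are assumptions, and a real implementation would also track dependencies between transactions that exchange tentative messages, as the abstract notes.

```python
# Toy illustration of tentative message passing: sends made inside a
# transaction are buffered and only delivered to the channel on commit;
# an abort discards them, so no message leaks from an aborted transaction.
from collections import deque

class Channel:
    def __init__(self):
        self.queue = deque()

class Transaction:
    def __init__(self):
        self.pending = []            # (channel, message) pairs, not yet visible

    def send(self, channel, message):
        self.pending.append((channel, message))

    def commit(self):
        for channel, message in self.pending:
            channel.queue.append(message)   # sends become visible atomically
        self.pending.clear()

    def abort(self):
        self.pending.clear()                # tentative sends are undone

if __name__ == "__main__":
    ch = Channel()
    t1 = Transaction()
    t1.send(ch, "hello")
    t1.abort()
    print("after abort :", list(ch.queue))   # [] -- nothing leaked

    t2 = Transaction()
    t2.send(ch, "world")
    t2.commit()
    print("after commit:", list(ch.queue))   # ['world']
```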
For years social psychologists have exalted the power of the situation. We comfortably acknowledge that across different situations, with different people, we may act in a range of ways, or even talk using a variety of styles. Aware of this tendency, Gergen (1972) began his explorations of our shifting masks of identity. In writing letters to close friends, he realized that he came across as a "completely different person" in each letter. "In one, I was morose, pouring out a philosophy of existential sorrow; in another I was a lusty realist: in a third I was a lighthearted jokester" (p. 32). Based merely on his word choices, Gergen inadvertently varied his style to adapt to the recipients of his letters. This is a prime demonstration of our inherent knowledge of the mutability of our language with respect to varying social contexts. Intuitively when we interact with others, we adapt to them across a wide range of behaviors, especially language. When two people are talking, their communicative behaviors are patterned and coordinated, like a dance. The nonverbal literature suggests that coordination may be a fundamental aspect of human behavior; most facets of communication, such as facial expression, nonverbal vocal behavior, kinesics, visual behavior and proxemics are coordinated (Harper, Wiens, & Matarazzo, 1978) . In this article, we explore the degree to which two people in conversation coordinate by matching their word use. Linguistic research originated searching for a set of rules to combine morphemes into sentences. More recently, linguistic research has attempted to ascertain a similar syntax or grammar of conversation (Clarke, 1983) . Key to the assumption that there are rules governing all possible conversations is the definition of conversation as jointly managed (Slugoski & Hilton, 2001) . Research devoted to this subject has succeeded in uncovering structural regularities not particular to the word level: categories of speech act types such as "questions," "gives orientation," and others that neglect the nuances of actual conversation. Furthermore, they must be coded by human judges. Similar to nonverbal coordination, our definition of linguistic style matching (LSM) assumes that the words one person uses covary with those the other person uses on both a turn-by-turn level and on the broader conversational level (Cappella, 1996) . However, because the language the interactants use is coordinated and reciprocal, it is often not clear who is leading or following. We propose that the words one speaker uses prime the listener to respond in a specific way. In this fashion, an interactant is influenced by her partner's language at the word level in natural conversation in the same way one's nonverbal behavior can be influenced by another's movement (Chartrand & Bargh, 1999) . We are not proposing a temporal synchrony amongst conversants' language, yet the theoretical underpinnings of this research are undoubtedly related to the nonverbal communication's conception of synchrony. Research on synchronized interactions was strongly influenced by Condon and Ogston's (1966; McDowall, 1978) initial work on behavioral entrainment. Through sound film microanalyses of speaking and listening behavior between mothers and infants, Condon and Ogston concluded synchrony was a fundamental, universal characteristic of human communication. Condon (1982) later suggested that individual differences in synchrony could be diagnostic of psychopathology. 
In his original studies, an absence of synchrony was observed in people with dyslexia and other learning disabilities (Condon, 1982) . Since then, research has continued to look primarily at physical, nonverbal behavior (gestures and postural behavior), affect, attitudes, and biological rhythms. Synchrony is defined as the matching of behaviors, the adoption of similar behavioral rhythms, the manifestation of simultaneous movement and the interrelatedness of individual behaviors (Bernieri & Rosenthal, 1991) . Research has shown synchrony to be related to positive affect in interactions (Bernieri, Reznick, & Rosenthal, 1988) and interpersonal liking and smoothness of interactions (Chartrand & Bargh, 1999) . We hypothesized that interpersonal synchrony could analogously occur in a powerful form at the word level. Pennebaker and King (1999) demonstrated that the language people use to convey their thoughts and feelings is demonstrative of individual differences in selfexpression and is reliable across time and situation. Based on the idea that language provides insight into the ways individuals perceive the world, if people are matched in their linguistic styles, this would signify that they are in harmony in the ways they organize their psychological worlds. According to Byrne's (1971) similarity-liking hypothesis, this similarity in life-orientation could potentially lead to a more profound bond between them. Beyond establishing the degree of matching in word use, a second goal of the present study was to explore how it is related to the success or failure of the conversation. Many studies relevant to the present investigation have documented increased levels of attraction between unacquainted dyads that exhibit more coordination (as compared to dyads that are "not coordinated") on various nonverbal behaviorsincluding head movement, vocal activity (not verbal), facial expressions, and postural mirroring (Bernieri, Davis, Rosenthal, & Knee, 1994; Burgoon, Stern, & Dillman, 1991; Chartrand & Bargh, 1999; Hatfield, Cacioppo, & Rapson, 1994 ; for detailed reviews see Cappella, 1997) . According to the coordination-rapport hypothesis (Tickle-Degnen & Rosenthal, 1987) , attraction, satisfaction, attachment, longevity, and rapport should be positively correlated with "coordinated" interaction patterns. In theory, these findings should generalize to our own studies, leading to the prediction that LSM should correlate with liking, rapport, and social integration among the interactants. Particularly relevant to our predictions concerning LSM is Giles's Communication Accommodation Theory (CAT) . According to CAT, individuals adapt to each other's communicative behaviors to promote social approval or communication efficiency. The premise of the theory rests in individuals' ability to strategically negotiate the social distance between themselves and their interacting partners: creating, maintaining, or decreasing that distance (Shepard, Giles, & Le Poire, 2001 ). This can be done linguistically, paralinguistically, and nonverbally: for example, varying speech style, rate, pitch, or gaze. One specific strategy an individual can use is convergence, which involves modifications of accents, idioms, dialects, and Niederhoffer, Pennebaker / LINGUISTIC STYLE MATCHING 339 code-switching to become more similar to an interaction partner (see Giles & Smith, 1979 (Pennebaker, Francis, & Booth, 2001 ). 
LIWC analyzes one or more text files on a word-by-word basis comparing each word in a given file to 2,290 words and word stems in an internal dictionary. The words in the internal dictionary have been rated by groups of judges as representing a variety of different psychological or linguistic dimensions. The word categories include standard linguistic measures such as word count, pronouns, and articles; psychological processes, such as affective or emotional, cognitive, and sensory processes; categories that tap references to space, time, and motion; and a group of dimensions that measure a variety of personal concerns including references to sex, death, television, and occupation (for a more complete review see Pennebaker et al. It should be noted that LIWC is capable of analyzing a conversation between two people in at least three ways: all the words within the entire conversation (based on one large file); the separate language use of each interactant for the conversation (based on one file for each speaker), and language use for each person for each turn of the conversation (e.g., 50 turns in a conversation would yield 100 separate files). No studies to date have been able to quantify the degree to which word use is coordinated nor have studies shown that linguistic dimensions may be serve as the best markers of coordination. The primary goal of these studies, then, was simply to determine the psychometric properties of language in ongoing interactions. A secondary goal of the current project was to learn the degree to which LSM reflected perceptions of rapport or "clicking." If an interaction among relative strangers goes well, we might see this in the ways the two are showing comparable word use. If we detect unmatched patterns of language between two people, we might deduce conflict within the interaction. CAT might predict this is another form of convergence that leads to satisfaction and quality of communication (see Giles & Smith, 1979) . The coordination-rapport hypothesis might similarly predict LSM to signify mutual adaptation and result in positive rapport; additionally, expectancy violations theory (Burgoon, 1993) To test these ideas, we conducted three experiments analyzing the words individuals used in two-person interactions. The first two experiments were laboratory studies wherein strangers got to know one another by interacting in live computer chat rooms. The third study was an archival analysis of 15 of the original Watergate transcripts secretly recorded in the White House wherein President Richard Nixon had a series of one-on-one discussions with H. R. Haldeman, John Erlichman, or John Dean. These natural and historic interactions allowed us to compare LSM among adult speakers and, unlike the lab studies, allowed us to examine conversational leadership. Because the first two studies relied on very similar methodologies, the methods and results will be presented together before introducing the Watergate study. In the first two experiments, college students were recruited to participate in an ongoing computer-based chat interaction in laboratories in the Department of Psychology. The first experiment sought to establish the degree to which turn length and word use was related to the quality of an ongoing computerbased chat interaction between two strangers. 
In addition, we sought to learn if having an anonymous screen name, as is common in naturalistic Internet chat-room use, would result in different types of interactions than having a screen name identifying the interactant's real name. A total of 130 Introductory Psychology students at the University of Texas at Austin (52 men and 78 women, mean age = 20.8 years) participated in the study as part of an Introductory Psychology experimental option. Individuals were randomly assigned chat partners, resulting in 28 mixed sex, 25 all female, and 12 all male conversational dyads. Three conversations (2 all male and 1 all female dyads) were not included in the analyses due to computer errors during the study, thus resulting in 62 dyads. Niederhoffer, Pennebaker / LINGUISTIC STYLE MATCHING 341 Participants initially signed up for one of two group experiments that were scheduled at the same time in different rooms. On arrival to one of the two rooms, students were randomly directed to one of several desk computers. Unbeknownst to the participants, each computer was directly connected to a computer in the other lab. The computer pairs were connected via a private chat-room software program (available for download at http://tucows.wau.nl/circ95.html). The privately licensed chat program is a multiple application program enabling private virtual communities with live interaction. Participants were told that they would be chatting with a person on another part of campus but would not meet this person. Measures were taken to assure that participants did not see each other before the experiment began. After consenting to participate in the experiment, each participant received a brief demographic survey with 10 questions regarding levels of experience with and usage of computers and Internet chat rooms. After being logged on to their computers, half the participants were randomly assigned (on the computer screen) to enter their real name, whereas the other half were given the opportunity to invent "a screen name of their choice." Both members of each dyad were in the same real name/invented name condition. After approximately 45 minutes, participants completed the Interaction Rating Questionnaire (IRQ). This scale contains 3 items forming the "click index" as well as 12 exploratory items assessing the degree to which participants enjoyed the conversation, and various measures of their comfort level. The click index was based on the degree to which participants felt the interaction went smoothly, they felt comfortable during the interaction, and they truly got to know the other participant. After completing the questionnaire, participants were debriefed and thanked for their participation. The transcripts of the interactions were saved and ultimately printed with all identifying information removed. Independent judges rated the transcripts using a modified version of the IRQ (IRQ-Judge). The IRQ-Judge includes questions similar to the IRQ, including the same three questions forming the click index as well as items regarding the perceived levels of fluidity, liveliness, and perceived enjoyment. Experiment 2 involved 32 (21 male, 11 female) beginning college students. Mean age was 18.2 years. Because participants were run in groups of four, speaking to each of the other three people in the group for 15 minutes each, data from a total of 48 computer chat conversations were collected. Overall, 19 interactions were mixed gender, 22 were male-male, and 7 were female-female. 
Data from all interactions were included in the analyses. Individuals signed up for experiments in groups of four, with the understanding that they not know any other potential participants in their time slot. On arrival at the lab, prior to any interactions, participants were escorted into separate cubicles with individual computers in a laboratory suite. Each computer was running Microsoft Chat Software (downloaded from www.microsoftchat.com), which allowed the experimenter to create separate chat rooms that only two of the participants could enter during any 15-minute interaction period. After being seated in the lab cubicles, participants were assigned an identification number as their screen name so as to prevent recognition by another participant. Each participant interacted with the other three participants for 15 minutes each. The students were individually instructed to "try to get to know the other participant." There were no limitations on conversation content. At the end of each 15-minute period, the participants completed a brief questionnaire and the experimenter reconfigured their software programs to be certain that they would be interacting with a different participant during the next 15-minute interaction period. After each conversation, participants completed a 10-item Interaction Rating Questionnaire (IRQ) (see description in Experiment 1). After the final interaction, participants were debriefed en masse, thanked, and excused. As in Experiment 1, all the transcripts of the 48 interactions were saved, and after removing any identifying information, were printed. Four independent judges rated the individual transcripts of the chatroom interactions using a modified version of the Interaction Rating Questionnaire (IRQ-Judge) (see Experiment 1). Niederhoffer, Pennebaker / LINGUISTIC STYLE MATCHING 343 The results from Experiments 1 and 2 are divided into four different categories. In the first section, we discuss the basic features of the conversations. The second section summarizes the basic psychometric properties of the self-reports and judges' ratings of interaction quality, or clicking. The next section focuses on the psychometric aspects of language. We conclude with the comparison of click ratings with the various linguistic elements hypothesized to be related to clicking. | It is well known that conversation partners become more linguistically similar to each other as their dialogue evolves, via many aspects such as lexical, syntactic, as well as acoustic characteristics REF Levitan et al., 2011) . | 16228951 | Linguistic Style Matching In Social Interaction | {
"venue": null,
"journal": "Journal of Language and Social Psychology",
"mag_field_of_study": [
"Psychology"
]
} |
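The studies above quantify how far two speakers match in their word use. The sketch below computes a simple style-matching score from function-word category rates, averaging 1 - |p1 - p2| / (p1 + p2) over categories, in the spirit of later LSM formulations; the tiny word lists stand in for the proprietary LIWC dictionaries and, like the handling of unused categories, are assumptions of the example.

```python
# Sketch of a linguistic style matching (LSM) score between two speakers:
# for each function-word category, compare the speakers' usage rates with
# 1 - |p1 - p2| / (p1 + p2), then average over categories.
CATEGORIES = {
    "pronouns":     {"i", "you", "we", "he", "she", "they", "it"},
    "articles":     {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "to", "with", "for"},
    "negations":    {"not", "no", "never"},
}

def category_rates(text):
    words = text.lower().split()
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

def lsm_score(text_a, text_b):
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = []
    for cat in CATEGORIES:
        pa, pb = ra[cat], rb[cat]
        if pa + pb == 0:
            continue                  # category unused by both speakers
        scores.append(1 - abs(pa - pb) / (pa + pb))
    return sum(scores) / len(scores) if scores else 1.0

if __name__ == "__main__":
    a = "I think we should go to the park in the morning"
    b = "You know I would love a walk in the park with you"
    print("LSM:", round(lsm_score(a, b), 2))
```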
Relationships among objects play a crucial role in image understanding. Despite the great success of deep learning techniques in recognizing individual objects, reasoning about the relationships among objects remains a challenging task. Previous methods often treat this as a classification problem, considering each type of relationship (e.g. "ride") or each distinct visual phrase (e.g. "person-ride-horse") as a category. Such approaches are faced with significant difficulties caused by the high diversity of visual appearance for each kind of relationship or the large number of distinct visual phrases. We propose an integrated framework to tackle this problem. At the heart of this framework is the Deep Relational Network, a novel formulation designed specifically for exploiting the statistical dependencies between objects and their relationships. On two large data sets, the proposed method achieves substantial improvement over the state of the art. | Dai et al. REF exploit the statistical dependencies between objects and their relationships. | 2634827 | Detecting Visual Relationships with Deep Relational Networks | {
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"mag_field_of_study": [
"Computer Science"
]
} |
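The abstract above argues for scoring predicates jointly with the participating objects instead of treating every visual phrase as its own class. The PyTorch sketch below shows that general flavour, a predicate head that conditions on subject, object, and union-region features together; it is not the Deep Relational Network itself, and the feature dimensions, names, and number of predicates are assumptions.

```python
# Minimal predicate classifier conditioning jointly on subject, object, and
# union-box appearance features -- the general shape of relationship
# detection, not the paper's Deep Relational Network.
import torch
import torch.nn as nn

class PredicateHead(nn.Module):
    def __init__(self, feat_dim=256, num_predicates=70):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * feat_dim, 512),   # subject + object + union features
            nn.ReLU(),
            nn.Linear(512, num_predicates),
        )

    def forward(self, subj_feat, obj_feat, union_feat):
        joint = torch.cat([subj_feat, obj_feat, union_feat], dim=-1)
        return self.mlp(joint)              # unnormalised predicate scores

if __name__ == "__main__":
    head = PredicateHead()
    batch = 4
    s = torch.randn(batch, 256)             # e.g. pooled "person" features
    o = torch.randn(batch, 256)             # e.g. pooled "horse" features
    u = torch.randn(batch, 256)             # features of the union box
    scores = head(s, o, u)
    print(scores.shape)                     # torch.Size([4, 70])
```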