Dataset Preview
The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 7 new columns ({'similar_books', 'authors', 'publisher', 'publication_year', 'genres', 'book_id', 'description'}) and 4 missing columns ({'mag_id', 'abstract', 'category', 'label_id'}).

This happened while the csv dataset builder was generating data using

zip://Book.csv::hf://datasets/Cloudy1225/HTAG@16e6e1d0e89012dbda758ed4f88c74e470446a07/book/Book.csv.zip

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              book_id: int64
              title: string
              description: string
              publication_year: int64
              publisher: string
              authors: string
              similar_books: string
              genres: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1205
              to
              {'mag_id': Value(dtype='int64', id=None), 'title': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None), 'label_id': Value(dtype='int64', id=None), 'category': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1412, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 988, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
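A practical workaround, assuming each top-level folder of the repository (book, plus the folder holding the paper-style rows previewed below) is meant to be its own subset, is to load one archive at a time with the generic csv builder, so that files with incompatible schemas are never merged. The following is only a sketch: the path is copied from the traceback above, and "train" is simply the default split name the csv builder assigns.

    from datasets import load_dataset

    # Load only the book archive; folders with different columns are
    # never seen by the builder, so no cast error can occur.
    book = load_dataset(
        "csv",
        data_files="hf://datasets/Cloudy1225/HTAG/book/Book.csv.zip",
        split="train",
    )
    print(book.column_names)  # expected: book_id, title, description, ...

Equivalently, the repository maintainer can declare each folder as a separate configuration under a configs: field in the dataset card's YAML header, which is exactly what the manual-configuration docs linked above describe.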

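For reference, the schema of the rows shown in the preview below (the schema the builder expected, according to the cast error above) can be written as an explicit Features object. A minimal sketch, with the types copied from the error message:

    from datasets import Features, Value

    # Paper-style schema taken from the expected schema in the cast error.
    paper_features = Features({
        "mag_id": Value("int64"),
        "title": Value("string"),
        "abstract": Value("string"),
        "label_id": Value("int64"),
        "category": Value("string"),
    })

Passing features=paper_features to load_dataset makes the expected cast explicit, so files whose columns do not match fail immediately with a clear error.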

Columns: mag_id (int64), title (string), abstract (string), label_id (int64), category (string)

mag_id: 9,657,784
title: Evasion Attacks against Machine Learning at Test Time
abstract:
In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
label_id: 4
category: cs.CR

mag_id: 39,886,162
title: How Hard is Computing Parity with Noisy Communications
abstract:
We show a tight lower bound of $\Omega(N \log\log N)$ on the number of transmissions required to compute the parity of $N$ input bits with constant error in a noisy communication network of $N$ randomly placed sensors, each having one input bit and communicating with others using local transmissions with power near the connectivity threshold. This result settles the lower bound question left open by Ying, Srikant and Dullerud (WiOpt 06), who showed how the sum of all the $N$ bits can be computed using $O(N \log\log N)$ transmissions. The same lower bound has been shown to hold for a host of other functions including majority by Dutta and Radhakrishnan (FOCS 2008). Most works on lower bounds for communication networks considered mostly the full broadcast model without using the fact that the communication in real networks is local, determined by the power of the transmitters. In fact, in full broadcast networks computing parity needs $\Theta(N)$ transmissions. To obtain our lower bound we employ techniques developed by Goyal, Kindler and Saks (FOCS 05), who showed lower bounds in the full broadcast model by reducing the problem to a model of noisy decision trees. However, in order to capture the limited range of transmissions in real sensor networks, we adapt their definition of noisy decision trees and allow each node of the tree access to only a limited part of the input. Our lower bound is obtained by exploiting special properties of parity computations in such noisy decision trees.
label_id: 5
category: cs.DC

mag_id: 121,432,379
title: A Promise Theory Perspective on Data Networks
abstract:
Networking is undergoing a transformation throughout our industry. The shift from hardware driven products with ad hoc control to Software Defined Networks is now well underway. In this paper, we adopt the perspective of the Promise Theory to examine the current state of networking technologies so that we might see beyond specific technologies to principles for building flexible and scalable networks. Today's applications are increasingly distributed planet-wide in cloud-like hosting environments. Promise Theory's bottom-up modelling has been applied to server management for many years and lends itself to principles of self-healing, scalability and robustness.
label_id: 8
category: cs.NI

mag_id: 1,444,859,417
title: Webvrgis Based City Bigdata 3d Visualization and Analysis
abstract:
This paper shows the WEBVRGIS platform overlying multiple types of data about Shenzhen over a 3d globe. The amount of information that can be visualized with this platform is overwhelming, and the GIS-based navigational scheme allows great flexibility in accessing the different available data sources. For example, visualising historical and forecasted passenger volume at stations could be very helpful when overlaid with other social data.
label_id: 6
category: cs.HC

mag_id: 1,483,430,697
title: Information Theoretic Authentication and Secrecy Codes in the Splitting Model
abstract:
In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) have constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best of our knowledge, this is the first result of this type.
label_id: 4
category: cs.CR

mag_id: 1,486,601,621
title: Whealth Transforming Telehealth Services
abstract:
A worldwide increase in proportions of older people in the population poses the challenge of managing their increasing healthcare needs within limited resources. To achieve this many countries are interested in adopting telehealth technology. Several shortcomings of state-of-the-art telehealth technology constrain widespread adoption of telehealth services. We present an ensemble-sensing framework - wHealth (short form of wireless health) for effective delivery of telehealth services. It extracts personal health information using sensors embedded in everyday devices and allows effective and seamless communication between patients and clinicians. Due to the non-stigmatizing design, ease of maintenance, simplistic interaction and seamless intervention, our wHealth platform has the potential to enable widespread adoption of telehealth services for managing elderly healthcare. We discuss the key barriers and potential solutions to make the wHealth platform a reality.
label_id: 3
category: cs.CY

mag_id: 1,528,301,850
title: A Bi Level View of Inpainting Based Image Compression
abstract:
Inpainting based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic interpolation model, and it can recover an image with very high quality from only 5% pixels.
label_id: 16
category: cs.CV

mag_id: 1,542,788,159
title: Back to the Past Source Identification in Diffusion Networks From Partially Observed Cascades
abstract:
When a piece of malicious information becomes rampant in an information diffusion network, can we identify the source node that originally introduced the piece into the network and infer the time when it initiated this? Being able to do so is critical for curtailing the spread of malicious information, and reducing the potential losses incurred. This is a very challenging problem since typically only incomplete traces are observed and we need to unroll the incomplete traces into the past in order to pinpoint the source. In this paper, we tackle this problem by developing a two-stage framework, which first learns a continuous-time diffusion network model based on historical diffusion traces and then identifies the source of an incomplete diffusion trace by maximizing the likelihood of the trace under the learned model. Experiments on both large synthetic and real-world data show that our framework can effectively go back to the past, and pinpoint the source node and its initiation time significantly more accurately than previous state-of-the-arts.
label_id: 26
category: cs.SI

mag_id: 1,581,827,225
title: Homomorphic Encryption Theory and Application
abstract:
The goal of this chapter is to present a survey of homomorphic encryption techniques and their applications. After a detailed discussion on the introduction and motivation of the chapter, we present some basic concepts of cryptography. The fundamental theories of homomorphic encryption are then discussed with suitable examples. The chapter then provides a survey of some of the classical homomorphic encryption schemes existing in the current literature. Various applications and salient properties of homomorphic encryption schemes are then discussed in detail. The chapter then introduces the most important and recent research direction in the field - fully homomorphic encryption. A significant number of propositions on fully homomorphic encryption are then discussed. Finally, the chapter concludes by outlining some emerging research trends in this exciting field of cryptography.
label_id: 4
category: cs.CR

mag_id: 1,585,744,708
title: Learning Transformations for Clustering and Classification
abstract:
A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate nuclear norm, as the optimization criteria. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within subspaces, and increase separation between subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification.
label_id: 16
category: cs.CV

mag_id: 1,586,330,215
title: Methods for Integrating Knowledge with the Three Weight Optimization Algorithm for Hybrid Cognitive Processing
abstract:
In this paper we consider optimization as an approach for quickly and flexibly developing hybrid cognitive capabilities that are efficient, scalable, and can exploit knowledge to improve solution speed and quality. In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems. We propose novel methods by which to integrate knowledge with this algorithm to improve expressiveness, efficiency, and scaling, and demonstrate these techniques on two example problems (Sudoku and circle packing).
label_id: 10
category: cs.AI

mag_id: 1,591,405,962
title: Csma Local Area Networking Under Dynamic Altruism
abstract:
In this paper, we consider medium access control of local area networks (LANs) under limited-information conditions as befits a distributed system. Rather than assuming “by rule” conformance to a protocol designed to regulate packet-flow rates (e.g., CSMA windowing), we begin with a noncooperative game framework and build a dynamic altruism term into the net utility. The effects of altruism are analyzed at Nash equilibrium for both the ALOHA and CSMA frameworks in the quasistationary (fictitious play) regime. We consider either power or throughput based costs of networking, and the cases of identical or heterogeneous (independent) users/players. In a numerical study we consider diverse players, and we see that the effects of altruism for similar players can be beneficial in the presence of significant congestion, but excessive altruism may lead to underuse of the channel when demand is low.
label_id: 8
category: cs.NI

mag_id: 1,595,098,738
title: Face Frontalization for Alignment and Recognition
abstract:
Recently, it was shown that excellent results can be achieved in both face landmark localization and pose-invariant face recognition. These breakthroughs are attributed to the efforts of the community to manually annotate facial images in many different poses and to collect 3D faces data. In this paper, we propose a novel method for joint face landmark localization and frontal face reconstruction (pose correction) using a small set of frontal images only. By observing that the frontal facial image is the one with the minimum rank from all different poses we formulate an appropriate model which is able to jointly recover the facial landmarks as well as the frontalized version of the face. To this end, a suitable optimization problem, involving the minimization of the nuclear norm and the matrix $\ell_1$ norm, is solved. The proposed method is assessed in frontal face reconstruction (pose correction), face landmark localization, and pose-invariant face recognition and verification by conducting experiments on $6$ facial images databases. The experimental results demonstrate the effectiveness of the proposed method.
label_id: 16
category: cs.CV

mag_id: 1,596,723,206
title: From Bounded Affine Types to Automatic Timing Analysis
abstract:
Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this paper we introduce an affine bounded linear typing discipline on a general notion of resource which can be modeled in a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure of the semiring equational theory, and a (coherent) categorical semantics. This is a very useful type-theoretic and denotational framework for many applications to resource-sensitive compilation, and it represents a generalization of several existing type systems. As a non-trivial instance, motivated by our ongoing work on hardware compilation, we present a complex new application to calculating and controlling timing of execution in a (recursion-free) higher-order functional programming language with local store.
label_id: 22
category: cs.PL

mag_id: 1,601,434,380
title: An Efficient Way to Perform the Assembly of Finite Element Matrices in Matlab and Octave
abstract:
We describe different optimization techniques to perform the assembly of finite element matrices in Matlab and Octave, from the standard approach to recent vectorized ones, without any low level language used. We finally obtain a simple and efficient vectorized algorithm able to compete in performance with dedicated software such as FreeFEM++. The principle of this assembly algorithm is general, we present it for different matrices in the P1 finite elements case and in linear elasticity. We present numerical results which illustrate the computational costs of the different approaches
label_id: 0
category: cs.NA

mag_id: 1,607,460,189
title: Constrained Parametric Proposals and Pooling Methods for Semantic Segmentation in Rgb D Images
abstract:
We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many instances from many visual categories. Our approach is based on a parametric figure-ground intensity and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image followed by a sequential inference algorithm that integrates the proposals into a complete scene estimate. Our contributions can be summarized as proposing the following: (1) a generalization of parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time, (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels, (3) an inference procedure that can resolve conflicts in overlapping spatial partitions, and handles scenes with a large number of objects category instances, of very different scales, (4) extensive evaluation of the impact of depth, as well as the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state of the art results in the challenging NYU Depth v2 dataset, extended for RMRC 2013 Indoor Segmentation Challenge, where currently the proposed model ranks first, with an average score of 24.61% and a number of 39 classes won. Moreover, we show that by combining second-order and deep learning features, over 15% relative accuracy improvements can be additionally achieved. In a scene classification benchmark, our methodology further improves the state of the art by 24%.
label_id: 16
category: cs.CV

mag_id: 1,618,900,328
title: Regulation and the Integrity of Spreadsheets in the Information Supply Chain
abstract:
Spreadsheets provide many of the key links between information systems, closing the gap between business needs and the capability of central systems. Recent regulations have brought these vulnerable parts of information supply chains into focus. The risk they present to the organisation depends on the role that they fulfil, with generic differences between their use as modeling tools and as operational applications. Four sections of the Sarbanes-Oxley Act (SOX) are particularly relevant to the use of spreadsheets. Compliance with each of these sections is dependent on maintaining the integrity of those spreadsheets acting as operational applications. This can be achieved manually but at high cost. There are a range of commercially available off-the-shelf solutions that can reduce this cost. These may be divided into those that assist in the debugging of logic and more recently the arrival of solutions that monitor the change and user activity taking place in business-critical spreadsheets. ClusterSeven provides one of these monitoring solutions, highlighting areas of operational risk whilst also establishing a database of information to deliver new business intelligence.
label_id: 3
category: cs.CY

mag_id: 1,623,729,836
title: Reconfigurable Wireless Networks
abstract:
Driven by the advent of sophisticated and ubiquitous applications, and the ever-growing need for information, wireless networks are without a doubt steadily evolving into profoundly more complex and dynamic systems. The user demands are progressively rampant, while application requirements continue to expand in both range and diversity. Future wireless networks, therefore, must be equipped with the ability to handle numerous, albeit challenging, requirements. Network reconfiguration, considered as a prominent network paradigm, is envisioned to play a key role in leveraging future network performance and considerably advancing current user experiences. This paper presents a comprehensive overview of reconfigurable wireless networks and an in-depth analysis of reconfiguration at all layers of the protocol stack. Such networks characteristically possess the ability to reconfigure and adapt their hardware and software components and architectures, thus enabling flexible delivery of broad services, as well as sustaining robust operation under highly dynamic conditions. The paper offers a unifying framework for research in reconfigurable wireless networks. This should provide the reader with a holistic view of concepts, methods, and strategies in reconfigurable wireless networks. Focus is given to reconfigurable systems in relatively new and emerging research areas such as cognitive radio networks, cross-layer reconfiguration, and software-defined networks. In addition, modern networks have to be intelligent and capable of self-organization. Thus, this paper discusses the concept of network intelligence as a means to enable reconfiguration in highly complex and dynamic networks. Key processes in network intelligence, such as reasoning, learning, and context awareness, are presented to illustrate how these methods can take reconfiguration to a new level. Finally, the paper is supported with several examples and case studies showing the tremendous impact of reconfiguration on wireless networks.
label_id: 8
category: cs.NI

mag_id: 1,657,294,604
title: Pushdown Abstractions of Javascript
abstract:
We design a family of program analyses for JavaScript that make no approximation in matching calls with returns, exceptions with handlers, and breaks with labels. We do so by starting from an established reduction semantics for JavaScript and systematically deriving its intensional abstract interpretation. Our first step is to transform the semantics into an equivalent low-level abstract machine: the JavaScript Abstract Machine (JAM). We then give an infinite-state yet decidable pushdown machine whose stack precisely models the structure of the concrete program stack. The precise model of stack structure in turn confers precise control-flow analysis even in the presence of control effects, such as exceptions and finally blocks. We give pushdown generalizations of traditional forms of analysis such as k-CFA, and prove the pushdown framework for abstract interpretation is sound and computable.
label_id: 22
category: cs.PL

mag_id: 1,661,863,441
title: A Notion of Robustness for Cyber Physical Systems
abstract:
Robustness as a system property describes the degree to which a system is able to function correctly in the presence of disturbances, i.e., unforeseen or erroneous inputs. In this paper, we introduce a notion of robustness termed input-output dynamical stability for cyber-physical systems (CPS) which merges existing notions of robustness for continuous systems and discrete systems. The notion captures two intuitive aims of robustness: bounded disturbances have bounded effects and the consequences of a sporadic disturbance disappear over time. We present a design methodology for robust CPS which is based on an abstraction and refinement process. We suggest several novel notions of simulation relations to ensure the soundness of the approach. In addition, we show how such simulation relations can be constructed compositionally. The different concepts and results are illustrated throughout the paper with examples.
label_id: 19
category: cs.SY

mag_id: 1,665,669,548
title: Memristors Can Implement Fuzzy Logic
abstract:
In our work we propose implementing fuzzy logic using memristors. Min and max operations are done by antipodally configured memristor circuits that may be assembled into computational circuits. We discuss computational power of such circuits with respect to m-efficiency and experimentally observed behavior of memristive devices. Circuits implemented with real devices are likely to manifest learning behavior. The circuits presented in the work may be applicable for instance in fuzzy classifiers.
label_id: 18
category: cs.ET

mag_id: 1,681,484,497
title: Informetric Analyses of Knowledge Organization Systems Koss
abstract:
A knowledge organization system (KOS) is made up of concepts and semantic relations between the concepts which represent a knowledge domain terminologically. We distinguish between five approaches to KOSs: nomenclatures, classification systems, thesauri, ontologies and, as a borderline case of KOSs, folksonomies. The research question of this paper is: How can we informetrically analyze the effectiveness of KOSs? Quantitative informetric measures and indicators allow for the description, for comparative analyses as well as for evaluation of KOSs and their quality. We describe the state of the art of KOS evaluation. Most of the evaluation studies found in the literature are about ontologies. We introduce measures of the structure of KOSs (e.g., groundedness, tangledness, fan-out factor, or granularity) and indicators of KOS quality (completeness, consistency, overlap, and use).
label_id: 38
category: cs.DL

mag_id: 1,682,705,844
title: Latent Topic Models for Hypertext
abstract:
Latent topic models have been successfully applied as an unsupervised topic discovery technique in large document collections. With the proliferation of hypertext document collection such as the Internet, there has also been great interest in extending these approaches to hypertext [6, 9]. These approaches typically model links in an analogous fashion to how they model words - the document-link co-occurrence matrix is modeled in the same way that the document-word co-occurrence matrix is modeled in standard topic models. In this paper we present a probabilistic generative model for hypertext document collections that explicitly models the generation of links. Specifically, links from a word w to a document d depend directly on how frequent the topic of w is in d, in addition to the in-degree of d. We show how to perform EM learning on this model efficiently. By not modeling links as analogous to words, we end up using far fewer free parameters and obtain better link prediction results.
label_id: 31
category: cs.IR

mag_id: 1,698,782,162
title: Complete Security Framework for Wireless Sensor Networks
abstract:
Security concerns for sensor networks and the level of security desired may differ according to application-specific needs where the sensor networks are deployed. Till now, most of the security solutions proposed for sensor networks are layer-wise, i.e. a particular solution is applicable to a single layer itself. So, to integrate them all is a new research challenge. In this paper we took up the challenge and have proposed an integrated comprehensive security framework that will provide security services for all services of sensor network. We have added one extra component i.e. Intelligent Security Agent (ISA) to assess level of security and cross layer interactions. This framework has many components like Intrusion Detection System, Trust Framework, Key Management scheme and Link layer communication protocol. We have also tested it on three different application scenarios in Castalia and Omnet++ simulator.
label_id: 4
category: cs.CR

mag_id: 1,720,451,657
title: Network Maps of Technology Fields a Comparative Analysis of Relatedness Measures
abstract:
Network maps of technology fields extracted from patent databases are useful to aid in technology forecasting and road mapping. Constructing such a network requires a measure of the relatedness between pairs of technology fields. Despite the existence of various relatedness measures in the literature, it is unclear how to consistently assess and compare them, and which ones to select for constructing technology network maps. This ambiguity has limited the use of technology network maps for technology forecasting and roadmap analyses. To address this challenge, here we propose a strategy to evaluate alternative relatedness measures and identify the superior ones by comparing the structure properties of resulting technology networks. Using United States patent data, we execute the strategy through a comparative analysis of twelve relatedness measures, which quantify inter-field knowledge input similarity, field-crossing diversification likelihood or frequency of innovation agents, and co-occurrences of technology classes in the same patents. Our comparative analyses suggest two superior relatedness measures, normalized co-reference and inventor diversification likelihood, for constructing technology network maps.
label_id: 26
category: cs.SI

mag_id: 1,738,519,518
title: Continuous Double Auction Mechanism and Bidding Strategies in Cloud Computing Markets
abstract:
Cloud computing has been an emerging model which aims at allowing customers to utilize computing resources hosted by Cloud Service Providers (CSPs). More and more consumers rely on CSPs to supply computing and storage service on the one hand, and CSPs try to attract consumers on favorable terms on the other. In such competitive cloud computing markets, pricing policies are critical to market efficiency. While CSPs often publish their prices and charge users according to the amount of resources they consume, auction mechanism is rarely applied. In fact a feasible auction mechanism is the most effective method for allocation of resources, especially double auction is more efficient and flexible for it enables buyers and sellers to enter bids and offers simultaneously. In this paper we bring up an electronic auction platform for cloud, and a cloud Continuous Double Auction (CDA) mechanism is formulated to match orders and facilitate trading based on the platform. Some evaluating criteria are defined to analyze the efficiency of markets and strategies. Furthermore, the selection of bidding strategies for the auction plays a very important role for each player to maximize its own profit, so we developed a novel bidding strategy for cloud CDA, BH-strategy, which is a two-stage game bidding strategy. At last we designed three simulation scenarios to compare the performance of our strategy with other dominating bidding strategies and proved that BH-strategy has better performance on surpluses, successful transactions and market efficiency. In addition, we discussed that our cloud CDA mechanism is feasible for cloud computing resource allocation.
label_id: 5
category: cs.DC

mag_id: 1,754,384,483
title: Inference Less Density Estimation Using Copula Bayesian Networks
abstract:
We consider learning continuous probabilistic graphical models in the face of missing data. For non-Gaussian models, learning the parameters and structure of such models depends on our ability to perform efficient inference, and can be prohibitive even for relatively modest domains. Recently, we introduced the Copula Bayesian Network (CBN) density model - a flexible framework that captures complex high-dimensional dependency structures while offering direct control over the univariate marginals, leading to improved generalization. In this work we show that the CBN model also offers significant computational advantages when training data is partially observed. Concretely, we leverage the specialized form of the model to derive a computationally amenable learning objective that is a lower bound on the log-likelihood function. Importantly, our energy-like bound circumvents the need for costly inference of an auxiliary distribution, thus facilitating practical learning of high-dimensional densities. We demonstrate the effectiveness of our approach for learning the structure and parameters of a CBN model for two real-life continuous domains.
label_id: 24
category: cs.LG

mag_id: 1,791,983,455
title: A Survey on Handover Management in Mobility Architectures
abstract:
This work presents a comprehensive and structured taxonomy of available techniques for managing the handover process in mobility architectures. Representative works from the existing literature have been divided into appropriate categories, based on their ability to support horizontal handovers, vertical handovers and multihoming. We describe approaches designed to work on the current Internet (i.e. IPv4-based networks), as well as those that have been devised for the "future" Internet (e.g. IPv6-based networks and extensions). Quantitative measures and qualitative indicators are also presented and used to evaluate and compare the examined approaches. This critical review provides some valuable guidelines and suggestions for designing and developing mobility architectures, including some practical expedients (e.g. those required in the current Internet environment), aimed to cope with the presence of NAT/firewalls and to provide support to legacy systems and several communication protocols working at the application layer.
label_id: 8
category: cs.NI

mag_id: 1,798,241,237
title: Many Task Computing and Blue Waters
abstract:
This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects to middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
label_id: 5
category: cs.DC

mag_id: 1,812,070,052
title: Identifying Reliable Annotations for Large Scale Image Segmentation
abstract:
Challenging computer vision tasks, in particular semantic image segmentation, require large training sets of annotated images. While obtaining the actual images is often unproblematic, creating the necessary annotation is a tedious and costly process. Therefore, one often has to work with unreliable annotation sources, such as Amazon Mechanical Turk or (semi-)automatic algorithmic techniques. In this work, we present a Gaussian process (GP) based technique for simultaneously identifying which images of a training set have unreliable annotation and learning a segmentation model in which the negative effect of these images is suppressed. Alternatively, the model can also just be used to identify the most reliably annotated images from the training set, which can then be used for training any other segmentation method. By relying on "deep features" in combination with a linear covariance function, our GP can be learned and its hyperparameter determined efficiently using only matrix operations and gradient-based optimization. This makes our method scalable even to large datasets with several million training instances.
label_id: 16
category: cs.CV

mag_id: 1,818,296,812
title: Earthquake Disaster Based Efficient Resource Utilization Technique in Iaas Cloud
abstract:
Cloud Computing is an emerging area. The main aim of the initial search-and-rescue period after strong earthquakes is to reduce the whole number of mortalities. One main trouble rising in this period is to find the greatest assignment of available resources to functioning zones. For this issue a dynamic optimization model is presented. The model uses thorough descriptions of the operational zones and of the available resources to determine the resource performance and efficiency for different workloads related to the response. A suitable solution method for the model is offered as well. In this paper, Earthquake Disaster Based Resource Scheduling (EDBRS) Framework has been proposed. The allocation of resources to cloud workloads is based on urgency (emergency during Earthquake Disaster). Based on this criterion, the resource scheduling algorithm has been proposed. The performance of the proposed algorithm has been assessed with the existing common scheduling algorithms through the CloudSim. The experimental results show that the proposed algorithm outperforms the existing algorithms by reducing execution cost and time of cloud consumer workloads submitted to the cloud.
label_id: 5
category: cs.DC

mag_id: 1,839,164,722
title: Learning Economic Parameters From Revealed Preferences
abstract:
A recent line of work, starting with Beigman and Vohra (2006) and Zadimoghaddam and Roth (2012), has addressed the problem of learning a utility function from revealed preference data. The goal here is to make use of past data describing the purchases of a utility maximizing agent when faced with certain prices and budget constraints in order to produce a hypothesis function that can accurately forecast the future behavior of the agent. In this work we advance this line of work by providing sample complexity guarantees and efficient algorithms for a number of important classes. By drawing a connection to recent advances in multi-class learning, we provide a computationally efficient algorithm with tight sample complexity guarantees ($\Theta(d/\epsilon)$ for the case of $d$ goods) for learning linear utility functions under a linear price model. This solves an open question in Zadimoghaddam and Roth (2012). Our technique yields numerous generalizations including the ability to learn other well-studied classes of utility functions, to deal with a misspecified model, and with non-linear prices.
label_id: 36
category: cs.GT

mag_id: 1,844,261,290
title: Towards Adapting Imagenet to Reality Scalable Domain Adaptation with Implicit Low Rank Transformations
abstract:
Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to scene understanding tasks. The consequence is often severe performance degradation and is one of the major barriers for the application of classifiers in real-world systems. In this paper, we show how to learn transform-based domain adaptation classifiers in a scalable manner. The key idea is to exploit an implicit rank constraint, originated from a max-margin domain adaptation formulation, to make optimization tractable. Experiments show that the transformation between domains can be very efficiently learned from data and easily applied to new categories. This begins to bridge the gap between large-scale internet image collections and object images captured in everyday life environments.
label_id: 16
category: cs.CV

mag_id: 1,849,650,586
title: Using Multiple Criteria Methods to Evaluate Community Partitions
abstract:
Community detection is one of the most studied problems on complex networks. Although hundreds of methods have been proposed so far, there is still no universally accepted formal definition of what is a good community. As a consequence, the problem of the evaluation and the comparison of the quality of the solutions produced by these algorithms is still an open question, despite constant progress on the topic. In this article, we investigate how using a multi-criteria evaluation can solve some of the existing problems of community evaluation, in particular the question of multiple equally-relevant solutions of different granularity. After exploring several approaches, we introduce a new quality function, called MDensity, and propose a method that can be related both to a widely used community detection metric, the Modularity, and to the Precision/Recall approach, ubiquitous in information retrieval.
label_id: 26
category: cs.SI

mag_id: 1,852,713,323
title: Incremental Adaptation Strategies for Neural Network Language Models
abstract:
It is today acknowledged that neural network language models outperform backoff language models in applications like speech recognition or statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data or insertion of adaptation layers. We present experimental results in a CAT environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
label_id: 13
category: cs.NE

mag_id: 1,899,741,157
title: Towards Ontological Support for Principle Solutions in Mechanical Engineering
abstract:
The engineering design process follows a series of standardized stages of development, which have many aspects in common with software engineering. Among these stages, the principle solution can be regarded as an analogue of the design specification, fixing as it does the way the final product works. It is usually constructed as an abstract sketch (hand-drawn or constructed with a CAD system) where the functional parts of the product are identified, and geometric and topological constraints are formulated. Here, we outline a semantic approach where the principle solution is annotated with ontological assertions, thus making the intended requirements explicit and available for further machine processing; this includes the automated detection of design errors in the final CAD model, making additional use of a background ontology of engineering knowledge. We embed this approach into a document-oriented design workflow, in which the background ontology and semantic annotations in the documents are exploited to trace parts and requirements through the design process and across different applications.
label_id: 23
category: cs.SE

mag_id: 1,912,377,242
title: Throughput Capacity of Two Hop Relay Manets Under Finite Buffers
abstract:
Since the seminal work of Grossglauser and Tse [1], the two-hop relay algorithm and its variants have been attractive for mobile ad hoc networks (MANETs) due to their simplicity and efficiency. However, most literature assumed an infinite buffer size for each node, which is obviously not applicable to a realistic MANET. In this paper, we focus on the exact throughput capacity study of two-hop relay MANETs under the practical finite relay buffer scenario. The arrival process and departure process of the relay queue are fully characterized, and an ergodic Markov chain-based framework is also provided. With this framework, we obtain the limiting distribution of the relay queue and derive the throughput capacity under any relay buffer size. Extensive simulation results are provided to validate our theoretical framework and explore the relationship among the throughput capacity, the relay buffer size and the number of nodes.
label_id: 29
category: cs.PF

mag_id: 1,913,435,380
title: Ranking the Importance Level of Intermediaries to a Criminal Using a Reliance Measure
abstract:
Recent research on finding important intermediate nodes in a network suspected to contain criminal activity is highly dependent on network centrality values. Betweenness centrality, for example, is widely used to rank the nodes that act as brokers in the shortest paths connecting all source and all the end nodes in a network. However both the shortest path node betweenness and the linearly scaled betweenness can only show rankings for all the nodes in a network. In this paper we explore the mathematical concept of pair-dependency on intermediate nodes, adapting the concept to criminal relationships and introducing a new source-intermediate reliance measure. To illustrate our measure, we apply it to rank the nodes in the Enron email dataset and the Noordin Top Terrorist networks. We compare the reliance ranking with Google PageRank, Markov centrality as well as betweenness centrality and show that a criminal investigation using the reliance measure, will lead to a different prioritisation in terms of possible people to investigate. While the ranking for the Noordin Top terrorist network nodes yields more extreme differences than for the Enron email transaction network, in the latter the reliance values for the set of finance managers immediately identified another employee convicted of money laundering.
label_id: 26
category: cs.SI

mag_id: 1,952,469,305
title: Paxoslease Diskless Paxos for Leases
abstract:
This paper describes PaxosLease, a distributed algorithm for lease negotiation. PaxosLease is based on Paxos, but does not require disk writes or clock synchrony. PaxosLease is used for master lease negotiation in the open-source Keyspace and ScalienDB replicated key-value stores.
label_id: 5
category: cs.DC

mag_id: 1,955,776,637
title: On the Optimality of Simple Schedules for Networks with Multiple Half Duplex Relays
abstract:
This paper studies networks that consist of $N$ half-duplex relays assisting the communication between a source and a destination. In ISIT’12 Brahma et al. conjectured that in Gaussian half-duplex diamond networks (i.e., without a direct link between the source and the destination, and with $N$ non-interfering relays), an approximately optimal relay scheduling policy (i.e., achieving the cut-set upper bound to within a constant gap uniformly over all channel gains) has at most $N+1$ active states (i.e., at most $N+1$ out of the $2^{N}$ possible relay listen-transmit configurations have a strictly positive probability). Such relay scheduling policies were referred to as simple. In ITW’13, we conjectured that simple approximately optimal relay scheduling policies exist for any Gaussian half-duplex multi-relay network irrespectively of the topology. This paper formally proves this more general version of the conjecture and shows it holds beyond Gaussian noise networks. In particular, for any class of memoryless half-duplex $N$ -relay networks with independent noises and for which independent inputs are approximately optimal in the cut-set upper bound, an approximately optimal simple relay scheduling policy exists. The key step of the proof is to write the minimum of the submodular cut-set function by means of its Lovasz extension and use the greedy algorithm for submodular polyhedra to highlight structural properties of the optimal solution. This, together with the saddle-point property of min–max problems and the existence of optimal basic feasible solutions for linear programs, proves the conjecture. As an example, for $N$ -relay Gaussian networks with independent noises, where each node is equipped with multiple antennas and where each antenna can be configured to listen or transmit irrespectively of the others, the existence of an approximately optimal simple relay scheduling policy with at most $N+1$ active states, irrespectively of the total number of antennas in the system, is proved.
label_id: 28
category: cs.IT

mag_id: 1,956,364,207
title: Interaction and Resistance the Recognition of Intentions in New Human Computer Interaction
abstract:
Just as AI has moved away from classical AI, human-computer interaction (HCI) must move away from what I call 'good old fashioned HCI' to 'new HCI' - it must become a part of cognitive systems research where HCI is one case of the interaction of intelligent agents (we now know that interaction is essential for intelligent agents anyway). For such interaction, we cannot just 'analyze the data', but we must assume intentions in the other, and I suggest these are largely recognized through resistance to carrying out one's own intentions. This does not require fully cognitive agents but can start at a very basic level. New HCI integrates into cognitive systems research and designs intentional systems that provide resistance to the human agent.
label_id: 6
category: cs.HC

mag_id: 1,962,645,729
title: Optimizations for Decision Making and Planning in Description Logic Dynamic Knowledge Bases
abstract:
Artifact-centric models for business processes have recently attracted a lot of attention, as they manage to combine structural (i.e. data related) with dynamical (i.e. process related) aspects in a seamless way. Many frameworks developed under this approach, however, are not built explicitly for planning, one of the most prominent operations related to business processes. In this paper, we try to overcome this by proposing a framework named Dynamic Knowledge Bases, aimed at describing rich business domains through Description Logic-based ontologies, and where a set of actions allows the system to evolve by modifying such ontologies. This framework, by offering action rewriting and knowledge partialization, represents a viable and formal environment to develop decision making and planning techniques for DL-based artifact-centric business domains.
label_id: 10
category: cs.AI

mag_id: 1,965,341,892
title: Inference and Evaluation of the Multinomial Mixture Model for Text Clustering
abstract:
In this article, we investigate the use of a probabilistic model for unsupervised clustering in text collections. Unsupervised clustering has become a basic module for many intelligent text processing applications, such as information retrieval, text classification or information extraction. Recent proposals have been made of probabilistic clustering models, which build "soft" theme-document associations. These models allow to compute, for each document, a probability vector whose values can be interpreted as the strength of the association between documents and clusters. As such, these vectors can also serve to project texts into a lower-dimensional "semantic" space. These models however pose non-trivial estimation problems, which are aggravated by the very high dimensionality of the parameter space. The model considered in this paper consists of a mixture of multinomial distributions over the word counts, each component corresponding to a different theme. We propose a systematic evaluation framework to contrast various estimation procedures for this model. Starting with the expectation-maximization (EM) algorithm as the basic tool for inference, we discuss the importance of initialization and the influence of other features, such as the smoothing strategy or the size of the vocabulary, thereby illustrating the difficulties incurred by the high dimensionality of the parameter space. We empirically show that, in the case of text processing, these difficulties can be alleviated by introducing the vocabulary incrementally, due to the specific profile of the word count distributions. Using the fact that the model parameters can be analytically integrated out, we finally show that Gibbs sampling on the theme configurations is tractable and compares favorably to the basic EM approach.
label_id: 31
category: cs.IR

mag_id: 1,972,197,965
title: The Oblivious Transfer Capacity of the Wiretapped Binary Erasure Channel
abstract:
We consider oblivious transfer between Alice and Bob in the presence of an eavesdropper Eve when there is a broadcast channel from Alice to Bob and Eve. In addition to the secrecy constraints of Alice and Bob, Eve should not learn the private data of Alice and Bob. When the broadcast channel consists of two independent binary erasure channels, we derive the oblivious transfer capacity for both 2-privacy (where the eavesdropper may collude with either party) and 1-privacy (where there are no collusions).
label_id: 28
category: cs.IT

mag_id: 1,974,370,252
title: The Ergodic Capacity of Phase Fading Interference Networks
abstract:
We identify the role of equal strength interference links as bottlenecks on the ergodic sum capacity of a K user phase-fading interference network, i.e., an interference network where the fading process is restricted primarily to independent and uniform phase variations while the channel magnitudes are held fixed across time. It is shown that even though there are K(K-1) cross-links, only about K/2 disjoint and equal strength interference links suffice to determine the capacity of the network regardless of the strengths of the rest of the cross channels. This scenario is called a minimal bottleneck state. It is shown that ergodic interference alignment is capacity optimal for a network in a minimal bottleneck state. The results are applied to large networks. It is shown that large networks are close to bottleneck states with a high probability, so that ergodic interference alignment is close to optimal for large networks. Limitations of the notion of bottleneck states are also highlighted for channels where both the phase and the magnitudes vary with time. It is shown through an example that for these channels, joint coding across different bottleneck states makes it possible to circumvent the capacity bottlenecks.
28
cs.IT
1,977,183,758
Greendcn a General Framework for Achieving Energy Efficiency in Data Center Networks
The popularization of cloud computing has raised concerns over the energy consumption that takes place in data centers. In addition to the energy consumed by servers, the energy consumed by large numbers of network devices emerges as a significant problem. Existing work on energy-efficient data center networking primarily focuses on traffic engineering, which is usually adapted from traditional networks. We propose a new framework to embrace the new opportunities brought by combining some special features of data centers with traffic engineering. Based on this framework, we characterize the problem of achieving energy efficiency with a time-aware model, prove its NP-hardness, and propose a solution that has two steps. First, we solve the problem of assigning virtual machines (VMs) to servers to reduce the amount of traffic and to generate favorable conditions for traffic engineering. The solution reached for this problem is based on three essential principles that we propose. Second, we reduce the number of active switches and balance traffic flows, depending on the relation between power consumption and routing, to achieve energy conservation. Experimental results confirm that, by using this framework, we can achieve up to 50 percent energy savings. We also provide a comprehensive discussion on the scalability and practicability of the framework.
8
cs.NI
1,981,029,888
Rappor Randomized Aggregatable Privacy Preserving Ordinal Response
Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as for efficient, high-utility analysis of the collected data. In particular, RAPPOR permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client, and without linkability of their reports. This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and real-world data. (A minimal randomized-response sketch follows this record.)
4
cs.CR
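A minimal sketch of the one-bit randomized response that RAPPOR builds on; the real mechanism adds Bloom filters and two levels of randomization, which are omitted here, and the parameter names and decoding step are assumptions.

```python
# Sketch: basic randomized response and unbiased decoding (assumed parameters).
import random

def randomize(bit, p_keep=0.75):
    """Report the true bit with prob. p_keep, else a uniform random bit."""
    return bit if random.random() < p_keep else random.randint(0, 1)

def estimate_true_rate(reports, p_keep=0.75):
    """Invert the known noise to estimate the population rate of 1s."""
    observed = sum(reports) / len(reports)
    # observed = p_keep * true + (1 - p_keep) * 0.5  =>  solve for true
    return (observed - (1 - p_keep) * 0.5) / p_keep

random.seed(1)
truth = [1] * 3000 + [0] * 7000          # 30% of clients hold the bit
reports = [randomize(b) for b in truth]
print(round(estimate_true_rate(reports), 3))  # close to 0.30
```

The aggregate estimate recovers the population statistic while any single report stays plausibly deniable, which is the "forest, not trees" property the abstract describes.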
1,983,815,018
Increasing Flash Memory Lifetime by Dynamic Voltage Allocation for Constant Mutual Information
The read channel in Flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate eventually compromises the integrity of the cell through tunnel oxide degradation. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, the degradation is proportional to the number of electrons forced into the floating gate and later released by the erasing process. By managing the amount of charge written to the floating gate to maintain a constant read-channel mutual information, Flash lifetime can be extended. This paper proposes an overall system approach based on information theory to extend the lifetime of a flash memory device. Using the instantaneous storage capacity of a noisy flash memory channel, our approach allocates the read voltage of each flash cell dynamically as the cell gradually wears out over time. A practical estimation of the instantaneous capacity is also proposed, based on soft information obtained via multiple reads of the memory cells.
28
cs.IT
1,984,398,168
Proof Pad a New Development Environment for Acl2
Most software development projects rely on Integrated Development Environments (IDEs) based on the desktop paradigm, with an interactive, mouse-driven user interface. The standard installation of ACL2, on the other hand, is designed to work closely with Emacs. ACL2 experts, on the whole, like this mode of operation, but students and other new programmers who have learned to program with desktop IDEs often react negatively to the process of adapting to an unfamiliar form of interaction. This paper discusses Proof Pad, a new IDE for ACL2. Proof Pad is not the only attempt to provide ACL2 IDEs catering to students and beginning programmers. The ACL2 Sedan and DrACuLa systems arose from similar motivations. Proof Pad builds on the work of those systems, while also taking into account the unique workflow of the ACL2 theorem proving system. The design of Proof Pad incorporated user feedback from the outset, and that process continued through all stages of development. Feedback took the form of direct observation of users interacting with the IDE as well as questionnaires completed by users of Proof Pad and other ACL2 IDEs. The result is a streamlined interface and fast, responsive system that supports using ACL2 as a programming language and a theorem proving system. Proof Pad also provides a property-based testing environment with random data generation and automated interpretation of properties as ACL2 theorem definitions.
23
cs.SE
1,994,596,572
The Role of Peer Influence in Churn in Wireless Networks
Subscriber churn remains a top challenge for wireless carriers. These carriers need to understand the determinants of churn to confidently apply effective retention strategies to ensure their profitability and growth. In this paper, we look at the effect of peer influence on churn and we try to disentangle it from other effects that drive simultaneous churn across friends but that do not relate to peer influence. We analyze a random sample of roughly 10 thousand subscribers from a large dataset from a major wireless carrier over a period of 10 months. We apply survival models and generalized propensity scores to identify the role of peer influence. We show that the propensity to churn increases when friends do, and that it increases more when many strong friends churn. Therefore, our results suggest that churn managers should consider strategies aimed at preventing group churn. We also show that survival models fail to disentangle homophily from peer influence, over-estimating the effect of peer influence.
26
cs.SI
2,000,628,897
Off the Grid Spectral Compressed Sensing with Prior Information
Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In this paper, we extend off-the-grid CS to applications where some prior information about the spectrally sparse signal is known. We specifically consider cases where a few contributing frequencies or poles, but not their amplitudes or phases, are known a priori. Our results show that equipping off-the-grid CS with the known-poles algorithm can increase the probability of recovering all the frequency components.
28
cs.IT
2,001,557,365
A Regularized Graph Layout Framework for Dynamic Network Visualization
Many real-world networks, including social and information networks, are dynamic structures that evolve over time. Such dynamic networks are typically visualized using a sequence of static graph layouts. In addition to providing a visual representation of the network structure at each time step, the sequence should preserve the mental map between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network. In this paper, we propose a framework for dynamic network visualization in the on-line setting where only present and past graph snapshots are available to create the present layout. The proposed framework creates regularized graph layouts by augmenting the cost function of a static graph layout algorithm with a grouping penalty, which discourages nodes from deviating too far from other nodes belonging to the same group, and a temporal penalty, which discourages large node movements between consecutive time steps. The penalties increase the stability of the layout sequence, thus preserving the mental map. We introduce two dynamic layout algorithms within the proposed framework, namely dynamic multidimensional scaling and dynamic graph Laplacian layout. We apply these algorithms on several data sets to illustrate the importance of both grouping and temporal regularization for producing interpretable visualizations of dynamic networks. (A minimal sketch of the penalized layout cost follows this record.)
26
cs.SI
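A rough sketch of the penalized cost described above: a static stress term augmented with a grouping penalty and a temporal penalty, minimized by plain gradient descent. The weights, learning rate, and toy distances are assumptions, not the paper's parameters.

```python
# Sketch: stress layout + grouping and temporal penalties (assumed weights).
import numpy as np

def regularized_layout(D, groups, prev, w_group=0.1, w_time=0.5,
                       steps=500, lr=0.01, seed=0):
    """D: (n, n) target graph distances; groups: group id per node;
    prev: (n, 2) previous layout (anchors the temporal penalty)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = prev + 0.01 * rng.standard_normal((n, 2))
    for _ in range(steps):
        grad = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = X[i] - X[j]
                dist = np.linalg.norm(diff) + 1e-9
                grad[i] += 2 * (dist - D[i, j]) * diff / dist  # stress term
                if groups[i] == groups[j]:
                    grad[i] += w_group * 2 * diff              # grouping penalty
        grad += w_time * 2 * (X - prev)                        # temporal penalty
        X -= lr * grad
    return X

# Toy usage: 3 nodes, two groups, anchored at the origin layout
D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
print(regularized_layout(D, groups=[0, 0, 1], prev=np.zeros((3, 2))).round(2))
```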
2,001,699,728
Area Coverage Under Low Sensor Density
This paper presents a solution to the problem of monitoring a region of interest (RoI) using a set of nodes that is not sufficient to achieve the required degree of monitoring coverage. In particular, sensing coverage of wireless sensor networks (WSNs) is a crucial issue in deployments due to sensor failures. The lack of sensor equipment resources hinders the traditional method of using mobile robots that move around the RoI to collect readings. Instead, our solution employs supervised neural networks to produce the values of the uncovered locations by extracting the non-linear relation among randomly deployed sensor nodes throughout the area. Moreover, we apply a hybrid backpropagation method to accelerate the convergence of learning to a local minimum solution. We use a real-world dataset from a meteorological deployment for experimental validation and analysis.
8
cs.NI
2,001,942,931
Fast Resource Scheduling in Hetnets with D2d Support
Resource allocation in LTE networks is known to be an NP-hard problem. In this paper, we address an even more complex scenario: an LTE-based, 2-tier heterogeneous network where D2D mode is supported under network control. All communications (macrocell, microcell and D2D-based) share the same frequency bands, hence they may interfere. We then determine (i) the network node that should serve each user and (ii) the radio resources to be scheduled for such communication. To this end, we develop an accurate model of the system and apply approximate dynamic programming to solve it. Our algorithms allow us to deal with realistic, large-scale scenarios. In such scenarios, we compare our approach to today's networks where eICIC techniques and proportional fairness scheduling are implemented. Results highlight that our solution increases the system throughput while greatly reducing energy consumption. We also show that D2D mode can effectively support content delivery without significantly harming macrocell or microcell traffic, leading to an increased system capacity. Interestingly, we find that D2D mode can be a low-cost alternative to microcells.
8
cs.NI
2,005,285,092
How to Transfer Zero Shot Object Recognition via Hierarchical Transfer of Semantic Attributes
Attribute-based knowledge transfer has proven very successful in visual object analysis and in learning previously unseen classes. However, the common approach learns and transfers attributes without taking into consideration the embedded structure between the categories in the source set. Such information provides important cues on the intra-attribute variations. We propose to capture these variations in a hierarchical model that expands the knowledge source with additional abstraction levels of attributes. We also provide a novel transfer approach that can choose the appropriate attributes to be shared with an unseen class. We evaluate our approach on three public datasets: aPascal, Animals with Attributes, and CUB-200-2011 Birds. The experiments demonstrate the effectiveness of our model, with significant improvement over the state of the art.
16
cs.CV
2,008,604,527
A General Quantitative Cryptanalysis of Permutation Only Multimedia Ciphers against Plaintext Attacks
In recent years secret permutations have been widely used for protecting different types of multimedia data, including speech files, digital images and videos. Based on a general model of permutation-only multimedia ciphers, this paper performs a quantitative cryptanalysis of the performance of this kind of cipher against plaintext attacks. When the plaintext is of size MxN and with L different levels of values, the following quantitative cryptanalytic findings have been concluded under the assumption of a uniform distribution of each element in the plaintext: (1) all permutation-only multimedia ciphers are practically insecure against known/chosen-plaintext attacks in the sense that only O(log_L(MN)) known/chosen plaintexts are sufficient to recover no less than (in an average sense) half of the elements of the plaintext; (2) the computational complexity of the known/chosen-plaintext attack is only O(n·(MN)^2), where n is the number of known/chosen plaintexts used. When the plaintext has a non-uniform distribution, the number of required plaintexts and the computational complexity are also discussed. Experiments are given to demonstrate the real performance of the known-plaintext attack for a typical permutation-only image cipher. (A small helper evaluating the plaintext-count bound follows this record.)
4
cs.CR
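A tiny helper that evaluates the leading term of finding (1), O(log_L(MN)); the function name and the ceiling convention are assumptions for illustration.

```python
# Sketch: leading term of the known/chosen-plaintext bound from the abstract.
import math

def plaintexts_needed(M, N, L):
    """Ceiling of log base L of M*N (the leading term of the bound)."""
    return math.ceil(math.log(M * N, L))

# An 8-bit 512x512 image: ceil(log_256(262144)) = 3 known plaintexts
print(plaintexts_needed(512, 512, 256))
```

The point of the bound is how small this number is in practice: even large images need only a handful of known plaintexts.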
2,009,318,757
An Efficient Assignment of Drainage Direction over Flat Surfaces in Raster Digital Elevation Models
In processing raster digital elevation models (DEMs) it is often necessary to assign drainage directions over flats, that is, over regions with no local elevation gradient. This paper presents an approach to drainage direction assignment which is not restricted by a flat's shape, number of outlets, or surrounding topography. Flow is modeled by superimposing a gradient away from higher terrain with a gradient towards lower terrain, resulting in a drainage field exhibiting flow convergence, an improvement over methods which produce regions of parallel flow. This approach builds on previous work by Garbrecht and Martz (1997), but presents several important improvements. The improved algorithm guarantees that flats are only resolved if they have outlets. The algorithm does not require iterative application; a single pass is sufficient to resolve all flats. The algorithm presents a clear strategy for identifying flats and their boundaries. The algorithm is not susceptible to loss of floating-point precision. Furthermore, the algorithm is efficient, operating in O(N) time whereas the older algorithm operates in O(N^(3/2)) time. In testing, the improved algorithm ran 6.5 times faster than the old for a 100×100 cell flat and 69 times faster for a 700×700 cell flat. In tests on actual DEMs, the improved algorithm finished its processing 38-110 times sooner while running on a single processor than a parallel implementation of the old algorithm did while running on 16 processors. The improved algorithm is an optimal, accurate, easy-to-implement drop-in replacement for the original. Pseudocode is provided in the paper and working source code is provided in the Supplemental Materials. (A minimal sketch of the superimposed-gradient idea follows this record.)
34
cs.DS
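A minimal sketch of the superimposed-gradient idea: one multi-source BFS field growing away from cells that border higher terrain, and one growing toward the flat's outlets, combined into a synthetic drainage field. The grid encoding, seed sets, and weights are assumptions, not the paper's implementation.

```python
# Sketch: resolving flow over a flat by combining two BFS distance fields.
from collections import deque

def bfs_distances(flat, seeds):
    """Multi-source BFS distances over the set of flat cells."""
    dist = {c: 0 for c in seeds}
    q = deque(seeds)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in flat and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def resolve_flat(flat, higher_adjacent, outlets, w_away=1, w_toward=2):
    """Combined field; each flat cell then drains to its lowest neighbor."""
    away = bfs_distances(flat, higher_adjacent)   # grows away from high ground
    toward = bfs_distances(flat, outlets)         # grows toward the outlets
    max_away = max(away.values(), default=0)
    field = {}
    for cell in flat:
        # invert 'away' so cells near higher terrain sit "higher", and add
        # distance-to-outlet so flow converges on the outlets
        field[cell] = (w_away * (max_away - away.get(cell, 0))
                       + w_toward * toward.get(cell, 0))
    return field
```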
2,010,239,981
Whittlesearch Interactive Image Search with Relative Attribute Feedback
We propose a novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his/her mental model of the image sought. For example, perusing image results for a query "black shoes", the user might state, "Show me shoe images like these, but sportier." Offline, our approach first learns a set of ranking functions, each of which predicts the relative strength of a nameable attribute in an image (e.g., sportiness). At query time, the system presents the user with a set of exemplar images, and the user relates them to his/her target image with comparative statements. Using a series of such constraints in the multi-dimensional attribute space, our method iteratively updates its relevance function and re-ranks the database of images. To determine which exemplar images receive feedback from the user, we present two variants of the approach: one where the feedback is user-initiated and another where the feedback is actively system-initiated. In either case, our approach allows a user to efficiently "whittle away" irrelevant portions of the visual feature space, using semantic language to precisely communicate his/her preferences to the system. We demonstrate our technique for refining image search for people, products, and scenes, and we show that it outperforms traditional binary relevance feedback in terms of search speed and accuracy. In addition, the ordinal nature of relative attributes helps make our active approach efficient, both computationally for the machine when selecting the reference images, and for the user by requiring less interaction than conventional passive and active methods. (A minimal feedback-filtering sketch follows this record.)
16
cs.CV
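A toy sketch of relative-attribute feedback as constraint filtering; in the paper the attribute strengths come from learned ranking functions, whereas here they are hard-coded numbers (an assumption), and the filtering rule is a simplification of the full re-ranking.

```python
# Sketch: "whittling away" images that violate comparative constraints.
def apply_feedback(scores, constraints):
    """scores: {image: {attr: strength}};
    constraints: list of (attr, ref_image, 'more' or 'less')."""
    keep = set(scores)
    for attr, ref, direction in constraints:
        ref_val = scores[ref][attr]
        if direction == "more":
            keep = {im for im in keep if scores[im][attr] > ref_val}
        else:
            keep = {im for im in keep if scores[im][attr] < ref_val}
    return keep

scores = {"img1": {"sporty": 0.9}, "img2": {"sporty": 0.4},
          "img3": {"sporty": 0.7}}
# "Like img2, but sportier" keeps only images scored sportier than img2
print(apply_feedback(scores, [("sporty", "img2", "more")]))
```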
2,023,346,713
Conflict Driven Asp Solving with External Sources
Answer Set Programming (ASP) is a well-known problem solving approach based on nonmonotonic logic programs and efficient solvers. To enable access to external information, hex-programs extend programs with external atoms, which allow for a bidirectional communication between the logic program and external sources of computation (e.g., description logic reasoners and Web resources). Current solvers evaluate hex-programs by a translation to ASP itself, in which values of external atoms are guessed and verified after the ordinary answer set computation. This elegant approach does not scale with the number of external accesses in general, in particular in presence of nondeterminism (which is instrumental for ASP). In this paper, we present a novel, native algorithm for evaluating hex-programs which uses learning techniques. In particular, we extend conflict-driven ASP solving techniques, which prevent the solver from running into the same conflict again, from ordinary to hex-programs. We show how to gain additional knowledge from external source evaluations and how to use it in a conflict-driven algorithm. We first target the uninformed case, i.e., when we have no extra information on external sources, and then extend our approach to the case where additional meta-information is available. Experiments show that learning from external sources can significantly decrease both the runtime and the number of considered candidate compatible sets.
10
cs.AI
2,023,669,216
Validity of Altmetrics Data for Measuring Societal Impact a Study Using Data From Altmetric and F1000prime
Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag "good for teaching" do achieve higher altmetric counts than papers without this tag, if the quality of the papers is controlled for. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented ("new finding"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show, there are particular scientific topics which are of special interest to a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.
38
cs.DL
2,029,741,572
Leader Contention Based User Matching for 802 11 Multiuser Mimo Networks
In multiuser MIMO (MU-MIMO) LANs, the achievable throughput of a client depends on who is transmitting concurrently with it. Existing MU-MIMO MAC protocols, however, enable clients to use the traditional 802.11 contention to contend for concurrent transmission opportunities on the uplink. Such a contention-based protocol not only wastes lots of channel time on multiple rounds of contention but also fails to maximally deliver the gain of MU-MIMO because users randomly join concurrent transmissions without considering their channel characteristics. To address such inefficiency, this paper introduces MIMOMate, a leader-contention-based MU-MIMO MAC protocol that matches clients as concurrent transmitters according to their channel characteristics to maximally deliver the MU-MIMO gain while ensuring all users fairly share concurrent transmission opportunities. Furthermore, MIMOMate elects the leader of the matched users to contend for transmission opportunities using traditional 802.11 CSMA/CA. It hence requires only a single contention overhead for concurrent streams and can be compatible with legacy 802.11 devices. A prototype implementation in USRP N200 shows that MIMOMate achieves an average throughput gain of 1.42× and 1.52× over the traditional contention-based protocol for two- and three-antenna AP scenarios, respectively, and also provides fairness for clients.
8
cs.NI
2,039,623,973
Inertial Parameter Identification Including Friction and Motor Dynamics
Identification of inertial parameters is fundamental for the implementation of torque-based control in humanoids. At the same time, good models of friction and actuator dynamics are critical for the low-level control of joint torques. We propose a novel method to identify inertial, friction and motor parameters in a single procedure. The identification exploits the measurements of the PWM of the DC motors and a 6-axis force/torque sensor mounted inside the kinematic chain. The partial least-square (PLS) method is used to perform the regression. We identified the inertial, friction and motor parameters of the right arm of the iCub humanoid robot. We verified that the identified model can accurately predict the force/torque sensor measurements and the motor voltages. Moreover, we compared the identified parameters against the CAD parameters, in the prediction of the force/torque sensor measurements. Finally, we showed that the estimated model can effectively detect external contacts, comparing it against a tactile-based contact detection. The presented approach offers some advantages with respect to other state-of-the-art methods, because of its completeness (i.e. it identifies inertial, friction and motor parameters) and simplicity (only one data collection, with no particular requirements).
27
cs.RO
2,044,357,496
Verifiable Source Code Documentation in Controlled Natural Language
Writing documentation about software internals is rarely considered a rewarding activity. It is highly time-consuming and the resulting documentation is fragile when the software is continuously evolving in a multi-developer setting. Unfortunately, traditional programming environments poorly support the writing and maintenance of documentation. Consequences are severe as the lack of documentation on software structure negatively impacts the overall quality of the software product. We show that using a controlled natural language with a reasoner and a query engine is a viable technique for verifying the consistency and accuracy of documentation and source code. Using ACE, a state-of-the-art controlled natural language, we present positive results on the comprehensibility and the general feasibility of creating and verifying documentation. As a case study, we used automatic documentation verification to identify and fix severe flaws in the architecture of a non-trivial piece of software. Moreover, a user experiment shows that our language is faster and easier to learn and understand than other formal languages for software documentation.
23
cs.SE
2,045,492,734
How to Improve the Outcome of Performance Evaluations in Terms of Percentiles for Citation Frequencies of My Papers
Using empirical data I demonstrate that the result of performance evaluations by percentiles can be drastically influenced by the proper choice of the journal in which a manuscript is published.
38
cs.DL
2,046,927,882
Cooperative Estimation for Synchronization of Heterogeneous Multi Agent Systems Using Relative Information
In this paper, we present a distributed estimation setup where local agents estimate their states from relative measurements received from their neighbours. In the case of heterogeneous multi-agent systems, where only relative measurements are available, this is of high relevance. The objective is to improve the scalability of the existing distributed estimation algorithms by restricting the agents to estimating only their local states and those of immediate neighbours. The presented estimation algorithm also guarantees robust performance against model and measurement disturbances. It is shown that it can be integrated into output synchronization algorithms.
19
cs.SY
2,049,223,024
A Cooperative Q Learning Approach for Real Time Power Allocation in Femtocell Networks
In this paper, we address the problem of distributed interference management of cognitive femtocells that share the same frequency range with macrocells (primary users) using distributed multi-agent Q-learning. We formulate and solve three problems representing three different Q-learning algorithms: namely, centralized, distributed and partially distributed power control using Q-learning (CPC-Q, DPC-Q and PDPC-Q). CPC-Q, although not of practical interest, characterizes the global optimum. Each of DPC-Q and PDPC-Q works in two different learning paradigms: Independent (IL) and Cooperative (CL). The former is considered the simplest form for applying Q-learning in multi-agent scenarios, where all the femtocells learn independently. The latter is the proposed scheme, in which femtocells share partial information during the learning process in order to strike a balance between practical relevance and performance. In terms of performance, the simulation results showed that the CL paradigm outperforms the IL paradigm and achieves an aggregate femtocell capacity that is very close to the optimal one. For the practical relevance issue, we evaluate the robustness and scalability of DPC-Q, in real time, by deploying new femtocells in the system during the learning process, where we showed that DPC-Q in the CL paradigm is scalable to a large number of femtocells and more robust to the network dynamics compared to the IL paradigm. (A toy Q-learning sketch follows this record.)
11
cs.MA
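A toy sketch of tabular Q-learning in the independent (IL) style mentioned above, applied to a two-state power-selection environment; the reward model, states, and power levels are illustrative assumptions, not the paper's formulation.

```python
# Sketch: independent tabular Q-learning for power selection (assumed model).
import math
import random

POWERS = [0.1, 0.5, 1.0]   # candidate transmit power levels (actions)
STATES = [0, 1]            # 0: interference OK, 1: macrocell harmed

def step(action):
    """Toy environment: higher power raises capacity but risks harming
    the macrocell, which incurs a penalty."""
    harmed = random.random() < POWERS[action]
    reward = math.log(1 + 10 * POWERS[action]) - (2.0 if harmed else 0.0)
    return (1 if harmed else 0), reward

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in STATES for a in range(len(POWERS))}
    state = 0
    for _ in range(episodes):
        if random.random() < eps:                       # explore
            action = random.randrange(len(POWERS))
        else:                                           # exploit
            action = max(range(len(POWERS)), key=lambda a: Q[(state, a)])
        nxt, reward = step(action)
        best_next = max(Q[(nxt, a)] for a in range(len(POWERS)))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
    return Q

Q = q_learning()
print(max(range(len(POWERS)), key=lambda a: Q[(0, a)]))  # learned power index
```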
2,052,978,498
Fusing Text and Image for Event Detection in Twitter
In this contribution, we develop an accurate and effective event detection method to detect events from a Twitter stream, which uses visual and textual information to improve the performance of the mining process. The method monitors a Twitter stream to pick up tweets having texts and images and stores them in a database. This is followed by applying a mining algorithm to detect an event. The procedure starts with detecting events based on text only, using bag-of-words features weighted by the term frequency-inverse document frequency (TF-IDF) method. It then detects the event based on images only, using visual features including histogram of oriented gradients (HOG) descriptors, grey-level co-occurrence matrix (GLCM) features, and color histograms. k-nearest-neighbour (kNN) classification is used in the detection. The final decision of the event detection is made based on the reliabilities of the text-only detection and the image-only detection. The experimental results showed that the proposed method achieved a high accuracy of 0.94, compared with 0.89 using text only and 0.86 using images only. (A minimal TF-IDF and late-fusion sketch follows this record.)
31
cs.IR
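A minimal sketch of the text side of the pipeline (TF-IDF bag-of-words) plus a reliability-weighted late fusion of text and image scores; the fusion rule and the reuse of the reported accuracies as reliability weights are assumptions for illustration.

```python
# Sketch: TF-IDF vectors and reliability-weighted late fusion (assumed weights).
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequencies
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: (c / len(d)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def fuse(text_score, image_score, r_text=0.94, r_image=0.86):
    """Late fusion: weight each modality by its estimated reliability."""
    return (r_text * text_score + r_image * image_score) / (r_text + r_image)

docs = [["storm", "hits", "city"], ["storm", "flood"], ["cat", "video"]]
print(tf_idf(docs)[0])
print(round(fuse(0.8, 0.6), 3))
```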
2,054,156,261
Artificial Intelligence Markup Language a Brief Tutorial
The purpose of this paper is to serve as a reference guide for the development of chatterbots implemented with the AIML language. In order to achieve this, the main concepts in Pattern Recognition area are described because the AIML uses such theoretical framework in their syntactic and semantic structures. After that, AIML language is described and each AIML command/tag is followed by an application example. Also, the usage of AIML embedded tags for the handling of sequence dialogue limitations between humans and machines is shown. Finally, computer systems that assist in the design of chatterbots with the AIML language are classified and described.
10
cs.AI
2,056,465,590
Anomaly Detection in Online Social Networks
Anomalies in online social networks can signify irregular, and often illegal, behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes: the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research.
26
cs.SI
2,058,503,096
Efficient Synthesis of Network Updates
Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation, and present results from experiments on real-world topologies and properties demonstrating that our tool scales to updates involving over one-thousand nodes.
22
cs.PL
2,065,627,966
Cooperative Relaying Under Spatially and Temporally Correlated Interference
We analyze the performance of an interference-limited decode-and-forward cooperative relaying system that comprises a source, a destination, and N relays, arbitrarily placed on the plane and suffering from interference by a set of interferers placed according to a spatial Poisson process. In each transmission attempt, first, the transmitter sends a packet; subsequently, a single one of the relays that received the packet correctly, if such a relay exists, retransmits it. We consider both selection combining and maximal ratio combining at the destination, Rayleigh fading, and interferer mobility. We derive expressions for the probability that a single transmission attempt is successful, as well as for the distribution of the transmission attempts until a packet is successfully transmitted. Results provide design guidelines applicable to a wide range of systems. Overall, the temporal and spatial characteristics of the interference play a significant role in shaping the system performance. Maximal ratio combining is only helpful when relays are close to the destination; in harsh environments, having many relays is particularly helpful, and relay placement is critical; the performance improves when interferer mobility increases; and a tradeoff exists between energy efficiency and throughput.
28
cs.IT
2,067,340,979
An Information Theoretic Location Verification System for Wireless Networks
As location-based applications become ubiquitous in emerging wireless networks, a reliable Location Verification System (LVS) will be of growing importance. In this paper we propose, for the first time, a rigorous information-theoretic framework for an LVS. The theoretical framework we develop illustrates how the threshold used in the detection of a spoofed location can be optimized in terms of the mutual information between the input and output data of the LVS. In order to verify the legitimacy of our analytical framework we have carried out detailed numerical simulations. Our simulations mimic the practical scenario where a system deployed using our framework must make a binary Yes/No “malicious decision” to each snapshot of the signal strength values obtained by base stations. The comparison between simulation and analysis shows excellent agreement. Our optimized LVS framework provides a defence against location spoofing attacks in emerging wireless networks such as those envisioned for Intelligent Transport Systems, where verification of location information is of paramount importance.
28
cs.IT
2,067,949,855
Petri Nets with Time and Cost
We consider timed Petri nets, i.e., unbounded Petri nets where each token carries a real-valued clock. Transition arcs are labeled with time intervals, which specify constraints on the ages of tokens. Our cost model assigns token storage costs per time unit to places, and firing costs to transitions. We study the cost to reach a given control-state. In general, a cost-optimal run may not exist. However, we show that the infimum of the costs is computable.
2
cs.LO
2,069,293,554
Novelty Detection Under Multi Label Multi Instance Framework
Novelty detection plays an important role in machine learning and signal processing. This paper studies novelty detection in a new setting where the data object is represented as a bag of instances and associated with multiple class labels, referred to as multi-instance multi-label (MIML) learning. Contrary to the common assumption in MIML that each instance in a bag belongs to one of the known classes, in novelty detection, we focus on the scenario where bags may contain novel-class instances. The goal is to determine, for any given instance in a new bag, whether it belongs to a known class or a novel class. Detecting novelty in the MIML setting captures many real-world phenomena and has many potential applications. For example, in a collection of tagged images, the tags may only cover a subset of the objects existing in the images. Discovering an object whose class has not been previously tagged can be useful for the purpose of soliciting a label for the new object class. To address this novel problem, we present a discriminative framework for detecting new class instances. Experiments demonstrate the effectiveness of our proposed method, and reveal that the presence of unlabeled novel instances in training bags is helpful to the detection of such instances in the testing stage.
24
cs.LG
2,074,934,005
Analysis and Design of Multi Hop Diffusion Based Molecular Communication Networks
In this paper, we consider a multi-hop molecular communication network consisting of one nanotransmitter, one nanoreceiver, and multiple nanotransceivers acting as relays. We consider three different relaying schemes to improve the range of diffusion-based molecular communication. In the first scheme, different types of messenger molecules are utilized in each hop of the multi-hop network. In the second and third schemes, we assume that two types of molecules and one type of molecule are utilized in the network, respectively. We identify self-interference, backward intersymbol interference (backward-ISI), and forward-ISI as the performance-limiting effects for the second and third relaying schemes. Furthermore, we consider two relaying modes analogous to those used in wireless communication systems, namely full-duplex and half-duplex relaying. We propose the adaptation of the decision threshold as an effective mechanism to mitigate self-interference and backward-ISI at the relay for full-duplex and half-duplex transmission. We derive closed-form expressions for the expected end-to-end error probability of the network for the three considered relaying schemes. Furthermore, we derive closed-form expressions for the optimal number of molecules released by the nanotransmitter and the optimal detection threshold of the nanoreceiver for minimization of the expected error probability of each hop.
28
cs.IT
2,079,489,844
Strategic Port Graph Rewriting an Interactive Modelling and Analysis Framework
We present strategic portgraph rewriting as a basis for the implementation of visual modelling and analysis tools. The goal is to facilitate the specification, analysis and simulation of complex systems, using port graphs. A system is represented by an initial graph and a collection of graph rewriting rules, together with a user-defined strategy to control the application of rules. The strategy language includes constructs to deal with graph traversal and management of rewriting positions in the graph. We give a small-step operational semantics for the language, and describe its implementation in the graph transformation and visualisation tool PORGY.
2
cs.LO
2,083,965,535
Iec 61499 Vs 61131 a Comparison Based on Misperceptions
The IEC 61131 standard has been widely accepted in the industrial automation domain. However, it is claimed that the standard does not address the new requirements of today's complex industrial systems, which include, among others, portability, interoperability, increased reusability and distribution. To address these restrictions, the IEC has initiated the task of developing the IEC 61499, which is presented as a mature technology to enable intelligent automation in various domains. This standard was not accepted by industry even though it is highly promoted by the academic community. In this paper, it is argued that IEC 61499 has been promoted by academia based on unsubstantiated claims about its main features, i.e., reusability, portability, interoperability, and event-driven execution. A number of misperceptions are presented and discussed in this paper to show that the comparison, which appears in the literature, between IEC 61499 and 61131 is not substantiated.
23
cs.SE
2,088,442,093
On Covert Acoustical Mesh Networks in Air
Covert channels can be used to circumvent system and network policies by establishing communications that have not been considered in the design of the computing system. We construct a covert channel between different computing systems that utilizes audio modulation/demodulation to exchange data between the computing systems over the air medium. The underlying network stack is based on a communication system that was originally designed for robust underwater communication. We adapt the communication system to implement covert and stealthy communications by utilizing the ultrasonic frequency range. We further demonstrate how the scenario of covert acoustical communication over the air medium can be extended to multi-hop communications and even to wireless mesh networks. A covert acoustical mesh network can be conceived as a meshed botnet or malnet that is accessible via inaudible audio transmissions. Different applications of covert acoustical mesh networks are presented, including the use for remote keylogging over multiple hops. It is shown that the concept of a covert acoustical mesh network renders many conventional security concepts useless, as acoustical communications are usually not considered. Finally, countermeasures against covert acoustical mesh networks are discussed, including the use of lowpass filtering in computing systems and a host-based intrusion detection system for analyzing audio input and output in order to detect any irregularities.
4
cs.CR
2,094,380,852
High Energy First Hef Heuristic for Energy Efficient Target Coverage Problem
The target coverage problem in wireless sensor networks is concerned with maximizing the lifetime of the network while continuously monitoring a set of targets. A sensor covers targets that are within its sensing range. For a set of sensors and a set of targets, the sensor-target coverage relationship is assumed to be known. A sensor cover is a set of sensors that covers all the targets. The target coverage problem is to determine a set of sensor covers with maximum aggregated lifetime while constraining the life of each sensor by its initial battery life. The problem is proved to be NP-complete, and heuristic algorithms to solve it have been proposed. In the present study, we give a unified interpretation of earlier algorithms and propose a new and efficient algorithm. We show that all known algorithms are based on a common reasoning, though they seem to be derived from different algorithmic paradigms. We also show that though some algorithms guarantee a bound on the quality of the solution, this bound is neither meaningful nor practical. Our interpretation provides better insight into the solution techniques. We propose a new greedy heuristic which prioritizes sensors on residual battery life. We show empirically that the proposed algorithm outperforms all other heuristics in terms of quality of solution. Our experimental study over a large set of randomly generated problem instances also reveals that a very naïve greedy approach yields solutions that are reasonably close (approximately 10%) to the actual optimal solutions. (A minimal greedy sketch follows this record.)
8
cs.NI
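A minimal sketch of a greedy cover heuristic that prioritizes residual battery life, in the spirit of the High Energy First idea above; the data structures and the unit-time scheduling loop are assumptions, not the paper's exact algorithm.

```python
# Sketch: repeatedly build a sensor cover preferring high-residual-energy
# sensors, run it for one time slot, and stop when no full cover remains.
def high_energy_first(covers, battery, targets, dt=1.0):
    """covers: {sensor: set of targets}; battery: {sensor: residual life}."""
    lifetime = 0.0
    while True:
        cover, uncovered = [], set(targets)
        # greedily pick the live sensor with the most residual energy
        # among those covering at least one still-uncovered target
        for s in sorted(battery, key=battery.get, reverse=True):
            if battery[s] >= dt and covers[s] & uncovered:
                cover.append(s)
                uncovered -= covers[s]
                if not uncovered:
                    break
        if uncovered:            # no full cover is possible any more
            return lifetime
        for s in cover:          # activate the cover for one time slot
            battery[s] -= dt
        lifetime += dt

covers = {"a": {1, 2}, "b": {2, 3}, "c": {1, 3}, "d": {1, 2, 3}}
battery = {"a": 2.0, "b": 2.0, "c": 2.0, "d": 1.0}
print(high_energy_first(covers, battery, targets={1, 2, 3}))  # 4.0
```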
2,096,761,913
Faster Radix Sort via Virtual Memory and Write Combining
Sorting algorithms are the deciding factor for the performance of common operations such as removal of duplicates or database sort-merge joins. This work focuses on 32-bit integer keys, optionally paired with a 32-bit value. We present a fast radix sorting algorithm that builds upon a microarchitecture-aware variant of counting sort. Taking advantage of virtual memory and making use of write-combining yields a per-pass throughput corresponding to at least 88% of the system’s peak memory bandwidth. Our implementation outperforms Intel’s recently published radix sort by a factor of 1.5. It also compares favorably to the reported performance of an algorithm for Fermi GPUs when data-transfer overhead is included. These results indicate that scalar, bandwidth-sensitive sorting algorithms remain competitive on current architectures. Various other memory-intensive applications can benefit from the techniques described herein. (The textbook radix-sort core follows this record.)
34
cs.DS
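The textbook core the paper builds on: LSD radix sort composed of counting-sort passes over 8-bit digits. The virtual-memory and write-combining optimizations are not modeled here; this is a Python illustration of the algorithm family only.

```python
# Sketch: LSD radix sort for 32-bit keys via counting-sort passes.
def radix_sort_u32(keys, radix_bits=8):
    """Sort non-negative 32-bit integers with 8-bit counting-sort passes."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, 32, radix_bits):
        buckets = [0] * (mask + 1)
        for k in keys:                        # histogram (counting) pass
            buckets[(k >> shift) & mask] += 1
        total = 0
        for i, c in enumerate(buckets):       # prefix sum -> start offsets
            buckets[i] = total
            total += c
        out = [0] * len(keys)
        for k in keys:                        # stable scatter pass
            out[buckets[(k >> shift) & mask]] = k
            buckets[(k >> shift) & mask] += 1
        keys = out
    return keys

print(radix_sort_u32([170, 45, 75, 90, 802, 24, 2, 66]))
```

The stable scatter pass is what the paper's write-combining optimization accelerates: each pass streams keys into 256 buckets, which is bandwidth-bound on modern hardware.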
2,097,759,652
Detecting Intentional Packet Drops on the Internet via Tcp Ip Side Channels Extended Version
We describe a method for remotely detecting intentional packet drops on the Internet via side channel inferences. That is, given two arbitrary IP addresses on the Internet that meet some simple requirements, our proposed technique can discover packet drops (e.g., due to censorship) between the two remote machines, as well as infer in which direction the packet drops are occurring. The only major requirements for our approach are a client with a global IP Identifier (IPID) and a target server with an open port. We require no special access to the client or server. Our method is robust to noise because we apply intervention analysis based on an autoregressive-moving-average (ARMA) model. In a measurement study using our method featuring clients from multiple continents, we observed that, of all measured client connections to Tor directory servers that were censored, 98% of those were from China, and only 0.63% of measured client connections from China to Tor directory servers were not censored. This is congruent with current understandings about global Internet censorship, leading us to conclude that our method is effective.
8
cs.NI
2,099,527,426
Cognitive and Cache Enabled D2d Communications in Cellular Networks
Caching popular contents for frequent access is an attractive way of exploiting the redundancy of user requests. Exploiting cognition in cache-enabled D2D communication in multichannel cellular networks is the main focus of this paper. We contribute to analyzing cache-based content delivery in a two-tier heterogeneous network (HetNet) composed of base stations (BSs) and device-to-device (D2D) pairs, where the D2D pairs access the network with overlay spectrum sharing. Node locations are first modeled as mutually independent Poisson Point Processes (PPPs), and the service queueing process is formulated. The corresponding tier association and cognitive access protocol are developed. The D2D transmitter (TX) performs overlay spectrum sensing within its spectrum sensing region (SSR) to detect the idleness of cellular channels. Then the numbers of BSs and D2D TXs in the SSR are analyzed. We further derive the probability mass function (PMF) of the delay and the queue length, modeling the traffic dynamics of request arrivals and departures at the BS and D2D TX as a discrete-time multiserver queue with priorities. Moreover, the impacts of some key network parameters, e.g., the content popularity, the request density and the caching storage, on the system performance are investigated to provide valuable insight.
28
cs.IT
2,100,725,109
Using Transmit Only Sensors to Reduce Deployment Cost of Wireless Sensor Networks
We consider a hybrid wireless sensor network with regular and transmit-only sensors. The transmit-only sensors do not have the receiver circuit (or have a very low data-rate one), hence are cheaper and less energy consuming, but their transmissions cannot be coordinated. Regular sensors, also called cluster-heads, are responsible for receiving information from the transmit-only sensors and forwarding it to sinks. The main goal of such a hybrid network is to reduce the cost of deployment while achieving some performance goals (minimum coverage, sensing rate, etc). In this paper we are interested in the communication between the transmit-only sensors and the cluster-heads. Since the sensors have no feedback, their transmission schedule is random. The cluster-heads, on the contrary, adapt their reception policy to achieve the performance goals. Using a mathematical model of random access networks developed in [1] we define and evaluate packet admission policies for different performance criteria. We show that the proposed hybrid network architecture, using the optimal policies, can achieve substantial dollar cost and power consumption savings as compared to conventional architectures while providing the same performance guarantees.
8
cs.NI
2,104,308,413
High Sir Transmission Capacity of Wireless Networks with General Fading and Node Distribution
In many wireless systems, interference is the main performance-limiting factor, and is primarily dictated by the locations of concurrent transmitters. In many earlier works, the locations of the transmitters are often modeled as a Poisson point process (PPP) for analytical tractability. While analytically convenient, the PPP only accurately models networks whose nodes are placed independently and use ALOHA as the channel access protocol, which preserves the independence. Correlations between transmitter locations in non-Poisson networks, which model intelligent access protocols, make the outage analysis extremely difficult. In this paper, we take an alternative approach and focus on an asymptotic regime where the density of interferers η goes to 0. We prove for general node distributions and fading statistics that the success probability P_s ~ 1 - γη^κ as η → 0, and provide values of γ and κ for a number of important special cases. We show that κ is lower bounded by 1 and upper bounded by a value that depends on the path loss exponent and the fading. This new analytical framework is then used to characterize the transmission capacity of a very general class of networks, defined as the maximum spatial density of active links given an outage constraint.
28
cs.IT
2,109,463,955
Towards a Multi Criteria Development Distribution Model an Analysis of Existing Task Distribution Approaches
Distributing development tasks in the context of global software development bears both many risks and many opportunities. Nowadays, distributed development is often driven by only a few factors or even just a single factor such as workforce costs. Risks and other relevant factors such as workforce capabilities, the innovation potential of different regions, or cultural factors are often not recognized sufficiently. This could be improved by using empirically-based multi-criteria distribution models. Currently, there is a lack of such decision models for distributing software development work. This article focuses on mechanisms for such decision support. First, requirements for a distribution model are formulated based on needs identified from practice. Then, distribution models from different domains are surveyed, compared, and analyzed in terms of suitability. Finally, research questions and directions for future work are given.
23
cs.SE
2,114,297,064
Dynamic Autotuning of Adaptive Fast Multipole Methods on Hybrid Multicore Cpu and Gpu Systems
Dynamic autotuning of adaptive fast multipole methods on hybrid multicore CPU and GPU systems
5
cs.DC
2,117,327,438
Efficient Reconciliation Protocol for Discrete Variable Quantum Key Distribution
Reconciliation is an essential part of any secret-key agreement protocol and hence of a Quantum Key Distribution (QKD) protocol, where two legitimate parties are given correlated data and want to agree on a common string in the presence of an adversary, while revealing a minimum amount of information. In this paper, we show that for discrete-variable QKD protocols, this problem can be advantageously solved with Low Density Parity Check (LDPC) codes optimized for the binary symmetric channel (BSC). In particular, we demonstrate that our method leads to a significant improvement of the achievable secret key rate, with respect to earlier interactive reconciliation methods used in QKD.
28
cs.IT
2,128,095,813
Dolfin Automated Finite Element Computing
We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level of expressiveness with efficient computation. Finite element variational forms may be expressed in near mathematical notation, from which low-level code is automatically generated, compiled, and seamlessly integrated with efficient implementations of computational meshes and high-performance linear algebra. Easy-to-use object-oriented interfaces to the library are provided in the form of a C++ library and a Python module. This article discusses the mathematical abstractions and methods used in the design of the library and its implementation. A number of examples are presented to demonstrate the use of the library in application code.
32
cs.MS
2,130,190,443
Test Bed Based Comparison of Single and Parallel Tcp and the Impact of Parallelism on Throughput and Fairness in Heterogenous Networks
Parallel Transport Control Protocol (TCP) has been used to effectively utilize bandwidth for data-intensive applications over high Bandwidth-Delay Product (BDP) networks. On the other hand, it has been argued that a single TCP connection with proper modifications, such as HSTCP, can emulate and capture the robustness of parallel TCP and can well replace it. In this work, a comparison between single-based TCP and the proposed parallel TCP has been conducted to show the differences in their performance measurements, such as throughput and throughput ratio; link-sharing fairness has also been observed to show the impact of using the proposed parallel TCP on existing single-based TCP connections. The experiment results show that single-based TCP cannot overcome parallel TCP, especially in heterogeneous networks where packet losses are common. Furthermore, the proposed parallel TCP does not affect TCP fairness, which makes parallel TCP highly recommended to effectively utilize bandwidth for data-intensive applications.
8
cs.NI
2,131,830,786
Navigation Domain Representation for Interactive Multiview Imaging
Enabling users to interactively navigate through different viewpoints of a static scene is a new interesting functionality in 3D streaming systems. While it opens exciting perspectives toward rich multimedia applications, it requires the design of novel representations and coding techniques to solve the new challenges imposed by interactive navigation. In particular, the encoder must prepare a priori a compressed media stream that is flexible enough to enable the free selection of multiview navigation paths by different streaming media clients. Interactivity clearly brings new design constraints: the encoder is unaware of the exact decoding process, while the decoder has to reconstruct information from incomplete subsets of data since the server generally cannot transmit images for all possible viewpoints due to resource constraints. In this paper, we propose a novel multiview data representation that permits us to satisfy bandwidth and storage constraints in an interactive multiview streaming system. In particular, we partition the multiview navigation domain into segments, each of which is described by a reference image (color and depth data) and some auxiliary information. The auxiliary information enables the client to recreate any viewpoint in the navigation segment via view synthesis. The decoder is then able to navigate freely in the segment without further data requests to the server; it requests additional data only when it moves to a different segment. We discuss the benefits of this novel representation in interactive navigation systems and further propose a method to optimize the partitioning of the navigation domain into independent segments, under bandwidth and storage constraints. Experimental results confirm the potential of the proposed representation; namely, our system leads to similar compression performance as classical inter-view coding, while it provides the high level of flexibility that is required for interactive streaming. Because of these unique properties, our new framework represents a promising solution for 3D data representation in novel interactive multimedia services.
1
cs.MM
2,132,690,206
An Analysis of Device Free and Device Based Wifi Localization Systems
WiFi-based localization became one of the main indoor localization techniques due to the ubiquity of WiFi connectivity. However, indoor environments exhibit complex wireless propagation characteristics. Typically, these characteristics are captured by constructing a fingerprint map for the different locations in the area of interest. This fingerprint requires significant overhead in manual construction, and thus has been one of the major drawbacks of WiFi-based localization. In this paper, the authors present an automated tool for fingerprint construction and leverage it to study novel scenarios for device-based and device-free WiFi-based localization that are difficult to evaluate in a real environment. In particular, the authors examine the effects of changing the access point (AP) mounting location, upgrading the AP technology, and crowds during calibration and operation, among others, on the accuracy of the localization system. The authors present the analysis for the two classes of WiFi-based localization: device-based and device-free. The authors' analysis highlights factors affecting the localization system accuracy, shows how to tune it for better localization, and provides insights for both researchers and practitioners.
8
cs.NI
2,136,787,699
Design and Analysis of a Multi Carrier Differential Chaos Shift Keying Communication System
A new Multi-Carrier Differential Chaos Shift Keying (MC-DCSK) modulation is presented in this paper. The system endeavors to provide a good trade-off between robustness, energy efficiency and high data rate, while still being simple compared to conventional multi-carrier spread spectrum systems. This system can be seen as a parallel extension of the DCSK modulation where one chaotic reference sequence is transmitted over a predefined subcarrier frequency. Multiple modulated data streams are transmitted over the remaining subcarriers. This transmitter structure increases the spectral efficiency of the conventional DCSK system and uses less energy. The receiver design makes this system easy to implement where no radio frequency (RF) delay circuit is needed to demodulate received data. Various system design parameters are discussed throughout the paper, including the number of subcarriers, the spreading factor, and the transmitted energy. Once the design is explained, the bit error rate performance of the MC-DCSK system is computed and compared to the conventional DCSK system under multipath Rayleigh fading and an additive white Gaussian noise (AWGN) channels. Simulation results confirm the advantages of this new hybrid design.
21
cs.OH
2,136,961,259
Depechemood a Lexicon for Emotion Analysis From Crowd Annotated News
While many lexica annotated with word polarity are available for sentiment analysis, very few tackle the harder task of emotion analysis, and those that do are usually quite limited in coverage. In this paper, we present a novel approach for extracting, in a totally automated way, a high-coverage and high-precision lexicon of roughly 37 thousand terms annotated with emotion scores, called DepecheMood. Our approach exploits in an original way 'crowd-sourced' affective annotation implicitly provided by readers of news articles from rappler.com. Our experiments show the beneficial impact of harvesting social media data for affective lexicon building: DepecheMood provides new state-of-the-art performance in unsupervised settings for regression and classification tasks, even with a naïve approach.
30
cs.CL
2,137,586,949
A Framework for Specifying Prototyping and Reasoning About Computational Systems
This thesis concerns the development of a framework that facilitates the design and analysis of formal systems. Specifically, this framework is intended to provide (1) a specification language which supports the concise and direct description of a system based on its informal presentation, (2) a mechanism for animating the specification language so that descriptions written in it can quickly and effectively be turned into prototypes of the systems they are about, and (3) a logic for proving properties of descriptions provided in the specification language and thereby of the systems they encode. A defining characteristic of the proposed framework is that it is based on two separate but closely intertwined logics. One of these is a specification logic that facilitates the description of computational structure, while the other is a logic that exploits the special characteristics of the specification logic to support reasoning about the computational behavior of systems that are described using it. Both logics embody a natural treatment of binding structure by using the λ-calculus as a means for representing objects and by incorporating special mechanisms for working with such structure. By using this technique, they lift the treatment of binding from the object language into the domain of the relevant meta logic, thereby allowing the specification or analysis components to focus on the more essential logical aspects of the systems that are encoded.

One focus of this thesis is on developing a rich and expressive reasoning logic that is of use within the described framework. This work exploits a previously developed capability of definitions for embedding recursive specifications into the reasoning logic; this notion of definitions is complemented by a device for case-analysis-style reasoning over the descriptions they encode. Use is also made of a special kind of judgment called a generic judgment for reflecting object-language binding into the meta logic and thereby for reasoning about such structure. Existing methods have, however, had a shortcoming in how they combine these two devices. Generic judgments lead to the introduction of syntactic objects called nominal constants into formulas and terms. The manner in which such objects are introduced often ensures that they satisfy certain properties which it is necessary to take note of in the reasoning process; unfortunately, this has heretofore not been possible. To overcome this problem, we introduce a special binary relation between terms called nominal abstraction and show that it can be combined with definitions to encode the desired properties. The treatment of definitions is further enriched by endowing them with the capability of being interpreted inductively or co-inductively. The resulting logic is shown to be consistent, and examples are presented to demonstrate its richness and usefulness in reasoning tasks.

This thesis is also concerned with the practical application of the logical machinery it develops. Specifically, it describes an interactive, tactic-style theorem prover called Abella that realizes the reasoning logic. Abella embodies the use of lemmas in proofs and also provides intuitively well-motivated tactics for inductive and co-inductive reasoning. The idea of reasoning using two levels of logic is exploited in this context. This form of reasoning, pioneered by McDowell and Miller, embeds the specification logic explicitly into the reasoning logic and then reasons about particular specifications through this embedding. The usefulness of this approach is demonstrated by showing that general properties can be proved about the specification logic and then used as lemmas to simplify the overall reasoning process. We use these ideas together with Abella to develop several interesting and challenging proofs. The examples considered include ones in the recently proposed POPLmark challenge and a formalization of Girard's proof of strong normalization for the simply-typed λ-calculus. We also explore the notion of adequacy that relates theorems proved using Abella to the properties of the object systems that are ultimately of primary interest. (Abstract shortened by UMI.)
2
cs.LO
2,137,768,004
Hierarchical Hidden Markov Model in Detecting Activities of Daily Living in Wearable Videos for Studies of Dementia
This paper presents a method for indexing activities of daily living in videos acquired from wearable cameras. It addresses the problem of analyzing the complex multimedia data acquired from wearable devices, a growing concern given the increasing amount of such data. In the context of dementia diagnosis, patient activities are recorded in their home environment using a lightweight wearable device, to be later visualized by medical practitioners. The recording mode poses great challenges, since the video data consists of a single sequence shot in which strong motion and sharp lighting changes often appear. Because of the length of the recordings, tools for efficient navigation in terms of activities of interest are crucial. Our work introduces a video structuring approach that combines automatic motion-based segmentation of the video with activity recognition by a hierarchical two-level Hidden Markov Model. We define a multi-modal description space over visual and audio features, including mid-level features such as motion, location, speech, and noise detections. We show their complementarity globally as well as for specific activities. Experiments on real data obtained from recordings of several patients at home show the difficulty of the task and the promising results of the proposed approach.
1
cs.MM
2,146,699,143
A Degree Centrality in Multi Layered Social Network
Multi-layered social networks reflect the complex relationships existing in modern interconnected IT systems. In such a network, each pair of nodes may be linked by many edges that correspond to different user communication or collaboration activities. A multi-layered degree centrality for multi-layered social networks is presented in the paper. Experimental studies were carried out on data collected from a real Web 2.0 site. The multi-layered social network extracted from this data consists of ten distinct layers, and the network analysis was performed for different degree centrality measures.
26
cs.SI
2,148,698,193
Four Conceptions of Instruction Sequence Faults
The notion of an instruction sequence fault is considered from various perspectives. Four different viewpoints on what constitutes a fault, or how to use the notion of a fault, are formulated. An integration of these views is proposed.
23
cs.SE
2,149,210,625
A Game Theoretic Analysis of Incentives in Content Production and Sharing over Peer to Peer Networks
Peer-to-peer (P2P) networks can be easily deployed to distribute user-generated content at a low cost, but the free-rider problem hinders the efficient utilization of P2P networks. Using game theory, we investigate incentive schemes to overcome the free-rider problem in content production and sharing. We build a basic model and obtain two benchmark outcomes: 1) the non-cooperative outcome without any incentive scheme and 2) the cooperative outcome. We then propose and examine three incentive schemes based on pricing, reciprocation, and intervention. We also study a brute-force scheme that enforces full sharing of produced content. We find that 1) cooperative peers share all produced content while non-cooperative peers do not share at all without an incentive scheme; 2) by utilizing the P2P network efficiently, the cooperative outcome achieves higher social welfare than the non-cooperative outcome does; 3) a cooperative outcome can be achieved among non-cooperative peers by introducing an incentive scheme based on pricing, reciprocation, or intervention; and 4) enforced full sharing has ambiguous welfare effects on peers. In addition to describing the solutions of different formulations, we discuss enforcement and informational requirements to implement each solution, aiming to offer a guideline for protocol design for P2P networks.
8
cs.NI
2,151,152,358
Subjective and Objective Quality Assessment of Image a Survey
With the increasing demand for image-based applications, the efficient and reliable evaluation of image quality has increased in importance. Measuring image quality is of fundamental importance for numerous image processing applications, where the goal of image quality assessment (IQA) methods is to automatically evaluate the quality of images in agreement with human quality judgments. Numerous IQA methods have been proposed over the past years to fulfill this goal. In this paper, a survey of the quality assessment methods for conventional image signals, as well as the newly emerged ones, which include high dynamic range (HDR) and 3-D images, is presented. A comprehensive explanation of subjective and objective IQA and their classification is provided. Six widely used subjective quality datasets and common performance measures are reviewed. Emphasis is given to full-reference image quality assessment (FR-IQA) methods, and nine often-used quality measures (including mean squared error (MSE), structural similarity index (SSIM), multi-scale structural similarity index (MS-SSIM), visual information fidelity (VIF), most apparent distortion (MAD), feature similarity measure (FSIM), feature similarity measure for color images (FSIMC), dynamic range independent measure (DRIM), and tone-mapped images quality index (TMQI)) are carefully described, and their performance and computation time on four subjective quality datasets are evaluated. Furthermore, a brief introduction to 3-D IQA is provided, and the issues related to this area of research are reviewed.
1
cs.MM
2,152,297,266
Convergence Analysis Using the Edge Laplacian Robust Consensus of Nonlinear Multi Agent Systems via Iss Method
This study develops an original and innovative matrix representation of the information flow in networked multi-agent systems. To begin with, the general concept of the edge Laplacian of a digraph is proposed, together with its algebraic properties. Benefiting from this novel graph-theoretic tool, we can build a bridge between the consensus problem and the edge agreement problem; we also show that the edge Laplacian sheds new light on solving the leaderless consensus problem. Based on the edge agreement framework, the technical challenges caused by unknown but bounded disturbances and inherently nonlinear dynamics can be well handled. In particular, we design an integrated procedure for a new robust consensus protocol that is based on a blend of algebraic graph theory and the newly developed cyclic-small-gain theorem. Besides, to highlight the intricate relationship between the original graph and the cyclic-small-gain theorem, the concept of an edge-interconnection graph is introduced for the first time. Finally, simulation results are provided to verify the theoretical analysis.
19
cs.SY
End of preview.

Multi-Scale Heterogeneous Text-Attributed Graph Datasets From Diverse Domains

  • Multiple Scales. Our HTAG datasets span multiple scales, ranging from small (24K nodes, 104K edges) to large (5.6M nodes, 29.8M edges). Smaller datasets are suitable for testing computationally intensive algorithms, while larger datasets, such as DBLP and Patent, support the development of scalable models that leverage mini-batching and distributed training.
  • Diverse Domains. Our HTAG datasets include heterogeneous graphs that are representative of a wide range of domains: movie collaboration, community question answering, academic, book publication, and patent application. The broad coverage of domains empowers the development and demonstration of graph foundation models and helps differentiate them from domain-specific approaches.
  • Realistic and Reproducible Evaluation. We provide an automated evaluation pipeline for HTAGs that streamlines data processing, loading, and model evaluation, ensuring seamless reproducibility. Additionally, we employ time-based data splits for each dataset, which offer a more realistic and meaningful evaluation compared to traditional random splits; a sketch of such a split is shown after this list.
  • Open-source Code for Dataset Construction. We have released the complete code for constructing our HTAG datasets, allowing researchers to build larger and more complex heterogeneous text-attributed graph datasets. For example, the CroVal dataset construction code can be used to create web-scale community question-answering networks, such as those derived from StackExchange data dumps. This initiative aims to further advance the field by providing the tools necessary for replicating and extending our datasets for a wide range of applications.
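
As referenced above, here is a minimal sketch of building a time-based split from the stored node years. The cut-off years below are placeholders rather than the official split boundaries, and the file path assumes tmdb.pkl has been downloaded as shown in the Download section below.

import pickle

import numpy as np

# Load the pickled TMDB dictionary (hypothetical local path).
with open("./data/tmdb/tmdb.pkl", "rb") as f:
    tmdb = pickle.load(f)

years = tmdb["movie_years"]
# Placeholder cut-offs: older movies train, newer ones validate and test.
train_idx = np.where(years <= 2014)[0]
valid_idx = np.where(years == 2015)[0]
test_idx = np.where(years >= 2016)[0]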

Download

from huggingface_hub import snapshot_download

# Download all
snapshot_download(repo_id="Cloudy1225/HTAG", repo_type="dataset", local_dir="./data")

# Or just download heterogeneous graphs and PLM-based node features
snapshot_download(repo_id="Cloudy1225/HTAG", repo_type="dataset", local_dir="./data", allow_patterns="*.pkl")

# Or just download raw texts
snapshot_download(repo_id="Cloudy1225/HTAG", repo_type="dataset", local_dir="./data", allow_patterns=["*.csv", "*.csv.zip"])

Dataset Format

The dataset includes heterogeneous graph edges, raw texts, PLM-based node features, labels, and years associated with the text-attributed nodes. Raw texts are provided in .csv or .csv.zip files, while the remaining data are stored as a dictionary object in a .pkl file.
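
The .pkl file can be read with Python's standard pickle module; a minimal sketch (the exact local path is an assumption based on the Download step above):

import pickle

# Load the pickled dictionary holding edges, features, labels, and years.
with open("./data/tmdb/tmdb.pkl", "rb") as f:
    tmdb = pickle.load(f)

For example, reading the tmdb.pkl file yields the following dictionary: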

{'movie-actor': (array([   0,    0,    0, ..., 7504, 7504, 7504], dtype=int16),
  array([    0,     1,     2, ..., 11870,  1733, 11794], dtype=int16)),
 'movie-director': (array([   0,    0,    0, ..., 7503, 7503, 7504], dtype=int16),
  array([   0,    1,    2, ..., 3423,  966, 2890], dtype=int16)),
 'movie_labels': array([3, 1, 1, ..., 1, 1, 2], dtype=int8),
 'movie_feats': array([[ 0.00635284,  0.00649689,  0.01250827, ...,  0.06342042,
         -0.01747945,  0.0134356 ],
        [-0.14075027,  0.02825641,  0.02670695, ..., -0.12270895,
          0.08417314,  0.02486392],
        [ 0.00014208, -0.02286632,  0.00615967, ..., -0.03311544,
          0.04735276, -0.07458566],
        ...,
        [ 0.01835816,  0.07484645, -0.08099765, ..., -0.00150019,
          0.01669764,  0.00456595],
        [-0.00821487, -0.10434289,  0.01928608, ..., -0.06343049,
          0.05060194, -0.04229118],
        [-0.06465845,  0.13461556, -0.01640793, ..., -0.06274845,
          0.04002513, -0.00751513]], dtype=float32),
 'movie_years': array([2013, 1995, 1989, ..., 1939, 1941, 1965], dtype=int16)}
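
These arrays can then be assembled into a heterogeneous graph object for downstream models. Below is a minimal sketch using PyTorch Geometric's HeteroData; torch and torch_geometric are assumptions (they are not shipped with the dataset), and the repository may provide its own loaders.

import pickle

import torch
from torch_geometric.data import HeteroData

# Reload the pickled dictionary (same hypothetical path as above).
with open("./data/tmdb/tmdb.pkl", "rb") as f:
    tmdb = pickle.load(f)

data = HeteroData()
# Node features and labels for the target node type.
data["movie"].x = torch.from_numpy(tmdb["movie_feats"])
data["movie"].y = torch.from_numpy(tmdb["movie_labels"]).long()
# Edge indices must be int64; the stored arrays use smaller integer types.
for etype in ("movie-actor", "movie-director"):
    src, dst = tmdb[etype]
    s, t = etype.split("-")
    data[s, "to", t].edge_index = torch.stack([
        torch.as_tensor(src, dtype=torch.long),
        torch.as_tensor(dst, dtype=torch.long),
    ])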

Dataset Statistics

Dataset  # Nodes                 # Edges                       # Classes  # Splits
TMDB     24,412                  104,858                       4          Train: 5,698
           Movie: 7,505            Movie-Actor: 86,517                    Valid: 711
           Actor: 13,016           Movie-Director: 18,341                 Test: 1,096
           Director: 3,891
CroVal   44,386                  164,981                       6          Train: 980
           Question: 34,153        Question-Question: 46,269              Valid: 1,242
           User: 8,898             Question-User: 34,153                  Test: 31,931
           Tag: 1,335              Question-Tag: 84,559
ArXiv    231,111                 2,075,692                     40         Train: 47,084
           Paper: 81,634           Paper-Paper: 1,019,624                 Valid: 18,170
           Author: 127,590         Paper-Author: 300,233                  Test: 16,380
           FoS: 21,887             Paper-FoS: 755,835
Book     786,257                 9,035,291                     8          Train: 330,201
           Book                    Book-Book: 7,614,902                   Valid: 57,220
           Author                  Book-Author: 825,905                   Test: 207,063
           Publisher               Book-Publisher: 594,484
DBLP     1,989,010               29,830,033                    9          Train: 508,464
           Paper: 964,350          Paper-Paper: 16,679,526                Valid: 158,891
           Author: 958,961         Paper-Author: 3,070,343                Test: 296,995
           FoS: 65,699             Paper-FoS: 10,080,164
Patent   5,646,139               8,833,738                     120        Train: 1,705,155
           Patent: 2,762,187       Patent-Inventor: 6,071,551             Valid: 374,275
           Inventor: 2,873,311     Patent-Examiner: 2,762,187             Test: 682,757
           Examiner: 10,641

Dataset Construction

The code for dataset construction can be found in each graph_builder.ipynb file. Please see the README.md in each subfolder for more details.
