diff --git "a/training/sample_data/specter_small.json" "b/training/sample_data/specter_small.json" deleted file mode 100644--- "a/training/sample_data/specter_small.json" +++ /dev/null @@ -1 +0,0 @@ -[{"query": {"sha": "d15ca38d2fb1250222132b5bf3fed8d249bcac45", "title": "The Effect of Aesthetic on the Usability of Data Visualization", "abstract": "Aesthetic seems currently under represented in most current data visualization evaluation methodologies. This paper investigates the results of an online survey of 285 participants, measuring both perceived aesthetic as well as the efficiency and effectiveness of retrieval tasks across a set of 11 different data visualization techniques. The data visualizations represent an identical hierarchical dataset, which has been normalized in terms of color, typography and layout balance. This study measured parameters such as speed of completion, accuracy rate, task abandonment and latency of erroneous response. Our findings demonstrate a correlation between latency in task abandonment and erroneous response time in relation to visualization's perceived aesthetic. These results support the need for an increased recognition for aesthetic in the typical evaluation process of data visualization techniques.", "corpus_id": 12871252}, "pos": {"sha": "536a5136eaf69bea96ba015d0a36b327c16909af", "title": "Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues", "abstract": "Three experiments were conducted to validate and replicate. in a different cultural setting. the results of a study by Kurosu and Kashimura [12] concerning the relationships between users\u2019 perceptions of interface aesthetics and usability. The results support the basic tindings by Kurosu and Kashimura. Very high correlations were found between perceived aesthetics of the interface and a priori perceived ease of use of the system. Differences of magnitude between correlations obtained in Japan and in Israel suggest the existence of cross-cultural differences. but these were not in the hypothesized direction.", "corpus_id": 207211621}, "neg": {"sha": "07cf39de2af9609e3946ade1b9fa2cb25550728d", "title": "Vision-Based Kidnap Recovery with SLAM for Home Cleaning Robots", "abstract": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to selfoptimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upwardlooking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for S. Lee \u00b7 S. Lee (B) School of Information and Communication Engineering and Department of Interaction Science, Sungkyunkwan University, Suwon, South Korea e-mail: lsh@ece.skku.ac.kr S. Lee e-mail: seongsu.lee@lge.com S. Lee \u00b7 S. 
Baek Future IT Laboratory, LG Electronics Inc., Seoul, South Korea e-mail: seungmin2.baek@lge.com an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.", "corpus_id": 859098}}, {"query": {"sha": "682346450d9015975e1849203a23019d1ccb50ff", "title": "Non-Orthogonal Access with Random Beamforming and Intra-Beam SIC for Cellular MIMO Downlink", "abstract": "We investigate non-orthogonal access with a successive interference canceller (SIC) in the cellular multiple-input multiple-output (MIMO) downlink for systems beyond LTE-Advanced. Taking into account the overhead for the downlink reference signaling for channel estimation at the user terminal in the case of non-orthogonal multiuser multiplexing and the applicability of the SIC receiver in the MIMO downlink, we propose intra-beam superposition coding of a multiuser signal at the transmitter and the spatial filtering of inter-beam interference followed by the intra-beam SIC at the user terminal receiver. The intra-beam SIC cancels out the inter-user interference within a beam. Furthermore, the transmitter beamforming (precoding) matrix is controlled based on open loop-type random beamforming, which is very efficient in terms of the amount of feedback information from the user terminal. Simulation results show that the proposed non-orthogonal access scheme with random beamforming and the intra-beam SIC simultaneously achieves better sum and cell-edge user throughput compared to orthogonal access, which is assumed in LTE-Advanced.", "corpus_id": 5341074}, "pos": {"sha": "45ecadf65a779b3b5cbdbfc97fc839564405b24b", "title": "Opportunistic beamforming using dumb antennas", "abstract": "Multiuser diversity is a form of diversity inherent in a wireless network, provided by independent time varying channels across the different users. The diversity benefit is exploited by tracking the channel fluctuations of the users and scheduling transmissions to users when their instantaneous channel quality is near the peak. The diversity gain increases with the dynamic range of the fluctuations and is thus limited in environments with little scattering and/or slow fading. In such environments, we propose the use of multiple transmit antennas to artificially induce large and fast channel fluctuations so that multiuser diversity can still be exploited. The scheme can be interpreted as opportunistic beamforming and we show that true beamforming gains can be achieved when there are sufficient users, even though very limited channel feedback is needed. Furthermore, in a cellular system, the scheme plays an important and dual role of opportunistic nulling of the interference created on users of adjacent cells. 
We discuss the design implications of implementing this scheme in a complete wireless system.", "corpus_id": 1673156}, "neg": {"sha": "9f95eb7ce7ce190c7c8e6fca26de1a283f7007b1", "title": "On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption", "abstract": "Encrypted key transport with RSA-PKCS#1 v1.5 is the most commonly deployed key exchange method in all current versions of the Transport Layer Security (TLS) protocol, including the most recent version 1.2. However, it has several well-known issues, most importantly that it does not provide forward secrecy, and that it is prone to side channel attacks that may enable an attacker to learn the session key used for a TLS session. A long history of attacks shows that RSA-PKCS#1 v1.5 is extremely difficult to implement securely. The current draft of TLS version 1.3 dispenses with this encrypted key transport method. But is this sufficient to protect against weaknesses in RSA-PKCS#1 v1.5?\n We describe attacks which transfer the potential weakness of prior TLS versions to two recently proposed protocols that do not even support PKCS#1 v1.5 encryption, namely Google's QUIC protocol and TLS~1.3. These attacks enable an attacker to impersonate a server by using a vulnerable TLS-RSA server implementation as a \"signing oracle\" to compute valid signatures for messages chosen by the attacker.\n The first attack (on TLS 1.3) requires a very fast \"Bleichenbacher-oracle\" to create the TLS CertificateVerify message before the client drops the connection. Even though this limits the practical impact of this attack, it demonstrates that simply removing a legacy algorithm from a standard is not necessarily sufficient to protect against its weaknesses.\n The second attack on Google's QUIC protocol is much more practical. It can also be applied in settings where forging a signature with the help of a \"Bleichenbacher-oracle\" may take an extremely long time. This is because signed values in QUIC are independent of the client's connection request. Therefore the attacker is able to pre-compute the signature long before the client starts a connection. This makes the attack practical. Moreover, the impact on QUIC is much more dramatic, because creating a single forged signature is essentially equivalent to retrieving the long-term secret key of the server.", "corpus_id": 3256075}}, {"query": {"sha": "13b1d51bcacb2c83027808ab5ba5ea83df2eb968", "title": "Stereoscopic inpainting: Joint color and depth completion from stereo images", "abstract": "We present a novel algorithm for simultaneous color and depth inpainting. The algorithm takes stereo images and estimated disparity maps as input and fills in missing color and depth information introduced by occlusions or object removal. We first complete the disparities for the occlusion regions using a segmentation-based approach. The completed disparities can be used to facilitate the user in labeling objects to be removed. Since part of the removed regions in one image is visible in the other, we mutually complete the two images through 3D warping. Finally, we complete the remaining unknown regions using a depth-assisted texture synthesis technique, which simultaneously fills in both color and depth. 
We demonstrate the effectiveness of the proposed algorithm on several challenging data sets.", "corpus_id": 2533616}, "pos": {"sha": "b40323876b63a12f9eb323a8d0a4eebfcf44c118", "title": "Handling Occlusions in Dense Multi-view Stereo", "abstract": "While stereo matching was originally formulated as the recovery of 3D shape from a pair of images, it is now generally recognized that using more than two images can dramatically improve the quality of the reconstruction. Unfortunately, as more images are added, the prevalence of semioccluded regions (pixels visible in some but not all images) also increases. In this paper, we propose some novel techniques to deal with this problem. Our first idea is to use a combination of shiftable windows and a dynamically selected subset of the neighboring images to do the matches. Our second idea is to explicitly label occluded pixels within a global energy minimization framework, and to reason about visibility within this framework so that only truly visible pixels are matched. Experimental results show a dramatic improvement using the first idea over conventional multibaseline stereo, especially when used in conjunction with a global energy minimization technique. These results also show that explicit occlusion labeling and visibility reasoning do help, but not significantly, if the spatial and temporal selection is applied first.", "corpus_id": 7517501}, "neg": {"sha": "16d4c58fc710e5b38ed0be0214d4b82c6c469661", "title": "Verification of the randomized consensus algorithm of Aspnes and Herlihy: a case study", "abstract": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. 
We apply all of these techniques to analyze the expected complexity of the algorithm.", "corpus_id": 47180476}}, {"query": {"sha": "77e810f9d9194ce4b04dd96953b17caab298802f", "title": "Bringing computational thinking to K-12: what is Involved and what is the role of the computer science education community?", "abstract": "The process of increasing student exposure to computational thinking in K-12 is complex, requiring systemic change, teacher engagement, and development of signifi cant resources. Collaboration with the computer science education community is vital to this effort.", "corpus_id": 207184749}, "pos": {"sha": "ecfeecf0e9955070b64ec28a5a8bbc2e3828e9f9", "title": "A plea for modesty", "abstract": "From time to time a movement arises that promises to save the world, or at least to make it vastly better. The extraordinary achievements of digital computing make it a locus of such movements today. Yet we should be wary; when movements fail they provoke backlash that rejects the more limited gains that they might have afforded. Today \"computational thinking\" has a considerable following, and I would like to discuss some problems with its discourse. It is too often presented in terms that could be interpreted as arrogant or that are overstated. Its descriptions too often lack appropriate examples, and perhaps as a result, it gets misunderstood in casual writing.", "corpus_id": 207179060}, "neg": {"sha": "e982b7a3fbf0cb0304ee5049a07be21dddc863bd", "title": "Advantages and Disadvantages of PowerPoint in Lectures to Science Students", "abstract": "PowerPoint is now widely used in lectures to science students in most colleges of China. We summarize its advantages as producing better visual effects, high efficiency in information transfer, precise and systemic knowledge structure. Disadvantages of PowerPoint may be induced by irrelevant information in slides, neglect of interaction with students, uncontrolled speed in presenting or too strict order of slides. Strategies to avoid these disadvantages are proposed.", "corpus_id": 33078272}}, {"query": {"sha": "25ed2d8a5a8423f34cd86e73c1440f4b09f1760e", "title": "Subjective Panoramic Video Quality Assessment Database for Coding Applications", "abstract": "With the development of virtual reality, higher quality panoramic videos are in great demand to guarantee the immersive viewing experience. Therefore, quality assessment attaches much importance to correlated technologies. Considering the geometric transformation in projection and the limited resolution of head-mounted device (HMD), a modified display protocol of the high resolution sequences for the subjective rating test is proposed, in which an optimal display resolution is determined based on the geometry constraints between screen and human eyes. By sampling the videos to the optimal resolution before coding, the proposed method significantly alleviates the interference of HMD sampling while displaying, thus ensuring the reliability of subjective quality opinion in terms of video coding. Using the proposed display protocol, a subjective quality database for panoramic videos is established for video coding applications. The proposed database contains 50 distorted sequences obtained from ten raw panoramic video sequences. Distortions are introduced with the High Efficiency Video Coding compression. Each sequence is evaluated by 30 subjects on video quality, following the absolute category rating with hidden reference method. 
The rating scores and differential mean opinion scores (DMOSs) are recorded and included in the database. With the proposed database, several state-of-the-art objective quality assessment methods are further evaluated with correlation analysis. The database, including the video sequences, subjective rating scores and DMOS, can be used to facilitate future researches on coding applications.", "corpus_id": 46957366}, "pos": {"sha": "8351681915d4faf06b64ee412a1e8ee136c19c4a", "title": "How visual fatigue and discomfort impact 3D-TV quality of experience: a comprehensive review of technological, psychophysical, and psychological factors", "abstract": "The Quality of Experience (QoE) of 3D contents is usually considered to be the combination of the perceived visual quality, the perceived depth quality, and lastly the visual fatigue and comfort. When either fatigue or discomfort are induced, studies tend to show that observers prefer to experience a 2D version of the contents. For this reason, providing a comfortable experience is a prerequisite for observers to actually consider the depth effect as a visualization improvement. In this paper, we propose a comprehensive review on visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of physiological and psychological processes enabling depth perception. First, we review the multitude of manifestations of visual fatigue and discomfort (near triad disorders, symptoms for discomfort), as well as means for detection and evaluation. We then discuss how, in 3D displays, ocular and cognitive conflicts with real world experience may cause fatigue and discomfort; these includes the accommodation vergence conflict, the inadequacy between presented stimuli and observers depth of focus, and the cognitive integration of conflicting depth cues. We also discuss some limits for stereopsis that constrain our ability to perceive depth, and in particular the perception of planar and in-depth motion, the limited fusion range and various stereopsis disorders. Finally, this paper discusses how the different aspects of fatigue and discomfort apply to 3D technolo\u030a Corresponding author Matthieu Urvoy \u030a \u0308 Marcus Barkowsky \u0308 Patrick Le Callet LUNAM Universit\u00e9, Universit\u00e9 de Nantes, IRCCyN UMR CNRS 6597, Institut de Recherche en Communications et Cybern\u00e9tique de Nantes, Polytech Nantes, rue Christian Pauc BP 50609 44306 Nantes Cedex 3 E-mail: {matthieu.urvoy, marcus.barkowsky, patrick.lecallet} @univ-nantes.fr gies and contents. We notably highlight the need for respecting a comfort zone and avoiding camera and rendering artifacts. We also discuss the influence of visual attention, exposure duration and training. Conclusions provide guidance for best practices and future research.", "corpus_id": 11811246}, "neg": {"sha": "1c26245d3d499df588c3c451801c7303618b07fc", "title": "Biased inheritance of mitochondria during asymmetric cell division in the mouse oocyte.", "abstract": "A fundamental rule of cell division is that daughter cells inherit half the DNA complement and an appropriate proportion of cellular organelles. The highly asymmetric cell divisions of female meiosis present a different challenge because one of the daughters, the polar body, is destined to degenerate, putting at risk essential maternally inherited organelles such as mitochondria. We have therefore investigated mitochondrial inheritance during the meiotic divisions of the mouse oocyte. 
We find that mitochondria are aggregated around the spindle by a dynein-mediated mechanism during meiosis I, and migrate together with the spindle towards the oocyte cortex. However, at cell division they are not equally segregated and move instead towards the oocyte-directed spindle pole and are excluded from the polar body. We show that this asymmetrical inheritance in favour of the oocyte is not caused by bias in the spindle itself but is dependent on an intact actin cytoskeleton, spindle-cortex proximity, and cell cycle progression. Thus, oocyte-biased inheritance of mitochondria is a variation on rules that normally govern organelle segregation at cell division, and ensures that essential maternally inherited mitochondria are retained to provide ATP for early mammalian development.", "corpus_id": 18562645}}, {"query": {"sha": "b2904c7b2c0ceab4e78ec93b20aafb9d67242f8a", "title": "3D Human Pose Machines with Self-supervised Learning", "abstract": "Driven by recent computer vision and robotic applications, recovering 3D human poses has become increasingly important and attracted growing interests. In fact, completing this task is quite challenging due to the diverse appearances, viewpoints, occlusions and inherently geometric ambiguities inside monocular images. Most of the existing methods focus on designing some elaborate priors/constraints to directly regress 3D human poses based on the corresponding 2D human pose-aware features or 2D pose predictions. However, due to the insufficient 3D pose data for training and the domain gap between 2D space and 3D space, these methods have limited scalabilities for all practical scenarios (e.g., outdoor scene). Attempt to address this issue, this paper proposes a simple yet effective self-supervised correction mechanism to learn all intrinsic structures of human poses from abundant images without 3D pose annotations. We further apply our self-supervised correction mechanism to develop a recurrent 3D pose machine, which jointly integrates the 2D spatial relationship, temporal smoothness of predictions and 3D geometric knowledge. Extensive evaluations on the Human3.6M and HumanEva-I benchmarks demonstrate the superior performance and efficiency of our framework over all the compared computing methods.", "corpus_id": 58004714}, "pos": {"sha": "3325860c0c82a93b2eac654f5324dd6a776f609e", "title": "2D Human Pose Estimation: New Benchmark and State of the Art Analysis", "abstract": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. 
Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "corpus_id": 206592419}, "neg": {"sha": "7a061e7eab865fc8d2ef00e029b7070719ad2e9a", "title": "Efficiently Scaling up Crowdsourced Video Annotation", "abstract": "We present an extensive three year study on economically annotating video with crowdsourced marketplaces. Our public framework has annotated thousands of real world videos, including massive data sets unprecedented for their size, complexity, and cost. To accomplish this, we designed a state-of-the-art video annotation user interface and demonstrate that, despite common intuition, many contemporary interfaces are sub-optimal. We present several user studies that evaluate different aspects of our system and demonstrate that minimizing the cognitive load of the user is crucial when designing an annotation platform. We then deploy this interface on Amazon Mechanical Turk and discover expert and talented workers who are capable of annotating difficult videos with dense and closely cropped labels. We argue that video annotation requires specialized skill; most workers are poor annotators, mandating robust quality control protocols. We show that traditional crowdsourced micro-tasks are not suitable for video annotation and instead demonstrate that deploying time-consuming macro-tasks on MTurk is effective. Finally, we show that by extracting pixel-based features from manually labeled key frames, we are able to leverage more sophisticated interpolation strategies to maximize performance given a fixed budget. We validate the power of our framework on difficult, real-world data sets and we demonstrate an inherent trade-off between the mix of human and cloud computing used vs. the accuracy and cost of the labeling. We further introduce a novel, cost-based evaluation criteria that compares vision algorithms by the budget required to achieve an acceptable performance. We hope our findings will spur innovation in the creation of massive labeled video data sets and enable novel data-driven computer vision applications.", "corpus_id": 2315620}}, {"query": {"sha": "449bf3d0cdb94ed77d6ddedfcd69619617777d2a", "title": "Enhanced flexible LoRaWAN node for industrial IoT", "abstract": "The Industrial Internet of Things (IIoT) is introducing the IoT approach in the industrial automation world, paving the way to innovative services for improving efficiency, reliability and availability of industrial processes and products. The IIoT takes advantage of the collection of large amount of data by means of (wireless) links connecting smart sensors attached to the system of interest. Low Power Wide Area Networks emerged as a viable solution for implementing private cellular like communications. In this paper, the LoRaWAN technology is addressed, thanks to the wide acceptance it received in both industrial and academic worlds. In particular, an enhanced node is proposed as a building block of IIoT-enabled industrial wireless networks. It offers new features: it behaves as a regular node; it can act as a gateway toward legacy/different (wired) networks; and it can extend LoRaWAN coverage acting as a range extender (i.e. a single hop forwarder). After a brief overview of LoRa and LoRaWAN, the paper deals with the features of the realized node, exploiting commercially available hardware. The experimental results show the feasibility of the proposed approach. 
In particular, the range extender capability of transmitting replicas of an incoming messages is tested for different transmission delays.", "corpus_id": 49570616}, "pos": {"sha": "c8960733ecc0c4f34ea8366931e5290fa798bc62", "title": "Monitoring of Large-Area IoT Sensors Using a LoRa Wireless Mesh Network System: Design and Evaluation", "abstract": "Although many techniques exist to transfer data from the widely distributed sensors that make up the Internet of Things (IoT) (e.g., using 3G/4G networks or cables), these methods are associated with prohibitively high costs, making them impractical for real-life applications. Recently, several emerging wireless technologies have been proposed to provide long-range communication for IoT sensors. Among these, LoRa has been examined for long-range performance. Although LoRa shows good performance for long-range transmission in the countryside, its radio signals can be attenuated over distance, and buildings, trees, and other radio signal sources may interfere with the signals. Our observations show that in urban areas, LoRa requires dense deployment of LoRa gateways (GWs) to ensure that indoor LoRa devices can successfully transfer data back to remote GWs. Wireless mesh networking is a solution for increasing communication range and packet delivery ratio (PDR) without the need to install additional GWs. This paper presents a LoRa mesh networking system for large-area monitoring of IoT applications. We deployed 19 LoRa mesh networking devices over an $800\\,\\,\\text {m} \\times 600$ m area on our university campus and installed a GW that collected data at 1-min intervals. The proposed LoRa mesh networking system achieved an average 88.49% PDR, whereas the star-network topology used by LoRa achieved only 58.7% under the same settings. To the best of our knowledge, this is the first academic study discussing LoRa mesh networking in detail and evaluating its performance via real experiments.", "corpus_id": 51983195}, "neg": {"sha": "8c51e4c08a07b98b5fa45bb28302072af309e22e", "title": "Automatic segmentation of malaria parasites on thick blood film using blob analysis", "abstract": "Malaria remains a public health problem in Indonesia. There are still many deaths caused by malaria, particularly in eastern Indonesia. There are two types of blood perform in malaria, thick blood film and thin blood film. In Indonesia, thin blood film is used more frequently than thick blood film. Malaria parasites can be found in thick blood film rapidly due to the higher volume of the blood used and sweeping process is not as much on thin blood, still a lot of leukocytes or white blood cells and platelets in the thick blood film, making it more difficult to identify the malaria parasite. Therefore we need a method can identify malaria parasites in thick blood film with a high percentage of accuracy. This study aims to build a segmentation system more objective and reduce subjective factors of medical personnel in the diagnosis of malaria parasites. This study has two main stages, preprocessing and segmentation. We use the HSV color space in the preprocessing and morphological operations and blob analysis on the segmentation stage. 
From the results can be known that the blob analysis was able to identify malaria parasites automatically.", "corpus_id": 16911381}}, {"query": {"sha": "5e47372f571af1fc3065fa36b877e3e75e8f401c", "title": "Edge Provisioning with Flexible Server Placement", "abstract": "We present $\\sf {Tentacle}$ , a decision support framework to provision edge servers for online services providers (OSPs). $\\sf {Tentacle}$ takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between $\\sf {Tentacle}$ and traditional server placement approaches lies on that $\\sf {Tentacle}$ can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how $\\sf {Tentacle}$ effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how $\\sf {Tentacle}$ comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate $\\sf {Tentacle}$ using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget $\\sf {Tentacle}$ can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network.", "corpus_id": 7811824}, "pos": {"sha": "424909ea3e4e5a8cfe5363420926c1b10fbbf034", "title": "Vivaldi: a decentralized network coordinate system", "abstract": "Large-scale Internet applications can benefit from an ability to predict round-trip times to other hosts without having to contact them first. Explicit measurements are often unattractive because the cost of measurement can outweigh the benefits of exploiting proximity information. Vivaldi is a simple, light-weight algorithm that assigns synthetic coordinates to hosts such that the distance between the coordinates of two hosts accurately predicts the communication latency between the hosts. Vivaldi is fully distributed, requiring no fixed network infrastructure and no distinguished hosts. It is also efficient: a new host can compute good coordinates for itself after collecting latency information from only a few other hosts. Because it requires little com-munication, Vivaldi can piggy-back on the communication patterns of the application using it and scale to a large number of hosts. An evaluation of Vivaldi using a simulated network whose latencies are based on measurements among 1740 Internet hosts shows that a 2-dimensional Euclidean model with height vectors embeds these hosts with low error (the median relative error in round-trip time prediction is 11 percent).", "corpus_id": 722037}, "neg": {"sha": "215aad1520ec1b087ab2ba4043f5e0ecc32e7482", "title": "Reducibility Among Combinatorial Problems", "abstract": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. 
Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. Many problems with wide applicability \u2013 e.g., set cover, knapsack, hitting set, max cut, and satisfiability \u2013 lack a polynomial algorithm for solving them, but also lack a proof that no such polynomial algorithm exists. Hence, they remain \u201copen problems.\u201d This paper references the recent work, \u201cOn the Reducibility of Combinatorial Problems\u201d [1]. BODY A large class of open problems are mutually convertible via poly-time reductions. Hence, either all can be solved in poly-time, or none can. REFERENCES [1] R. Karp. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations, 1972. \u2217With apologies to Professor Richard Karp. Volume X of Tiny Transactions on Computer Science This content is released under the Creative Commons Attribution-NonCommercial ShareAlike License. Permission to make digital or hard copies of all or part of this work is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. CC BY-NC-SA 3.0: http://creativecommons.org/licenses/by-nc-sa/3.0/.", "corpus_id": 33509266}}, {"query": {"sha": "39f8b66d92f7ae9b1924d9b16f52444ee7507a9c", "title": "Auditing disclosure by relevance ranking", "abstract": "Numerous widely publicized cases of theft and misuse of private information underscore the need for audit technology to identify the sources of unauthorized disclosure. We present an auditing methodology that ranks potential disclosure sources according to their proximity to the leaked records. Given a sensitive table that contains the disclosed data, our methodology prioritizes by relevance the past queries to the database that could have potentially been used to produce the sensitive table. We provide three conceptually different measures of proximity between the sensitive table and a query result. One measure is inspired by information retrieval in text processing, another is based on statistical record linkage, and the third computes the derivation probability of the sensitive table in a tree-based generative model. We also analyze the characteristics of the three measures and the corresponding ranking algorithms.", "corpus_id": 15386581}, "pos": {"sha": "a2c4dec86a96a99adc00cb664b703e8407216183", "title": "Record Linkage: Current Practice and Future Directions", "abstract": "Record linkage is the task of quickly and accurately identifying records corresponding to the same entity from one or more data sources. Record linkage is also known as data cleaning, entity reconciliation or identification and the merge/purge problem. This paper presents the \u201cstandard\u201d probabilistic record linkage model and the associated algorithm. Recent work in information retrieval, federated database systems and data mining have proposed alternatives to key components of the standard algorithm. The impact of these alternatives on the standard approach are assessed. 
The key question is whether and how these new alternatives are better in terms of time, accuracy and degree of automation for a particular record linkage application.", "corpus_id": 18453442}, "neg": {"sha": "80d30b89e9c79887c2fb22c9dd1b9ed180c77f6e", "title": "A tutorial introduction to compressed sensing", "abstract": "Compressed sensing refers to recovering a large but sparse vector, or a large but low rank matrix, from a small number of linear measurements. This paper presents some of the most popular and useful approaches at present.", "corpus_id": 31985819}}, {"query": {"sha": "e96dfdfadd9cfa8c35f832572f75848446cc5d50", "title": "A low-phase noise 12 GHz digitally controlled oscillator in 65 nm CMOS for a FMCW radar frequency synthesizer", "abstract": "This paper presents a power-efficient low-phase noise digitally controlled oscillator (DCO) implemented in a 65 nm CMOS technology. The DCO is designed for a 12 GHz frequency synthesizer covering up to 1GHz of frequency sweep range demanded by the frequency modulated continuous wave (FMCW) radar. The realized DCO circuit is designed to meet the stringent phase noise specifications of less than \u2212110dBc/Hz @ 1MHz, required by the high resolution industrial indoor secondary FMCW radar system. The 12 GHz frequency synthesizer is based on a fractional-N all digital phased looked-loop (ADPLL) achieving a tuning range of 18.8 % using an 8-bit capacitive DAC array. This way, the DCO reliably covers up to 1GHz of frequency sweep range plus PVT variations. The low-power design achieves a phase noise performance of better than \u2212112.3 dBc/Hz at a 1MHz offset. In summary, the DCO achieves a best in class figure of merit (FOMT) value of \u2212186.7 dB with largest tuning range of 18.8 %, while only consuming 16.4 mW.", "corpus_id": 36820474}, "pos": {"sha": "51b38f45d8a66a996bc45edaef24feb8726b83cb", "title": "A 79 GHz Phase-Modulated 4 GHz-BW CW Radar Transmitter in 28 nm CMOS", "abstract": "Millimeter-wave sensors perform robust and accurate remote motion sensing. We propose a 28 nm CMOS Radar TX that modulates a 79 GHz carrier with a 2 Gsps Pseudo-Noise sequence. The measured modulated output power at 79 GHz in 4 GHz BW is higher than +11 dBm (27\u00b0C), while the spurious emissions are below -20 dBc, fully satisfying the spectral mask regulations. The output RF BW where we can lock the injection-locked LO is 13 GHz. Overall, the TX draws 121 mW from a 0.9 V supply resulting in a record efficiency above 10%. More importantly, the TX is functional up to 125\u00b0C still providing more than +7 dBm output power over the same RF BW.", "corpus_id": 22314853}, "neg": {"sha": "698b753bcce607ef668c3f8e24687fddb3aa58db", "title": "Radar cross section for pedestrian in 76GHz band", "abstract": "This paper describes the results of our evaluation of a pedestrian's radio wave reflection characteristics. The reflection characteristics of radio waves from a pedestrian were measured as part of the effort to improve the pedestrian detection performance of the radar sensor. A pedestrian's radio wave reflection intensity is low, at about 15-20dB less than that of the rear of a vehicle, and can vary by as much as 20dB. 
Evaluating these characteristics in detail is a prerequisite to the development of a radar sensor that is capable of detecting pedestrians reliably.", "corpus_id": 39567604}}, {"query": {"sha": "4cf316b587b200b491871f999744adad52629caa", "title": "Geometric Constraints for Human Detection in Aerial Imagery", "abstract": "In this paper, we propose a method for detecting humans in imagery taken from a UAV. This is a challenging problem due to small number of pixels on target, which makes it more difficult to distinguish people from background clutter, and results in much larger searchspace. We propose a method for human detection based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of groundplane normal, the orientation of shadows cast by humans in the scene, and the relationship between human heights and the size of their corresponding shadows. In cases when metadata is not available we propose a method for automatically estimating shadow orientation from image data. We utilize the above information in a geometry based shadow, and human blob detector, which provides an initial estimation for locations of humans in the scene. These candidate locations are then classified as either human or clutter using a combination of wavelet features, and a Support Vector Machine. Our method works on a single frame, and unlike motion detection based methods, it bypasses the global motion compensation process, and allows for detection of stationary and slow moving humans, while avoiding the search across the entire image, which makes it more accurate and very fast. We show impressive results on sequences from the VIVID dataset and our own data, and provide comparative analysis.", "corpus_id": 7018553}, "pos": {"sha": "865074aaee6c70cfe63ce8cb4d0910913ef7bb78", "title": "Geo-spatial aerial video processing for scene understanding and object tracking", "abstract": "This paper presents an approach to extracting and using semantic layers from low altitude aerial videos for scene understanding and object tracking. The input video is captured by low flying aerial platforms and typically consists of strong parallax from non-ground-plane structures. A key aspect of our approach is the use of geo-registration of video frames to reference image databases (such as those available from Terraserver and Google satellite imagery) to establish a geo-spatial coordinate system for pixels in the video. Geo-registration enables Euclidean 3D reconstruction with absolute scale unlike traditional monocular structure from motion where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (geo-spatial information system) databases. In addition to the geo-registration and 3D reconstruction aspects, the key contributions of this paper include: (1) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (2) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. 
Experimental results on extended time aerial video data demonstrates the qualitative and quantitative aspects of our work.", "corpus_id": 11137900}, "neg": {"sha": "5010f30b0e6a71a16b49cbc2134450bb9e3a2659", "title": "The Visual Analysis of Human Movement: A Survey", "abstract": "The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, \u201clooking at people\u201d is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions. c \u00a9 1999 Academic Press", "corpus_id": 7788290}}, {"query": {"sha": "59971c36c87b32b79f1d0227a6772e19e4c7e4f6", "title": "Perceptions of race", "abstract": "UNTIL RECENTLY, EXPERIMENTS ON PERSON PERCEPTION HAD LED TO TWO UNWELCOME CONCLUSIONS: (1) people encode the race of each individual they encounter, and (2) race encoding is caused by computational mechanisms whose operation is automatic and mandatory. Evolutionary analyses rule out the hypothesis that the brain mechanisms that cause race encoding evolved for that purpose. Consequently, race encoding must be a byproduct of mechanisms that evolved for some alternative function. But which one? Race is not encoded as a byproduct of domain-general perceptual processes. Two families of byproduct hypotheses remain: one invokes inferential machinery designed for tracking coalitional alliances, the other machinery designed for reasoning about natural kinds. Recent experiments show that manipulating coalitional variables can dramatically decrease the extent to which race is noticed and remembered.", "corpus_id": 11343153}, "pos": {"sha": "8a59fe9d74ece3cefcb16db9090af58e8d342aeb", "title": "Social cognition: thinking categorically about others.", "abstract": "In attempting to make sense of other people, perceivers regularly construct and use categorical representations to simplify and streamline the person perception process. Noting the importance of categorical thinking in everyday life, our emphasis in this chapter is on the cognitive dynamics of categorical social perception. In reviewing current research on this topic, three specific issues are addressed: (a) When are social categories activated by perceivers, (b) what are the typical consequences of category activation, and (c) can perceivers control the influence and expression of categorical thinking? Throughout the chapter, we consider how integrative models of cognitive functioning may inform our understanding of categorical social perception.", "corpus_id": 14816519}, "neg": {"sha": "b50a7b8aa2c6a9efad08ba43e48c30fb79615955", "title": "Learning Parameters and Constitutive Relationships with Physics Informed Deep Neural Networks", "abstract": "We present a physics informed deep neural network (DNN) method for estimating parameters and unknown physics (constitutive relationships) in partial differential equation (PDE) models. 
We use PDEs in addition to measurements to train DNNs to approximate unknown parameters and constitutive relationships as well as states. The proposed approach increases the accuracy of DNN approximations of partially known functions when a limited number of measurements is available and allows for training DNNs when no direct measurements of the functions of interest are available. We employ physics informed DNNs to estimate the unknown space-dependent diffusion coefficient in a linear diffusion equation and an unknown constitutive relationship in a non-linear diffusion equation. For the parameter estimation problem, we assume that partial measurements of the coefficient and states are available and demonstrate that under these conditions, the proposed method is more accurate than state-of-the-art methods. For the non-linear diffusion PDE model with a fully unknown constitutive relationship (i.e., no measurements of constitutive relationship are available), the physics informed DNN method can accurately estimate the non-linear constitutive relationship based on state measurements only. Finally, we demonstrate that the proposed method remains accurate in the presence of measurement noise.", "corpus_id": 52415616}}, {"query": {"sha": "bd2e1618a5720c64920335e0f02b4ff4c29a7e8b", "title": "Modeling and parameter estimation for in-pipe swimming robots", "abstract": "State-of-the-art, in-pipe, crawling robots face challenges in small diameter, non-smooth, water distribution pipes, mostly because of their direct contact with the pipe walls. On the other hand, swimming robots show greater potential in performing various maneuvers inside the pipes, because of the freedom in their motion. Such autonomous, swimming robots are needed for pipe-monitoring and leak detection in all sorts of pipe networks. Swimming motion inside confined environments is not well studied, and thus, this paper tackles the problem of modeling an in-pipe swimming vehicle. A conventional methodology that involved hydrodynamic coefficients is adopted, however the confined environment is affecting the parameters under study heavily. We discuss how these parameters rely on the pipe and robot geometry, unlike the case where the robot would swim in open-water.", "corpus_id": 19881653}, "pos": {"sha": "2d3f320ceff506e334587a20ffddb745e41a5934", "title": "Robotic devices for water main in-pipe inspection: A survey", "abstract": "Many water companies have a limited knowledge of the structural condition of their assets. Underground assets have been installed for a long time, so they are old and are failing, leading to leaks, breaks, and consequential damage to third parties. A common practice is the removal of pipe sections for condition assessment, causing service interruption and resulting in the assessment of a small percentage of the network as well as high replacement costs. Systematic condition assessment of pipes could help to prevent network problems as well as in proposing an efficient investment plan. This can be performed by using robotic inspection devices. Toward this end, this paper reviews existing robotic tools and analyzes open problems to be addressed for a successful robotic inspection device. C \u00a9 2010 Wiley Periodicals, Inc.", "corpus_id": 41519751}, "neg": {"sha": "832ebc9095d74f740598f444add6eb4843805869", "title": "PIRAT - A System for Quantitative Sewer Pipe Assessment", "abstract": "Sewers are aging, expensive assets that attract public attention only when they fail. 
Sewer operators are under increasing pressure to minimise their maintenance costs, while preventing sewer failures. Inspection can give early warning of failures and allow economical repair under noncrisis conditions. Current inspection techniques are subjective and detect only gross defects reliably. They cannot provide the data needed to confidently plan long-term maintenance. This paper describes PIRAT, a quantitative technique for sewer inspection. PIRAT measures the internal geometry of the sewer and then analyses these data to detect, classify, and rate defects automatically using artificial intelligence techniques. We describe the measuring system and present and discuss geometry results for different types of sewers. The defect analysis techniques are outlined and a sample defect report presented. PIRAT\u2019s defect reports are compared with reports from the conventional technique and the discrepancies discussed. We relate PIRAT to other work in sewer robotics. KEY WORDS\u2014sewer inspection robot, sewer condition assessment, neural network", "corpus_id": 14506304}}, {"query": {"sha": "78965f62cdf88f068a0e93bb641f72504182b840", "title": "Tushar Khot", "abstract": "Over the past years, Machine Learning (ML) approaches have taken large strides in their predictive accuracy and ease of use, resulting in ML being used in increasing number of domains. At the same time, information has grown exponentially in terms of its size and complexity. Inter-related objects (people, atoms, words, etc.) spread across multiple relations (friends, bonded, dependent, etc.) is now a common occurrence in many domains such as molecular chemistry, medical diagnosis, social networks and information extraction. To deal with noisy multi-relational data, Statistical Relational Learning (SRL) models have been proposed. Unlike most ML approaches that rely on a fixed number of features for every example, SRL models can handle an arbitrary number of features. For instance, a patient can have all their test results, where the number of tests may vary between patients, as features. But due to the increased complexity, SRL models do not scale to large domains, especially when learning the structure of the probabilistic dependencies (e.g., discovering the dependence of parents\u2019 chromosomes on a person\u2019s blood type). My research has mainly concentrated on developing more accurate, scalable structurelearning approaches for SRL models to make them more generally and easily applicable. Since these approaches do not rely on an expert designed model, I was able to use them in diverse domains ranging from natural language processing to medical diagnoses to network analysis.", "corpus_id": 50697255}, "pos": {"sha": "348bc39ab55ce59293d65037e5955c4fd6ebd420", "title": "Gradient-based boosting for statistical relational learning: The relational dependency network case", "abstract": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. 
Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.", "corpus_id": 11478579}, "neg": {"sha": "5f0e8f91d57eaae3e22729b0cf1744d5cf7b526e", "title": "Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators", "abstract": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "corpus_id": 15972961}}, {"query": {"sha": "f58d5affe5001348bb43e0edae1febb97ad93622", "title": "EigenNet: Towards Fast and Structural Learning of Deep Neural Networks", "abstract": "Deep Neural Network (DNN) is difficult to train and easy to overfit in training. We address these two issues by introducing EigenNet, an architecture that not only accelerates training but also adjusts number of hidden neurons to reduce over-fitting. They are achieved by whitening the information flows of DNNs and removing those eigenvectors that may capture noises. The former improves conditioning of the Fisher information matrix, whilst the latter increases generalization capability. These appealing properties of EigenNet can benefit many recent DNN structures, such as network in network and inception, by wrapping their hidden layers into the layers of EigenNet. The modeling capacities of the original networks are preserved. Both the training wall-clock time and number of updates are reduced by using EigenNet, compared to stochastic gradient descent on various datasets, including MNIST, CIFAR-10, and CIFAR-100.", "corpus_id": 28855125}, "pos": {"sha": "6c8b30f63f265c32e26d999aa1fef5286b8308ad", "title": "Dropout: a simple way to prevent neural networks from overfitting", "abstract": "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. 
During training, dropout samples from an exponential number of different \u201cthinned\u201d networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "corpus_id": 6844431}, "neg": {"sha": "50178ab46efc34620a94ceaf19940d53d465784e", "title": "Investigation on Thermal Characterization of Eutectic Flip-Chip UV-LEDs With Different Bonding Voidage", "abstract": "Flip-chip ultraviolet light-emitting diodes (FC UV-LEDs) fabricated by direct AuSn eutectic packaging are of high interest in research and development due to their excellent thermal performance and good reliability. However, the voids in the eutectic bonding layer caused by incomplete AuSn fill have a big influence on the thermal management and optical performance of FC UV-LEDs, and it is believed that the eutectic voids can affect the thermal-conduction resistance (hereafter called thermal resistance) and the junction temperature of FC UV-LEDs. In this paper, modeling and thermal simulation using finite element analysis is developed by considering the geometrical model of eutectic FC UV-LEDs with 3%, 10%, 20%, and 30% bonding voidage. Meanwhile, to validate the simulation, the thermal parameters of FC UV-LEDs are determined and measured using a thermal transient tester, and it is found that the UV-LED with 3% voidage shows the lowest thermal resistance and junction temperature compared with the other samples in both simulation and experiment. Moreover, the optical performance of UV-LEDs is evaluated via the photoelectric analysis system, and the results confirm that the lowest thermal resistance leads to the lowest junction temperature but the highest light output power.", "corpus_id": 1860899}}, {"query": {"sha": "9970919c8e250f8c33d143ffa4e3d932f189a147", "title": "Preserving Communities in Anonymized Social Networks", "abstract": "Social media and social networks are embedded in our society to a point that could not have been imagined only ten years ago. Facebook, LinkedIn, and Twitter are already well known social networks that have a large audience in all age groups. The amount of data that those social sites gather from their users is continually increasing and this data is very valuable for marketing, research, and various other purposes. At the same time, this data usually contains a significant amount of sensitive information which should be protected against unauthorized disclosure. To protect the privacy of individuals, this data must be anonymized such that the risk of re-identification of specific individuals is very low. In this paper we study if anonymized social networks preserve existing communities from the original social networks. To perform this study, we introduce two approaches to measure the community preservation between the initial network and its anonymized version. In the first approach we simply count how many nodes from the original communities remained in the same community after the processes of anonymization and de-anonymization. In the second approach we consider the community preservation for each node individually. 
Specifically, for each node, we compare the original and final communities to which the node belongs. To anonymize social networks we use two models, namely, k-anonymity for social networks and k-degree anonymity. To determine communities in social networks we use an existing community detection algorithm based on the modularity quality function. Our experiments on publicly available datasets show that anonymized social networks satisfactorily preserve the community structure of their original networks.", "corpus_id": 2443511}, "pos": {"sha": "7ff04ad7d3ff9ace191469c8c706a41e69967bcd", "title": "A Clustering Approach for Data and Structural Anonymity in Social Networks", "abstract": "The advent of social network sites in the last few years seems to be a trend that will likely continue in the years to come. Online social interaction has become very popular around the globe and most sociologists agree that this will not fade away. Such a development is possible due to the advancements in computer power, technologies, and the spread of the World Wide Web. What many na\u00efve technology users may not always realize is that the information they provide online is stored in massive data repositories and may be used for various purposes. Researchers have pointed out for some time the privacy implications of massive data gathering, and a lot of effort has been made to protect the data from unauthorized disclosure. However, most of the data privacy research has been focused on more traditional data models such as microdata (data stored as one relational table, where each row represents an individual entity). More recently, social network data has begun to be analyzed from a different, specific privacy perspective. Since the individual entities in social networks, besides the attribute values that characterize them, also have relationships with other entities, the possibility of privacy breaches increases. Our main contributions in this paper are the development of a greedy privacy algorithm for anonymizing a social network and the introduction of a structural information loss measure that quantifies the amount of information lost due to edge generalization in the anonymization process.", "corpus_id": 14209843}, "neg": {"sha": "f058bc6b98f81859f392e8e232f62575c446f282", "title": "Qualitative research sample design and sample size: resolving and unresolved issues and inferential imperatives.", "abstract": null, "corpus_id": 7781723}}, {"query": {"sha": "9e73e0f1ef4ad5e6aa71e0ad5475c8cf2221066d", "title": "Attentive Neural Network for Named Entity Recognition in Vietnamese", "abstract": "We propose an attentive neural network for the task of named entity recognition in Vietnamese. The proposed attentive neural model makes use of character-based language models and word embeddings to encode words as vector representations. A neural network architecture of encoder, attention, and decoder layers is then utilized to encode knowledge of input sentences and to label entity tags. 
The experimental results show that the proposed attentive neural network achieves state-of-the-art results on the benchmark Vietnamese named entity recognition datasets in comparison to both models based on hand-crafted features and neural models.", "corpus_id": 53107302}, "pos": {"sha": "0c7f52c753a65ceaf3755e20b906ffd0c05c994a", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", "abstract": "We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "corpus_id": 219683473}, "neg": {"sha": "10a4db59e81d26b2e0e896d3186ef81b4458b93f", "title": "Named Entity Recognition with Bidirectional LSTM-CNNs", "abstract": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state-of-the-art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state-of-the-art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "corpus_id": 6300165}}, {"query": {"sha": "032592a4057228d56687156d606a54dd97ea7898", "title": "Automating the Construction of Internet Portals with Machine Learning", "abstract": "Domain-specific internet portals are growing in popularity because they gather content from the Web and organize it for easy access, retrieval and search. For example, www.campsearch.com allows complex queries by age, location, cost and specialty over summer camps. This functionality is not possible with general, Web-wide search engines. Unfortunately these portals are difficult and time-consuming to maintain. This paper advocates the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific Internet portals. We describe new research in reinforcement learning, information extraction and text classification that enables efficient spidering, the identification of informative text segments, and the population of topic hierarchies. Using these techniques, we have built a demonstration system: a portal for computer science research papers. 
It already contains over 50,000 papers and is publicly available at www.cora.justresearch.com. These techniques are widely applicable to portal creation in other domains.", "corpus_id": 349242}, "pos": {"sha": "12d1d070a53d4084d88a77b8b143bad51c40c38f", "title": "Reinforcement Learning: A Survey", "abstract": "This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word \u201creinforcement.\u201d The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.", "corpus_id": 1708582}, "neg": {"sha": "53b639319a495b45e84f1e3a09e1c8e437574e4c", "title": "Memory Approaches to Reinforcement Learning in Non-Markovian Domains", "abstract": "Reinforcement learning is a type of unsupervised learning for sequential decision making. Q-learning is probably the best-understood reinforcement learning algorithm. In Q-learning, the agent learns a mapping from states and actions to their utilities. An important assumption of Q-learning is the Markovian environment assumption, meaning that any information needed to determine the optimal actions is reflected in the agent's state representation. Consider an agent whose state representation is based solely on its immediate perceptual sensations. When its sensors are not able to make essential distinctions among world states, the Markov assumption is violated, causing a problem called perceptual aliasing. For example, when facing a closed box, an agent based on its current visual sensation cannot act optimally if the optimal action depends on the contents of the box. There are two basic approaches to addressing this problem\u2014using more sensors or using history to figure out the current world state. This paper studies three connectionist approaches which learn to use history to handle perceptual aliasing: the window-Q, recurrent-Q, and recurrent-model architectures. An empirical study of these architectures is presented. Their relative strengths and weaknesses are also discussed.", "corpus_id": 18783919}}, {"query": {"sha": "f4080e989548bd38f525b618d6ab73a7711ee5bb", "title": "Discrete Dielectric Reflectarray and Lens for E-Band With Different Feed", "abstract": "This letter presents the design and results of a low-loss discrete dielectric flat reflectarray and lens for E-band. Using two different kinds of feed, a 3-D-pyramidal (wideband) horn and a 2 \u00d7 2 planar microstrip array (narrowband) antenna, the radiation performances of the two collimating structures are investigated. The discrete lens is optimized to cover the frequencies 71-86 GHz (71-76- and 81-86-GHz bands), while the discrete reflectarray is optimized to cover the 71-76-GHz band. 
The presented designs utilize the principle of perforated dielectric substrate using a square lattice of drilled holes of different radii and can be fabricated using standard printed circuit board (PCB) technology. The discrete lens has 41 \u00d7 41 unit cells and a thickness of 6.35 mm, while the reflectarray has 40 \u00d7 40 unit cells and a thickness of 3.24 mm. Good impedance matching (|S11| < -10 dB) and a peak gain of 34 \u00b11 dB with a maximum aperture efficiency of 44.6% are achieved over 71-86 GHz for the lens case. On the other hand, the reflectarray achieves a peak gain of 32 \u00b11 dB and an aperture efficiency of 41.9% for the 71-76-GHz band.", "corpus_id": 45350848}, "pos": {"sha": "49349151e7cbdf5310dc9c08d1e7687392bdd8c2", "title": "Low-Profile 77-GHz Lens Antenna With Array Feeder", "abstract": "A 77-GHz lens antenna for automotive radar applications is presented. It consists of a feeder in the form of a 2\u00d72 patch array etched from a single layer on a 100-\u03bcm-thick substrate and a commercially available dielectric lens. Compared to previously published lens antennas, the presented design has the advantages of excellent electrical performance and a low profile in combination with a thin lens. Measurements of port impedance match and radiation patterns are presented. Beam tilt by lateral offset of the lens is demonstrated experimentally.", "corpus_id": 39733596}, "neg": {"sha": "bb9b1fda005a7d10b15898cae280050a15a9694d", "title": "Accurate and Practical Calibration of a Depth and Color Camera Pair", "abstract": "We present an algorithm that simultaneously calibrates a color camera, a depth camera, and the relative pose between them. The method is designed to have three key features that no other available algorithm currently has: it is accurate, practical, and applicable to a wide range of sensors. The method requires only a planar surface to be imaged from various poses. The calibration does not use color or depth discontinuities in the depth image which makes it flexible and robust to noise. We perform experiments with a particular depth sensor and achieve the same accuracy as the proprietary calibration procedure of the manufacturer.", "corpus_id": 6231899}}, {"query": {"sha": "855d39477ae7494c0ca5cd397bac631c4ca313f5", "title": "An on-demand scatternet formation and multi-hop routing protocol for BLE-based wireless sensor networks", "abstract": "As new features are introduced into the Bluetooth core specification, the ability to use Bluetooth Low Energy (BLE) technology to construct a mobile ad-hoc network (MANET) becomes a reality. Key features included in Bluetooth Specification version 4.1 are the ability for a single node to be part of multiple piconets, and the ability for a node to act dual mode, as both a piconet master and slave. These features allow the possibility of multi-hop routing spanning multiple connected piconets. Although multi-hop routing is theoretically possible with version 4.1, no multi-hop routing algorithm that exploits this possibility has ever been presented. In this paper we propose an approach to scatternet formation and multi-hop routing for networks using BLE version 4.1. We define procedures for device discovery, communication between piconets, and forming a multi-hop scatternet. Our approach has the following properties: 1) it can be used with existing Bluetooth hardware, 2) the protocol is fully distributed, so global connectivity information is not required for formation and routing, and 3) it supports ad-hoc network formation. 
We have implemented our approach using real Bluetooth SoCs including the Broadcom BCM434x chipset, which is used in the iPhone 6. Our experiments measure the routing delay and throughput on networks containing different numbers of nodes in order to demonstrate the impact of network size on performance. As the network size increases, our protocol does not incur a large delay and achieves better resource utilization.", "corpus_id": 18400501}, "pos": {"sha": "006df3db364f2a6d7cc23f46d22cc63081dd70db", "title": "Dynamic source routing in ad hoc wireless networks", "abstract": "An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host\u2019s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.", "corpus_id": 131561}, "neg": {"sha": "a7f46ae35116f4c0b3aaa1c9b46d6e79e63b56c9", "title": "The Bluetooth radio system", "abstract": "A few years ago it was recognized that the vision of a truly low-cost, low-power radio-based cable replacement was feasible. Such a ubiquitous link would provide the basis for portable devices to communicate together in an ad hoc fashion by creating personal area networks which have similar advantages to their office environment counterpart, the local area network. Bluetooth\u2122 is an effort by a consortium of companies to design a royalty-free technology specification enabling this vision. This article describes the radio system behind the Bluetooth concept. Designing an ad hoc radio system for worldwide usage poses several challenges. The article describes the critical system characteristics and motivates the design choices that have been made.", "corpus_id": 2929882}}, {"query": {"sha": "b7535c5d6739c1d87e81a7d79e0c491fb0c19ad6", "title": "How effective is the Grey Wolf optimizer in training multi-layer perceptrons", "abstract": "This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training Multi-Layer Perceptrons (MLPs) for the first time. Eight standard datasets, including five classification and three function-approximation datasets, are utilized to benchmark the performance of the proposed method. For verification, the results are compared with some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). 
The statistical results prove that the GWO algorithm is able to provide very competitive results in terms of improved local optima avoidance. The results also demonstrate that the proposed trainer achieves a high level of accuracy in both classification and function approximation.", "corpus_id": 2008117}, "pos": {"sha": "04bf8c1643afada04369292deefee5824b919248", "title": "Chaotic Krill Herd algorithm", "abstract": "Recently, Gandomi and Alavi proposed a meta-heuristic optimization algorithm, called Krill Herd (KH). This paper introduces the chaos theory into the KH optimization process with the aim of accelerating its global convergence speed. Various chaotic maps are considered in the proposed chaotic KH (CKH) method to adjust the three main movements of the krill in the optimization process. Several test problems are utilized to evaluate the performance of CKH. The results show that the performance of CKH, with an appropriate chaotic map, is better than or comparable with the KH and other robust optimization approaches.", "corpus_id": 113621}, "neg": {"sha": "296aed925371ec23010ddd55f782f29b24a35337", "title": "Antioxidant principles of Nelumbo nucifera stamens.", "abstract": "In our ongoing study to identify antioxidants from natural sources, the antioxidant activity of Nelumbo nucifera stamens was evaluated for its potential to scavenge stable 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radicals, inhibit total reactive oxygen species (ROS) generation in kidney homogenates using 2',7'-dichlorodihydrofluorescein diacetate (DCHF-DA), and scavenge authentic peroxynitrites (ONOO-). A methanol (MeOH) extract of the stamens of N. nucifera showed strong antioxidant activity in the ONOO- system, and marginal activity in the DPPH and total ROS systems, and was therefore fractionated with several organic solvents, such as dichloromethane (CH2Cl2), ethyl acetate (EtOAc) and n-butanol (n-BuOH). The EtOAc soluble fraction, which exhibited strong antioxidant activity in all the model systems tested, was further purified by repeated silica gel and Sephadex LH-20 column chromatographies. Seven known flavonoids [kaempferol (1), kaempferol 3-O-beta-D-glucuronopyranosyl methylester (2), kaempferol 3-O-beta-D-glucopyranoside (3), kaempferol 3-O-beta-D-galactopyranoside (4), myricetin 3',5'-dimethylether 3-O-beta-D-glucopyranoside (5), kaempferol 3-O-alpha-L-rhamnopyranosyl-(1-->6)-beta-D-glucopyranoside (6) and kaempferol 3-O-beta-D-glucuronopyranoside (7)], along with beta-sitosterol glucopyranoside (8), were isolated. Compound 1 possessed good activities in all the model systems tested. Compounds 2 and 7 showed scavenging activities in the DPPH and ONOO- tests, while compounds 3 and 4 were only active in the ONOO- test. Conversely, compound 8 showed no activities in any of the model systems tested.", "corpus_id": 41789012}}, {"query": {"sha": "90196cd5472f1fd7776f5af84cbfa2b3ac56a82a", "title": "SDM: A Stripe-Based Data Migration Scheme to Improve the Scalability of RAID-6", "abstract": "In large scale data storage systems, RAID-6 has received more attention due to its capability to tolerate concurrent failures of any two disks, providing a higher level of reliability. However, a challenging issue is its scalability, or how to efficiently expand the disks. 
The main cause of this problem is the typical fault-tolerance scheme of most RAID-6 systems, known as Maximum Distance Separable (MDS) codes, which offers data protection against disk failures with optimal storage efficiency but is difficult to scale. To address this issue, we propose a novel Stripe-based Data Migration (SDM) scheme for large scale storage systems based on RAID-6 to achieve higher scalability. SDM is a stripe-level scheme, and the basic idea of SDM is optimizing data movements according to the future parity layout, which minimizes the overhead of data migration and parity modification. The SDM scheme also provides uniform data distribution, fast data addressing and migration. We have conducted extensive mathematical analysis of applying SDM to various popular RAID-6 coding methods such as RDP, P-Code, H-Code, HDP, X-Code, and EVENODD. The results show that, compared to existing scaling approaches, SDM reduces migration I/O operations by more than 72.7% and saves the migration time by up to 96.9%, which speeds up the scaling process by a factor of up to 32.", "corpus_id": 2298325}, "pos": {"sha": "1c54aa817fceb76fc2385501ee7888980586822e", "title": "HDP code: A Horizontal-Diagonal Parity Code to Optimize I/O load balancing in RAID-6", "abstract": "With higher reliability requirements in clusters and data centers, RAID-6 has gained popularity due to its capability to tolerate concurrent failures of any two disks, which has been shown to be of increasing importance in large scale storage systems. Among various implementations of erasure codes in RAID-6, a typical set of codes known as Maximum Distance Separable (MDS) codes aim to offer data protection against disk failures with optimal storage efficiency. However, because of the limitation of horizontal parity or diagonal/anti-diagonal parities used in MDS codes, storage systems based on RAID-6 suffer from unbalanced I/O and thus low performance and reliability. To address this issue, in this paper, we propose a new parity called Horizontal-Diagonal Parity (HDP), which takes advantage of both horizontal and diagonal/anti-diagonal parities. The corresponding MDS code, called HDP code, distributes parity elements uniformly in each disk to balance the I/O workloads. HDP also achieves high reliability via speeding up the recovery under single or double disk failure. Our analysis shows that HDP provides better balanced I/O and higher reliability compared to other popular MDS codes.", "corpus_id": 6676091}, "neg": {"sha": "131f78dbfaf740ceec4c9a233f1e8e28386a2124", "title": "ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI", "abstract": "Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study rely on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). 
A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to be superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).", "corpus_id": 206870364}}, {"query": {"sha": "14fbc17a490fa80522bdf99ce9fdf60449692d3f", "title": "Assessing Internet Enabled Business Value: An Exploratory Investigation?", "abstract": "While the focus of electronic commerce has often been on \u201cdot coms\u201d or pure Internet based companies, a major transformation is under way in many traditional \u201cbricks-and-mortar\u201d organizations. The latter are investing heavily in Internet based technologies and applications in order to attain new heights of efficiency, productivity and business value. While anecdotes in the business press suggest that some firms have achieved unprecedented performance gains by leveraging the Internet, there is no systematic evidence in the Information Technology (IT) productivity or business value literature regarding the payoffs from Internet enabled business initiatives. We propose an exploratory model of electronic business value involving IT applications, processes, business partner readiness, and operational and financial performance measures. This model is rooted in IT business value and productivity research, and is empirically tested with data from over 1000 firms in the manufacturing, retail, distribution and wholesale sectors. We find that electronic business initiatives involving customer-facing technologies lead to operational excellence in customer interactions and improved financial performance. Further, supplier related operational excellence is a key determinant of customer excellence, suggesting the related nature of customer and supplier related performance. Customer and supplier readiness to engage in online business have strong positive impacts on customer and supplier related operational excellence respectively, indicating the need for all entities in a value chain to simultaneously adopt Internet applications and business practices. To the best of our knowledge, this is the first study to address the business value of Internet initiatives.", "corpus_id": 9515374}, "pos": {"sha": "e8402b65103442e2517982e5e3eb330f72886731", "title": "Strategic Alignment: Leveraging Information Technology for Transforming Organizations", "abstract": null, "corpus_id": 2372874}, "neg": {"sha": "d5cc52b69018352fcf715435c573f5ea1a245303", "title": "Intramedullary k-wire fixation of metacarpal fractures", "abstract": "The majority of metacarpal fractures can be treated conservatively. Nevertheless, surgical treatment is justified in certain cases. Palmar dislocation of >30\u00b0 and shortening of >5\u00a0mm will significantly affect extension and flexion of the hand. Consequently, surgical treatment is indicated. 
The aim of our study was to evaluate the clinical results of intramedullary Kirschner-wire fixation of metacarpal fractures. In a retrospective study we analyzed the clinical results of 35 patients with metacarpal fractures that had been treated by closed reduction and elastic fixation with at least two intramedullary k-wires. Most of the patients were young, with good bone quality and low anesthetic risk, and they had suffered the fractures as a result of a direct trauma. Predominantly uncomplicated, the fractures were metaphyseal, subcapital and of the fifth metacarpal bone (750.3-B1 fractures). Surgical treatment was indicated for a palmar axis dislocation of >20\u00b0 or if a rotatory deficiency was present. Metacarpal joint function and correction of rotatory displacement could be assessed after a median period of 1.1\u00a0years. In 34 patients flexion and extension was normal on both sides. In one patient we found an extension deficiency of 15\u00b0 and a rotatory deficiency of 10\u00b0. In 34 out of 35 patients with metacarpal fractures, minimally invasive intramedullary k-wire osteosynthesis resulted in complete restoration. Intramedullary k-wire fixation is a minimally invasive method for stabilizing metacarpal fractures. The excellent long-term clinical results are due to the fact that the gliding tissue around the fracture will not be affected at all by the surgical procedure.", "corpus_id": 2170689}}, {"query": {"sha": "9ea558355c355ec4fb4ccde0035e244c5cd528a0", "title": "Self-Sorting Map: An Efficient Algorithm for Presenting Multimedia Data in Structured Layouts", "abstract": "This paper presents the Self-Sorting Map (SSM), a novel algorithm for organizing and presenting multimedia data. Given a set of data items and a dissimilarity measure between each pair of them, the SSM places each item into a unique cell of a structured layout, where the most related items are placed together and the unrelated ones are spread apart. The algorithm integrates ideas from dimension reduction, sorting, and data clustering algorithms. Instead of solving the continuous optimization problem that other dimension reduction approaches do, the SSM transforms it into a discrete labeling problem. As a result, it can organize a set of data into a structured layout without overlap, providing a simple and intuitive presentation. The algorithm is designed for sorting all data items in parallel, making it possible to arrange millions of items in seconds. Experiments on different types of data demonstrate the SSM's versatility in a variety of applications, ranging from positioning city names by proximities to presenting images according to visual similarities, to visualizing semantic relatedness between Wikipedia articles.", "corpus_id": 26446549}, "pos": {"sha": "385dcc1480e341435b1aa7b7a523b4c7d9563b95", "title": "An Efficient k-Means Clustering Algorithm: Analysis and Implementation", "abstract": "In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. 
We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.", "corpus_id": 12003435}, "neg": {"sha": "0cf8443bcb14cfd6ac5bcf0e3775c0aad45558b4", "title": "CRAFT Objects from Images", "abstract": "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Although we are handling two relatively easier tasks, they are not solved perfectly and there is still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.", "corpus_id": 3203206}}, {"query": {"sha": "3135d3faa24c516ea0ac68acaa3c9d1c7bb6b268", "title": "A novel SIW six-port junction", "abstract": "Two architectures of substrate integrated waveguide six-port junctions operating at 1.8-3.2 GHz were designed, simulated and compared at a center frequency of 2.45 GHz. The two structures, composed of quadrature couplers, were simulated within an HFSS environment based on the finite element method (FEM) and printed on Rogers RT/duroid 6010. Simulation results of the SIW six-port junction without micro-strip lines showed much better performance than the other structure.", "corpus_id": 28474513}, "pos": {"sha": "cd97edd9bad08b89e4711747cf0193e9d9b3bb00", "title": "Feasibility Investigation of Low Cost Substrate Integrated Waveguide ( SIW ) Directional Couplers", "abstract": "In this paper, the feasibility of Substrate Integrated Waveguide (SIW) couplers, fabricated using a single-layer TACONIC RF-35 dielectric substrate, is investigated. The couplers have been produced employing a standard PCB process. The choice of the TACONIC RF-35 substrate as alternative to other conventional materials is motivated by its lower cost and high dielectric constant, allowing the reduction of the device size. The coupler requirements are a 90-degree phase shift between the output and the coupled ports and frequency bandwidth from about 10.5 GHz to 12.5 GHz. The design and optimization of the couplers have been performed by using the software CST Microwave Studio\u00a9. Eight different coupler configurations have been designed and compared. 
The three best couplers have been fabricated and characterized. The proposed SIW directional couplers could be integrated within more complex planar circuits or utilized as stand-alone devices, because of their compact size. They exhibit good performance and could be employed in communication applications such as broadcast signal distribution and as key elements for the construction of other microwave devices and systems.", "corpus_id": 15065254}, "neg": {"sha": "872c7ec85d172d9409a476e122cb69f63a961672", "title": "The ventricular tachycardia score: a novel approach to electrocardiographic diagnosis of ventricular tachycardia.", "abstract": "AIMS\nElectrocardiographic diagnosis of wide QRS complex tachycardia (WCT) continues to be challenging as none of the available methods is specific for ventricular tachycardia (VT) diagnosis. We aimed to construct a method for WCT differentiation based on a scoring system, in which ECGs are graded according to the number of VT-specific features. This novel method was validated and compared with the Brugada algorithm and other methods.\n\n\nMETHODS AND RESULTS\nA total of 786 WCTs (512 VTs) from 587 consecutive patients with a proven diagnosis were analysed by two blinded observers. The VT score method was based on seven ECG features: initial R wave in V1, initial r > 40 ms in V1/V2, notched S in V1, initial R in aVR, lead II R wave peak time \u226550 ms, no RS in V1-V6, and atrioventricular dissociation. Atrioventricular dissociation was assigned two points, and each of the other features was assigned one point. The overall accuracy of VT score \u22651 for VT diagnosis (83%) was higher than that of the aVR (72%, P = 0.001) and Brugada (81%) algorithms. Ventricular tachycardia score \u22653 was present in 66% of VTs and was more specific (99.6%) than any other algorithm/criterion for VT diagnosis. Ventricular tachycardia score \u22654 was present in 33% of VTs and was 100% specific for VT.\n\n\nCONCLUSION\nThe new ECG-based method provides a certain diagnosis of VT in the majority of patients with VT, identifies unequivocal ECGs, and has superior overall diagnostic accuracy to other ECG methods.", "corpus_id": 205059528}}, {"query": {"sha": "182e3558c19dde7c479323877074e56488749e48", "title": "Sarcasm as Contrast between a Positive Sentiment and Negative Situation", "abstract": "A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as \u201clove\u201d or \u201cenjoy\u201d, followed by an expression that describes an undesirable activity or state (e.g., \u201ctaking exams\u201d or \u201cbeing ignored\u201d). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition.", "corpus_id": 10168779}, "pos": {"sha": "55e36d6b45c91a0daa49234bd47b856470d6825c", "title": "Identifying Sarcasm in Twitter: A Closer Look", "abstract": "Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. 
We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances, and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.", "corpus_id": 15244007}, "neg": {"sha": "4b61acf4efbd48db013ff6702a9e9ea97c4ef681", "title": "Smart Manufacturing Standardization: Reference Model and Standards Framework", "abstract": "With the progress of world trade and globalization, and the development of information & communication technology (ICT) and industrial technology, manufacturing pattern and technology are now facing a turning point. In order to realize economic transformation, the Chinese government published the China Manufacturing 2025 national strategy; the German government published Industry 4.0; and the American government proposed Re-industrialization and the Industrial Internet. All of these strategies have a key topic: smart manufacturing. In order to present a systematic standard solution for smart manufacturing, standardization organizations of China, Germany and the US published standards landscapes or roadmaps. This paper compares these smart manufacturing standardization architectures and methodologies and develops a reference model for smart manufacturing standards development and implementation. At the end of the paper, a standards framework is presented.", "corpus_id": 32300470}}, {"query": {"sha": "798b456df852ba12af948cba4dfd7383ba4499a7", "title": "How Privacy Flaws Affect Consumer Perception", "abstract": "We examine how consumers perceive publicized instances of privacy flaws and private information data breaches. Using three real-world privacy breach incidents, we study how these flaws affected consumers' future purchasing behavior and perspective on a company's trustworthiness. We investigate whether despite a lack of widespread privacy enhancing technology (PET) usage, consumers are taking some basic security precautions when making purchasing decisions. We survey 600 participants on three well-known privacy breaches. Our results show that, in general, consumers are less likely to purchase products that have experienced some form of privacy breach. We find evidence of a slight bias toward giving products the consumers owned themselves more leeway, as suggested by the endowment effect hypothesis.", "corpus_id": 775749}, "pos": {"sha": "2390d0ba96c89d60a15e1940c80a05f026508a39", "title": "The Effect of Internet Security Breach Announcements on Market Value: Capital Market Reactions for Breached Firms and Internet Security Developers", "abstract": "Assessing the value of information technology (IT) security is challenging because of the difficulty of measuring the cost of security breaches. An event-study analysis, using market valuations, was used to assess the impact of security breaches on the market value of breached firms. The information-transfer effect of security breaches (i.e., their effect on the market value of firms that develop security technology) was also studied. The results show that announcing an Internet security breach is negatively associated with the market value of the announcing firm. 
The breached firms in the sample lost, on average, 2.1 percent of their market value within two days of the announcement\u2014an average loss in market capitalization of $1.65 billion per breach. Firm type, firm size, and the year the breach occurred help explain the cross-sectional variations in abnormal returns produced by security breaches. The effects of security breaches are not restricted to the breached firms. The market value of security developers is positively associated with the disclosure of security breaches by other firms. The security developers in the sample realized an average abnormal return of 1.36 percent during the two-day period after the announcement\u2014an average gain of $1.06 billion in two days. The study suggests that the cost of poor security is very high for investors.", "corpus_id": 10753015}, "neg": {"sha": "882601daa429092f6fbfa3be7478481dd65ba8f8", "title": "AI Planning and Combinatorial Optimization for Web Service Composition in Cloud Computing", "abstract": "In recent years, there has been an increasing interest in web service composition due to its importance in practical applications. At the same time, cloud computing is gradually evolving as a widely used computing platform where many different web services are published and available in cloud data centers. The issue is that traditional service composition methods mainly focus on how to find a service composition sequence in a single cloud, but not from a multi-cloud service base. It is challenging to efficiently find a composition solution in a multiple cloud base because it involves not only service composition but also combinatorial optimization. In this paper, we first propose a framework of service composition in multi-cloud base environments. Next, three different cloud combination methods are presented to select a cloud combination subject to not only finding a feasible composition sequence, but also containing minimum clouds. Experimental results show that the proposed method based on artificial intelligence (AI) planning and combinatorial optimization can more effectively and efficiently find sub-optimal cloud combinations.", "corpus_id": 15985133}}, {"query": {"sha": "6124d347f897dce8edf6398dd5c99e13a89f2bd7", "title": "A Tensor Based Deep Learning Technique for Intelligent Packet Routing", "abstract": "Recently, network operators are confronting the challenge of exploding traffic and more complex network environments due to the increasing number of access terminals having various requirements for delay and packet loss rate. However, traditional routing methods based on the maximum or minimum single metric value aim at improving the network quality of only one aspect, which makes them incapable of dealing with the increasingly complicated network traffic. Considering the improvement of deep learning techniques in recent years, in this paper, we propose a smart packet routing strategy with Tensor-based Deep Belief Architectures (TDBAs) that considers multiple parameters of network traffic. To better model the data in TDBAs, we use tensors to represent the units in every layer as well as the weights and biases. The proposed TDBAs can be trained to predict the whole paths for every edge router. 
Simulation results demonstrate that our proposal outperforms the conventional Open Shortest Path First (OSPF) protocol in terms of overall packet loss rate and average delay per hop.", "corpus_id": 29407383}, "pos": {"sha": "2bbbc937de355cc2971433d5c67cd984d5472fe2", "title": "Deep Architecture for Traffic Flow Prediction: Deep Belief Networks With Multitask Learning", "abstract": "Traffic flow prediction is a fundamental problem in transportation modeling and management. Many existing approaches fail to provide favorable results due to being: 1) shallow in architecture; 2) hand engineered in features; and 3) separate in learning. In this paper we propose a deep architecture that consists of two parts, i.e., a deep belief network (DBN) at the bottom and a multitask regression layer at the top. A DBN is employed here for unsupervised feature learning. It can learn effective features for traffic flow prediction in an unsupervised fashion, which has been examined and found to be effective for many areas such as image and audio classification. To the best of our knowledge, this is the first paper that applies the deep learning approach to transportation research. To incorporate multitask learning (MTL) in our deep architecture, a multitask regression layer is used above the DBN for supervised prediction. We further investigate homogeneous MTL and heterogeneous MTL for traffic flow prediction. To take full advantage of weight sharing in our deep architecture, we propose a grouping method based on the weights in the top layer to make MTL more effective. Experiments on transportation data sets show good performance of our deep architecture. Abundant experiments show that our approach achieved close to 5% improvements over the state of the art. It is also shown that MTL can improve the generalization performance of shared tasks. These positive results demonstrate that deep learning and MTL are promising in transportation research.", "corpus_id": 16673459}, "neg": {"sha": "24299620f5b394f962b516578dafd3acc8b0a107", "title": "A Rule-Based Relation Extraction System using DBpedia and Syntactic Parsing", "abstract": "In this paper, we present a rule-based relation extraction approach which uses DBpedia and linguistic information provided by the syntactic parser Fips. Our goal is twofold: (i) the morpho-syntactic patterns are defined using the syntactic parser Fips to identify relations between named entities; (ii) the RDF triples extracted from DBpedia are used to improve the RE task by creating gazetteer relations.", "corpus_id": 18404395}}, {"query": {"sha": "4f56ed49a53ebb054fc66e585295ee82b3781df9", "title": "Frequency and voltage droop control of parallel inverters in microgrid", "abstract": "The distributed generation units are connected to the microgrid through an interfacing inverter. The interfacing inverter plays a main role in the operating performance of the microgrid. In this paper, interfaced parallel inverter control using a P-F/Q-V droop control was investigated when the microgrid operated in islanded mode. In islanded mode the inverter droop control should maintain voltage and frequency stability. The droop control for parallel inverters is implemented and proportional load sharing is obtained from each individual inverter. 
Droop control of the inverter is simulated in Matlab/Simulink; the results indicate that droop control has a significant effect on balancing the voltage magnitude, frequency, and power sharing.", "corpus_id": 27991641}, "pos": {"sha": "a9a9e64bba4d015b73a01dc96c3af7cdb5169219", "title": "Adaptive Droop Control Applied to Voltage-Source Inverters Operating in Grid-Connected and Islanded Modes", "abstract": "This paper proposes a novel control for voltage-source inverters with the capability to flexibly operate in grid-connected and islanded modes. The control scheme is based on the droop method, which uses some estimated grid parameters such as the voltage and frequency and the magnitude and angle of the grid impedance. Hence, the inverter is able to inject independently active and reactive power to the grid. The controller provides proper dynamics decoupled from the grid-impedance magnitude and phase. The system is also able to control active and reactive power flows independently for a large range of grid impedance values. Simulation and experimental results are provided in order to show the feasibility of the control proposed.", "corpus_id": 3339039}, "neg": {"sha": "47c5870d404133ced4ab1172ebea190f03f84a22", "title": "The use of entropy to measure structural diversity", "abstract": "In this paper entropy-based methods are compared and used to measure the structural diversity of an ensemble of 21 classifiers. This measure is mostly applied in ecology, whereby species counts are used as a measure of diversity. The measures used were Shannon entropy and the Simpson and Berger-Parker diversity indexes. As the diversity indexes increased, so did the accuracy of the ensemble. An ensemble dominated by classifiers with the same structure produced poor accuracy. The uncertainty rule from information theory was also used to further define diversity. Genetic algorithms were used to find the optimal ensemble by using the diversity indices as the cost function. The method of voting was used to aggregate the decisions.", "corpus_id": 5818734}}, {"query": {"sha": "88605c7c2efcafc41827aa0f63edbc0cccfbbfad", "title": "Rnnotator: an automated de novo transcriptome assembly pipeline from stranded RNA-Seq reads", "abstract": "Comprehensive annotation and quantification of transcriptomes are outstanding problems in functional genomics. While high throughput mRNA sequencing (RNA-Seq) has emerged as a powerful tool for addressing these problems, its success is dependent upon the availability and quality of reference genome sequences, thus limiting the organisms to which it can be applied. Here, we describe Rnnotator, an automated software pipeline that generates transcript models by de novo assembly of RNA-Seq data without the need for a reference genome. We have applied the Rnnotator assembly pipeline to two yeast transcriptomes and compared the results to the reference gene catalogs of these organisms. The contigs produced by Rnnotator are highly accurate (95%) and reconstruct full-length genes for the majority of the existing gene models (54.3%). Furthermore, our analyses revealed many novel transcribed regions that are absent from well annotated genomes, suggesting Rnnotator serves as a complementary approach to analysis based on a reference genome for comprehensive transcriptomics. 
These results demonstrate that the Rnnotator pipeline is able to reconstruct full-length transcripts in the absence of a complete reference genome.", "corpus_id": 927069}, "pos": {"sha": "1e2ac9587c9a57a49583990602142c84b3f19625", "title": "Computation for ChIP-seq and RNA-seq studies", "abstract": "Genome-wide measurements of protein-DNA interactions and transcriptomes are increasingly done by deep DNA sequencing methods (ChIP-seq and RNA-seq). The power and richness of these counting-based measurements comes at the cost of routinely handling tens to hundreds of millions of reads. Whereas early adopters necessarily developed their own custom computer code to analyze the first ChIP-seq and RNA-seq datasets, a new generation of more sophisticated algorithms and software tools are emerging to assist in the analysis phase of these projects. Here we describe the multilayered analyses of ChIP-seq and RNA-seq datasets, discuss the software packages currently available to perform tasks at each layer and describe some upcoming challenges and features for future analysis tools. We also discuss how software choices and uses are affected by specific aspects of the underlying biology and data structure, including genome size, positional clustering of transcription factor binding sites, transcript discovery and expression quantification.", "corpus_id": 9496853}, "neg": {"sha": "11ae0814c38df9eb8c709ba530d5340d77a23de4", "title": "Genome-wide mapping of in vivo protein-DNA interactions.", "abstract": "In vivo protein-DNA interactions connect each transcription factor with its direct targets to form a gene network scaffold. To map these protein-DNA interactions comprehensively across entire mammalian genomes, we developed a large-scale chromatin immunoprecipitation assay (ChIPSeq) based on direct ultrahigh-throughput DNA sequencing. This sequence census method was then used to map in vivo binding of the neuron-restrictive silencer factor (NRSF; also known as REST, for repressor element-1 silencing transcription factor) to 1946 locations in the human genome. The data display sharp resolution of binding position [+/-50 base pairs (bp)], which facilitated our finding motifs and allowed us to identify noncanonical NRSF-binding motifs. These ChIPSeq data also have high sensitivity and specificity [ROC (receiver operator characteristic) area >/= 0.96] and statistical confidence (P <10(-4)), properties that were important for inferring new candidate interactions. These include key transcription factors in the gene network that regulates pancreatic islet cell development.", "corpus_id": 519841}}, {"query": {"sha": "4ac7328432243192caa6d838106d097f8e770290", "title": "Implementing a Rating-Based Item-to-Item Recommender System in PHP/SQL", "abstract": "User personalization and profiling is key to many succesful Web sites. Consider that there is considerable free content on the Web, but comparatively few tools to help us organize or mine such content for specific purposes. One solution is to ask users to rate resources so that they can help each other find better content: we call this rating-based collaborative filtering. 
This paper presents a database-driven approach to item-to-item collaborative filtering which is both easy to implement and able to support a full range of applications.", "corpus_id": 16509380}, "pos": {"sha": "142457c5bad8337342302b0b997517317b84a11c", "title": "Slope One Predictors for Online Rating-Based Collaborative Filtering", "abstract": "Rating-based collaborative filtering is the process of predicting how a user would rate a given item from other user ratings. We propose three related slope one schemes with predictors of the form f (x) = x + b, which precompute the average difference between the ratings of one item and another for users who rated both. Slope one algorithms are easy to implement, efficient to query, reasonably accurate, and they support both online queries and dynamic updates, which makes them good candidates for real-world systems. The basic SLOPE ONE scheme is suggested as a new reference scheme for collaborative filtering. By factoring in items that a user liked separately from items that a user disliked, we achieve results competitive with slower memory-based schemes over the standard benchmark EachMovie and Movielens data sets while better fulfilling the desiderata of CF applications.", "corpus_id": 2361137}, "neg": {"sha": "7b3e3bc54d597ccd44374f4d77a1da04b0e1f909", "title": "RFID Circuit Design with Optimized CMOS Inductor for Monitoring Biomedical Signals", "abstract": "RFID is evolving as a major technology enabler for identifying and tracking goods. RFID applications in the biomedical area not only need to detect but also require monitoring and transmitting vital signals like the electrocardiogram (and heartbeat), blood pressure, body temperature, etc. The basic building blocks of an RFID tag for biomedical applications are studied. The sizing and powering up of the tags are critical issues, and this paper mainly focuses on the design and optimization of the inductor for an RFID circuit. UHF is chosen for this application mainly because of the practical values of inductance that can be realized in a CMOS chip to operate at this operating frequency. The layout parameters of the inductor are optimized for a maximum Q-factor value at the desired frequency. Using the optimized inductor, the functioning of the important blocks of an RFID tag, such as the power feeding circuit, the heartbeat detection circuit, and the modulation circuit, is verified. This work would be very relevant for the remote monitoring of biomedical signals.", "corpus_id": 14364470}}, {"query": {"sha": "d2657ded9605ea0ccd96224013c9ae007f50ddc9", "title": "ASAP: A Self-Adaptive Prediction System for Instant Cloud Resource Demand Provisioning", "abstract": "The promise of cloud computing is to provide computing resources instantly whenever they are needed. The state-of-the-art virtual machine (VM) provisioning technology can provision a VM in tens of minutes. This latency is unacceptable for jobs that need to scale out during computation. To truly enable on-the-fly scaling, a new VM needs to be ready in seconds upon request. In this paper, we present an online temporal data mining system called ASAP to model and predict cloud VM demands. ASAP aims to extract high-level characteristics from the VM provisioning request stream and notify the provisioning system to prepare VMs in advance. To quantify this, we propose the Cloud Prediction Cost, which encodes the cost and constraints of the cloud and guides the training of prediction algorithms. 
Moreover, we utilize a two-level ensemble method to capture the characteristics of the highly transient demand time series. Experimental results using historical data from an IBM cloud in operation demonstrate that ASAP significantly improves the cloud service quality and makes on-the-fly provisioning possible.", "corpus_id": 6469915}, "pos": {"sha": "8aa09720221bdeef43e150fc7f6896f71600fb86", "title": "The cost of a cloud: research problems in data center networks", "abstract": "The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.", "corpus_id": 4410540}, "neg": {"sha": "b76bbddf92d247705c839436b5836081ab0add8a", "title": "The Indexing and Retrieval of Document Images: A Survey", "abstract": "The economic feasibility of maintaining large databases of document images has created a tremendous demand for robust ways to access and manipulate the information these images contain. In an attempt to move toward a paper-less office, large quantities of printed documents are often scanned and archived as images, without adequate index information. One way to provide traditional database indexing and retrieval capabilities is to fully convert the document to an electronic representation which can be indexed automatically. Unfortunately, there are many factors which prohibit complete conversion, including high cost, low document quality, and the fact that many non-text components cannot be adequately represented in a converted form. In such cases, it can be advantageous to maintain a copy of and use the document in image form. In this paper, we provide a survey of methods developed by researchers to access and manipulate document images without the need for complete and accurate conversion. We briefly discuss traditional text indexing techniques on imperfect data and the retrieval of partially converted documents. This is followed by a more comprehensive review of techniques for the direct characterization, manipulation, and retrieval of images of documents containing text, graphics, and scene images. 
", "corpus_id": 11498408}}, {"query": {"sha": "f8a7b00b707ac908914852ab4c463aefbf1433c6", "title": "High-Isolation CMOS T/R Switch Design Using a Two-Stage Equivalent Transmission Line Structure", "abstract": "A fully integrated Ku-band transmit/receive (T/R) switch based on a two-stage equivalent transmission line structure has been designed using a 180-nm complementary metal-oxide-semiconductor (CMOS) process. An analysis shows a relation between the series inductance and the turn-on resistance for high isolation. A stack structure with feed-forward capacitors was chosen as a means of improving the power-handling capability of the switch. A low insertion loss (IL) of the switch was achieved by eliminating series transistors. The measured minimum ILs of the switch in the transmitter (TX) and receiver (RX) modes are 2.7 dB and 2.3 dB, respectively. The measured isolations in the TX and RX modes are greater than 34 and 25 dB, respectively, from 15 to 18 GHz. The design reaches a measured input 1-dB power compression point ($IP_{1dB}$) of 22 dBm at 17 GHz. The switch meets stringent isolation, insertion-loss, and power-handling requirements along with the capability of full integration, demonstrating its great potential for use in fully integrated CMOS T/R chips.", "corpus_id": 28320277}, "pos": {"sha": "c9996c130e151fccb3e9a846b71b15b8838c5a27", "title": "Ultra-Compact High-Linearity High-Power Fully Integrated DC\u201320-GHz 0.18-$\\mu$m CMOS T/R Switch", "abstract": "A fully integrated ultra-broadband transmit/receive (T/R) switch has been developed using nMOS transistors with a deep n-well in a standard 0.18-\u03bcm CMOS process, and demonstrates unprecedented insertion loss, isolation, power handling, and linearity. The new CMOS T/R switch exploits patterned-ground-shield on-chip inductors together with the MOSFETs' parasitic capacitances to synthesize artificial transmission lines, which result in low insertion loss over an extremely wide bandwidth. Negative bias to the bulk or positive bias to the drain of the MOSFET devices with floating bulk is used to reduce the effects of the parasitic diodes, leading to enhanced linearity and power handling for the switch. Within dc-10, 10-18, and 18-20 GHz, the developed CMOS T/R switch exhibits insertion loss of less than 0.7, 1.0, and 2.5 dB and isolation between 32-60, 25-32, and 25-27 dB, respectively. The measured 1-dB power compression point and input third-order intercept point reach as high as 26.2 and 41 dBm, respectively. The new CMOS T/R switch has a die area of only 230 \u03bcm \u00d7 250 \u03bcm. The achieved ultra-broadband performance and high power-handling capability, approaching those achieved in GaAs-based T/R switches, along with the full-integration ability confirm the usefulness of switches in CMOS technology, and demonstrate their great potential for many broadband CMOS radar and communication applications", "corpus_id": 14272961}, "neg": {"sha": "36dec5f23a63bc701fee46610ee68b81080878cd", "title": "Domain-independent sentence type classification: examining the scenarios of scientific abstracts and scrum protocols", "abstract": "The amount of available textual information in everybody's daily environment is increasing steadily. To satisfy a user's information needs, the user has to examine numerous documents until the required information has been found. 
Additionally, the relevant information is often contained in only short sections of the considered documents. This leaves a large amount of irrelevant text for the user to read, a problem that could be solved by automatically filtering the relevant information within textual documents. In this article we present our findings on the classification of sentences according to the type of information contained. Our evaluation was conducted on documents from two domains: abstracts of scientific publications and protocols of Scrum retrospective meetings. The results show the feasibility of our approach for finding a higher percentage of relevant information within textual documents and hence reducing the information overload for the users.", "corpus_id": 11096449}}, {"query": {"sha": "bc37b473630a2cdeee25c0b862202951ec2e6e0a", "title": "CORPP: Commonsense Reasoning and Probabilistic Planning, as Applied to Dialog with a Mobile Robot", "abstract": "In order to be fully robust and responsive to a dynamically changing real-world environment, intelligent robots will need to engage in a variety of simultaneous reasoning modalities. In particular, in this paper we consider their needs to i) reason with commonsense knowledge, ii) model their nondeterministic action outcomes and partial observability, and iii) plan toward maximizing long-term rewards. On one hand, Answer Set Programming (ASP) is good at representing and reasoning with commonsense and default knowledge, but is ill-equipped to plan under probabilistic uncertainty. On the other hand, Partially Observable Markov Decision Processes (POMDPs) are strong at planning under uncertainty toward maximizing long-term rewards, but are not designed to incorporate commonsense knowledge and inference. This paper introduces the CORPP algorithm which combines P-log, a probabilistic extension of ASP, with POMDPs to integrate commonsense reasoning with planning under uncertainty. Our approach is fully implemented and tested on a shopping request identification problem both in simulation and on a real robot. Compared with existing approaches using P-log or POMDPs individually, we observe significant improvements in both efficiency and accuracy.", "corpus_id": 7935891}, "pos": {"sha": "1cb0954115b1e2350627d9bfcab33cc44b635f15", "title": "Markov logic networks", "abstract": "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "corpus_id": 12698795}, "neg": {"sha": "0e4d18f396a9d68a9505a1fc7f7b70e1009fc491", "title": "Content-based image retrieval systems: A survey", "abstract": "In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. 
Many of these collections are the product of digitizing existing collections of analogue photographs, diagrams, drawings, paintings, and prints. Usually, the only way of searching these collections was by keyword indexing, or simply by browsing. Digital image databases, however, open the way to content-based searching. In this paper we survey some technical aspects of current content-based image retrieval systems. A number of other overviews on image database systems, image retrieval, or multimedia information systems have been published; see e.g. [TY84], [Gro94], [GR95], [Jai96], [EG99], [RHC99]. This survey, however, is about the functionality of contemporary image retrieval systems in terms of technical aspects: querying, relevance feedback, features, matching, indexing data structures, and result presentation. A number of keyword-based general WWW search engines allow the user to indicate that the media type must be images; see for example HotBot (http://hotbot.lycos.com/), and NBCi (http://www.nci.com/). A number of other general search engines are more specifically for images, such as Yahoo!'s Image Surfer (http://isurf.yahoo.com/) or the multimedia searcher of Lycos (http://multimedia.lycos.com/), but they are still only keyword based. There are many special image collections on the web that can be searched with a number of alphanumerical keys. For example, ImageFinder (http://sunsite.berkeley.edu/ImageFinder/) provides a list of such collections as a tool to help teachers locate historical photographs from collections around the world. AltaVista Photofinder (see below) is a search engine that allows content-based image retrieval, both from special collections and from the Web. In the remainder of this paper, we will give an overview of such content-based image retrieval systems, both commercial/production systems and research/demonstration systems.", "corpus_id": 10757073}}, {"query": {"sha": "ee79c3df315286ce5c67bd4345f29a3ebb6ed969", "title": "Heightened stress responsiveness and emotional reactivity during pubertal maturation: implications for psychopathology.", "abstract": "The onset of adolescence, and more specifically the advent of pubertal maturation, represents a key developmental window for understanding the emergence of psychopathology in youth. The papers in this special section examine normative differences in the neurobiology of stress and emotional functioning over the peripubertal period. The work in this special section helps to fill in gaps in our understanding of key mechanisms that may contribute to increased vulnerabilities in behavioral and psychiatric morbidity during this developmental period.", "corpus_id": 13747147}, "pos": {"sha": "cf292aabdd708cd8bd510f99cd92621a6660597f", "title": "Brain Activation during Face Perception: Evidence of a Developmental Change", "abstract": "Behavioral studies suggest that children under age 10 process faces using a piecemeal strategy based on individual distinctive facial features, whereas older children use a configural strategy based on the spatial relations among the face's features. The purpose of this study was to determine whether activation of the fusiform gyrus, which is involved in face processing in adults, is greater during face processing in older children (12\u201314 years) than in younger children (8\u201310 years). Functional MRI scans were obtained while children viewed faces and houses. 
A developmental change was observed: Older children, but not younger children, showed significantly more activation in bilateral fusiform gyri for faces than for houses. Activation in the fusiform gyrus correlated significantly with age and with a behavioral measure of configural face processing. Regions believed to be involved in processing basic facial features were activated in both younger and older children. Some evidence was also observed for greater activation for houses versus faces for the older children than for the younger children, suggesting that processing of these two stimulus types becomes more differentiated as children age. The current results provide biological insight into changes in visual processing of faces that occur with normal development.", "corpus_id": 18064034}, "neg": {"sha": "9288441325b7bdf7feb3e84a5bca3b722c3e2958", "title": "3D integration technologies for a planar dual band active array in Ka-band", "abstract": "In this paper, the concept and 3D integration technologies for a dual band e-scan antenna system for mobile satellite communications are presented. The antenna architecture is based on a low-profile (1 cm) and low-cost approach using multifunctional Silicon Germanium Bipolar Complementary Metal-Oxide Semiconductor (SiGe BiCMOS) chip-sets and dual band planar antennas, covering both Ka-band operational frequencies, up-link at 30 GHz and down-link at 20 GHz. The key 3D integration methods for the RF distribution network and vertical transitions have been developed. First measurements show, for different paths of the 36-way RF network, an Insertion Loss (IL) between 2.5 dB and 7.5 dB and a Return Loss (RL) better than 10 dB. For vertical transitions, the IL is smaller than 0.5 dB and the RL is better than 10 dB.", "corpus_id": 20349768}}, {"query": {"sha": "b4b89fca7c1704f48be86ae8d547d18e9ff46821", "title": "WatchConnect: A Toolkit for Prototyping Smartwatch-Centric Cross-Device Applications", "abstract": "People increasingly use smartwatches in tandem with other devices such as smartphones, laptops or tablets. This allows for novel cross-device applications that use the watch as both input device and output display. However, despite the increasing availability of smartwatches, prototyping cross-device watch-centric applications remains a challenging task. Developers are limited in the applications they can explore as available toolkits provide only limited access to different types of input sensors for cross-device interactions. To address this problem, we introduce WatchConnect, a toolkit for rapidly prototyping cross-device applications and interaction techniques with smartwatches. The toolkit provides developers with (i) an extendable hardware platform that emulates a smartwatch, (ii) a UI framework that integrates with an existing UI builder, and (iii) a rich set of input and output events using a range of built-in sensor mappings. We demonstrate the versatility and design space of the toolkit with five interaction techniques and applications.", "corpus_id": 6728624}, "pos": {"sha": "93107e0b5d64324ba71ecb1fdbc298a1c421368e", "title": "SideSight: multi-\"touch\" interaction around small devices", "abstract": "Interacting with mobile devices using touch can lead to fingers occluding valuable screen real estate. For the smallest devices, the idea of using a touch-enabled display is almost wholly impractical. In this paper we investigate sensing user touch around small screens like these. 
We describe a prototype device with infra-red (IR) proximity sensors embedded along each side and capable of detecting the presence and position of fingers in the adjacent regions. When this device is rested on a flat surface, such as a table or desk, the user can carry out single and multi-touch gestures using the space around the device. This gives a larger input space than would otherwise be possible, which may be used in conjunction with or instead of on-display touch input. Following a detailed description of our prototype, we discuss some of the interactions it affords.", "corpus_id": 13162216}, "neg": {"sha": "9d38c14de6ace6763bec9b115582e18f672ac0a2", "title": "Shift: a technique for operating pen-based interfaces using touch", "abstract": "Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions, many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.", "corpus_id": 2040521}}, {"query": {"sha": "a5ade56a2f37f3f5f5b956b0c5546de9a3428537", "title": "Relational cost analysis", "abstract": "Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security and privacy. Techniques based on program analysis, type systems and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have not received attention. Naively combining the worst and best case execution costs of the two programs does not work well in many cases because such analysis forgets the similarities between the programs or the inputs. \nIn this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop , a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing\u2014relational typing for reasoning about similar computations/inputs and unary typing for reasoning about unrelated computations/inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. 
\nWe prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.", "corpus_id": 1352012}, "pos": {"sha": "5e74f5ba5c7174e3ecf6ab2581a5e745bb69dd54", "title": "Will you still compile me tomorrow? static cross-version compiler validation", "abstract": "This paper describes a cross-version compiler validator and measures its effectiveness on the CLR JIT compiler. The validator checks for semantically equivalent assembly language output from various versions of the compiler, including versions across a seven-month time period, across two architectures (x86 and ARM), across two compilation scenarios (JIT and MDIL), and across optimization levels. For month-to-month comparisons, the validator achieves a false alarm rate of just 2.2%. To help understand reported semantic differences, the validator performs a root-cause analysis on the counterexample traces generated by the underlying automated theorem proving tools. This root-cause analysis groups most of the counterexamples into a small number of buckets, reducing the number of counterexamples analyzed by hand by anywhere from 53% to 96%. The validator ran on over 500,000 methods across a large suite of test programs, finding 12 previously unknown correctness and performance bugs in the CLR compiler.", "corpus_id": 8712721}, "neg": {"sha": "3eb9f4ca21bd104b1d9963a5a74e0ad48a1a1bdf", "title": "Self-supervised Spatiotemporal Feature Learning by Video Geometric Transformations", "abstract": "To alleviate the expensive cost of data collection and annotation, many self-supervised learning methods were proposed to learn image representations without human-labeled annotations. However, self-supervised learning for video representations is not yet well-addressed. In this paper, we propose a novel 3DConvNet-based fully self-supervised framework to learn spatiotemporal video features without using any human-labeled annotations. First, a set of pre-designed geometric transformations (e.g. rotating 0\u00b0, 90\u00b0, 180\u00b0, and 270\u00b0) is applied to each video. Then a pretext task can be defined as \u201crecognizing the pre-designed geometric transformations.\u201d Therefore, the spatiotemporal video features can be learned in the process of accomplishing this pretext task without using human-labeled annotations. The learned spatiotemporal video representations can further be employed as pre-trained features for different video-related applications. The proposed geometric transformations (e.g. rotations) prove effective for learning representative spatiotemporal features in our 3DConvNet-based fully self-supervised framework. With the pre-trained spatiotemporal features from two large video datasets, the performance of action recognition is significantly boosted by 20.4% on the UCF101 dataset and 16.7% on the HMDB51 dataset, respectively, compared to a model trained from scratch. 
Furthermore, our framework outperforms the state of the art among fully self-supervised methods on both the UCF101 and HMDB51 datasets, achieving 62.9% and 33.7% accuracy, respectively.", "corpus_id": 53866070}}, {"query": {"sha": "3a30fba8f6abd80d0aac0fa2f5da66ab468b737c", "title": "Clusters, language models, and ad hoc information retrieval", "abstract": "The language-modeling approach to information retrieval provides an effective statistical framework for tackling various problems and often achieves impressive empirical performance. However, most previous work on language models for information retrieval focused on document-specific characteristics, and therefore did not take into account the structure of the surrounding corpus, a potentially rich source of additional information. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in terms of mean average precision (MAP) and recall, and our new interpolation algorithm posts statistically significant performance improvements for both metrics over all six corpora tested. An important aspect of our work is the way we model corpus structure. In contrast to most previous work on cluster-based retrieval that partitions the corpus, we demonstrate the effectiveness of a simple strategy based on a nearest-neighbors approach that produces overlapping clusters.", "corpus_id": 16864598}, "pos": {"sha": "1e56ed3d2c855f848ffd91baa90f661772a279e1", "title": "Latent Dirichlet Allocation", "abstract": "We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.", "corpus_id": 3177797}, "neg": {"sha": "309075bd7a974a309783578449c51e4f22d69d1e", "title": "Gene ontology analysis for RNA-seq: accounting for selection bias", "abstract": "We present GOseq, an application for performing Gene Ontology (GO) analysis on RNA-seq data. GO analysis is widely used to reduce complexity and highlight biological processes in genome-wide expression studies, but standard methods give biased results on RNA-seq data due to over-detection of differential expression for long and highly expressed transcripts. Application of GOseq to a prostate cancer data set shows that GOseq dramatically changes the results, highlighting categories more consistent with the known biology.", "corpus_id": 1548824}}, {"query": {"sha": "ebd213c3348e8f35366a98806ea807c445301d0d", "title": "Knowledge sharing: moving away from the obsession with best practices", "abstract": "Purpose \u2013 How companies can become better at knowing what they know and at sharing what they know has in recent years become a dominant field of research within knowledge management. 
The literature focuses on why people share knowledge, or why they fail to share knowledge, whilst the discussion of what they actually share has been pinned down to the concept of best practices. In this paper it is argued that there is more to knowledge sharing than the sharing of best practices. Knowledge sharing is more than the closing of performance gaps and the sharing of stocks of knowledge \u2013 knowledge sharing is also about bridging situations of organizational interdependencies and thereby supporting ongoing organizational activities. Design/methodology/approach \u2013 The paper is both theoretical and empirical. Theoretically, the concept of organizational interdependence is applied to create a conceptual framework encompassing four types of knowledge to be shared. The theoretical framework is applied to a case company to empirically illustrate how knowledge sharing encompasses different types of knowledge. Findings \u2013 The paper identifies four types of knowledge that are pivotal to share: professional knowledge, coordinating knowledge, object-based knowledge, and know-who. Hence, the paper challenges the common belief that knowledge sharing is solely about sharing best practices. Practical implications \u2013 Since knowledge sharing encompasses at least four types of knowledge, the practice of facilitating knowledge sharing must necessarily focus on different channels enabling the sharing of knowledge. The practical implications of the paper hence direct attention not solely to sharing best practices but also to knowledge that bridges organizational interdependencies. Originality/value \u2013 The paper argues that best practices have dominated the discourse on what knowledge is to be shared, but states that, to become better at understanding and practising knowledge sharing, one must expand one\u2019s view of what knowledge is being shared.", "corpus_id": 18919701}, "pos": {"sha": "85dfb2913665a3f05130e4ef064d06f1cd5c9b3b", "title": "A Relational View of Information Seeking and Learning in Social Networks", "abstract": "Research in organizational learning has demonstrated processes and occasionally performance implications of acquisition of declarative (know-what) and procedural (know-how) knowledge. However, considerably less attention has been paid to learned characteristics of relationships that affect the decision to seek information from other people. Based on a review of the social network, information processing, and organizational learning literatures, along with the results of a previous qualitative study, we propose a formal model of information seeking in which the probability of seeking information from another person is a function of (1) knowing what that person knows; (2) valuing what that person knows; (3) being able to gain timely access to that person\u2019s thinking; and (4) perceiving that seeking information from that person would not be too costly. We also hypothesize that the knowing, access, and cost variables mediate the relationship between physical proximity and information seeking. The model is tested using two separate research sites to provide replication. The results indicate strong support for the model and the mediation hypothesis (with the exception of the cost variable). Implications are drawn for the study of both transactive memory and organizational learning, as well as for management practice. 
(Information; Social Networks; Organizational Learning; Transactive Knowledge)", "corpus_id": 15632422}, "neg": {"sha": "e739272ca474e8947408f17e25b440e338c63829", "title": "Do plant mites commonly prefer the underside of leaves?", "abstract": "The adaxial (upper) and abaxial (lower) surfaces of a plant leaf provide heterogeneous habitats for small arthropods with different environmental conditions, such as light, humidity, and surface morphology. As for plant mites, some agricultural pest species and their natural enemies have been observed to favor the abaxial leaf surface, which is considered an adaptation to avoid rain or solar ultraviolet radiation. However, whether such a preference for the leaf underside is a common behavioral trait in mites on wild vegetation remains unknown. The authors conducted a 2-year survey on the foliar mite assemblage found on Viburnum erosum var. punctatum, a deciduous shrub on which several mite taxa occur throughout the seasons, and 14 sympatric tree or shrub species in secondary broadleaf-forest sites in Kyoto, west\u2013central Japan. We compared adaxial\u2013abaxial surface distributions of mites among mite taxa, seasons, and morphology of host leaves (presence/absence of hairs and domatia). On V. erosum var. punctatum, seven of 11 distinguished mite taxa were significantly distributed in favor of abaxial leaf surfaces and the trend was seasonally stable, except for Eriophyoidea. Mite assemblages on 15 plant species were significantly biased towards the abaxial leaf surfaces, regardless of surface morphology. Our data suggest that many mite taxa commonly prefer to stay on abaxial leaf surfaces in wild vegetation. Oribatida displayed a relatively neutral distribution, and in Tenuipalpidae, the ratio of eggs collected from the adaxial versus the abaxial side was significantly higher than the ratio of the motile individuals, implying that some mite taxa exploit adaxial leaf surfaces as habitat.", "corpus_id": 3757519}}, {"query": {"sha": "4cea716342c7dc14a495de2092a1d67864654243", "title": "Automating 3D wireless measurements with drones", "abstract": "Wireless signals and networks are ubiquitous. Though more reliable than ever, wireless networks still struggle with weak coverage, blind spots, and interference. Having a strong understanding of wireless signal propagation is essential for increasing coverage, optimizing performance, and minimizing interference for wireless networks. Extensive studies have analyzed the propagation of wireless signals and proposed theoretical models to simulate wireless signal propagation. Unfortunately, models of signal propagation are often not accurate in reality. Real-world signal measurements are required for validation.\n Existing methods for collecting wireless measurements either involve researchers walking to each location of interest and manually collecting measurements, or place sensors at each measurement location. As such, they require large amounts of time and effort and can be costly. We propose DroneSense, a system for measuring wireless signals in 3D space using autonomous drones. DroneSense reduces the time and effort required for measurement collection, and is affordable and accessible to all users. 
It provides researchers with an efficient method to quickly analyze wireless coverage and test their wireless propagation models.", "corpus_id": 16283237}, "pos": {"sha": "7b684afe9fbefa9c74075aa8a51b404ccbdf5499", "title": "A New Approach to Linear Filtering and Prediction Problems", "abstract": null, "corpus_id": 1242324}, "neg": {"sha": "c5196dc5048b41670f55d0c8c923a6fd477a72e3", "title": "DoubleFusion: Real-Time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor", "abstract": "We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.", "corpus_id": 4891972}}, {"query": {"sha": "b5b6747dafd66fb37b78033cc51dd9aa94f2b9c7", "title": "CityTransfer: Transferring Inter- and Intra-City Knowledge for Chain Store Site Recommendation based on Multi-Source Urban Data", "abstract": "Chain businesses have been dominating the market in many parts of the world. It is important to identify the optimal locations for a new chain store. Recently, numerous studies have been done on chain store location recommendation. These studies typically learn a model based on the features of existing chain stores in the city and then predict what other sites are suitable for running a new one. However, these models do not work when a chain enterprise wants to open business in a new city where there is not enough data about this chain store. To solve the cold-start problem, we propose CityTransfer, which transfers chain store knowledge from semantically-relevant domains (e.g., other cities with rich knowledge, similar chain enterprises in the target city) for chain store placement recommendation in a new city. In particular, CityTransfer is a two-fold knowledge transfer framework based on collaborative filtering, which consists of the transfer rating prediction model, the inter-city knowledge association method and the intra-city semantic extraction method. 
Experiments using data on chain hotels in four different cities, crawled from Ctrip (a popular travel reservation website in China), together with urban characteristics extracted from several other data sources, validate the effectiveness of our approach for store site recommendation.", "corpus_id": 215790524}, "pos": {"sha": "71423bb17133402965a5cbaf31fa28b0366149fd", "title": "Personalized recommendation via cross-domain triadic factorization", "abstract": "Collaborative filtering (CF) is a major technique in recommender systems to help users find their potentially desired items. Since the data sparsity problem is quite commonly encountered in real-world scenarios, Cross-Domain Collaborative Filtering (CDCF) has become an emerging research topic in recent years. However, due to the lack of sufficiently dense explicit feedback, and even the absence of feedback in users' uninvolved domains, current CDCF approaches may not perform satisfactorily in user preference prediction. In this paper, we propose a generalized Cross Domain Triadic Factorization (CDTF) model over the triadic relation user-item-domain, which can better capture the interactions between domain-specific user factors and item factors. In particular, we devise two CDTF algorithms to leverage user explicit and implicit feedback, respectively, along with a genetic algorithm based weight parameter tuning algorithm to optimally trade off influence among domains. Finally, we conduct experiments to evaluate our models and compare them with other state-of-the-art models using two real-world datasets. The results show the superiority of our models over other comparative models.", "corpus_id": 13540908}, "neg": {"sha": "a7ef69e55244e3fa0b065746d596441103b293a5", "title": "Latent Class Models for Collaborative Filtering", "abstract": "This paper presents a statistical approach to collaborative filtering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for different variants of the aspect model and derive an approximate EM algorithm based on a variational principle for the two-sided clustering model. The benefits of the different models are experimentally investigated on a large movie data set.", "corpus_id": 632612}}, {"query": {"sha": "3e1ad55fc32d52893eea234bab7598306c9c0994", "title": "The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems", "abstract": "Clinical decision support systems (CDSS) are increasingly used by healthcare professionals for evidence-based diagnosis and treatment support. However, research has suggested that users often over-rely on system suggestions - even if the suggestions are wrong. Providing explanations could potentially mitigate misplaced trust in the system and over-reliance. In this paper, we explore how explanations are related to user trust and reliance, as well as what information users would find helpful to better understand the reliability of a system's decision-making. 
We investigated these questions through an exploratory user study in which healthcare professionals were observed using a CDSS prototype to diagnose hypothetical cases of fictional patients suffering from a balance-related disorder. Our results show that the level of system confidence had only a slight effect on trust and reliance. More importantly, giving a fuller explanation of the facts used in making a diagnosis had a positive effect on trust but also led to over-reliance issues, whereas less detailed explanations made participants question the system's reliability and led to self-reliance problems. To help them in their assessment of the reliability of the system's decisions, study participants wanted better explanations to help them interpret the system's confidence, to verify that the disorder fit the suggestion, to better understand the reasoning chain of the decision model, and to make differential diagnoses. Our work is a first step toward improved CDSS design that better supports clinicians in making correct diagnoses.", "corpus_id": 12739635}, "pos": {"sha": "69e1b72b558700d1e9866c075dcedfdd7f5eb913", "title": "Similarities and differences between human-human and human-automation trust: an integrative review", "abstract": null, "corpus_id": 39064140}, "neg": {"sha": "57bdc11a19d8996e07262aa02e2fcf250b46de34", "title": "Insights into the Emergent Bacterial Pathogen Cronobacter spp., Generated by Multilocus Sequence Typing and Analysis", "abstract": "Cronobacter spp. (previously known as Enterobacter sakazakii) is a bacterial pathogen affecting all age groups, with particularly severe clinical complications in neonates and infants. One recognized route of infection is the consumption of contaminated infant formula. For a recently recognized bacterial pathogen of considerable importance and regulatory concern, appropriate detection and identification schemes are required. The application of multilocus sequence typing (MLST) and analysis (MLSA) of the seven alleles atpD, fusA, glnS, gltB, gyrB, infB, and ppsA (concatenated length 3036 base pairs) has led to considerable advances in our understanding of the genus. This approach is supported by both the reliability of DNA sequencing over subjective phenotyping and the establishment of an MLST database, which has open access and is also curated: http://www.pubMLST.org/cronobacter. MLST has been used to describe the diversity of the newly recognized genus, was instrumental in the formal recognition of new Cronobacter species (C. universalis and C. condimenti), and revealed the high clonality of strains and the association of clonal complex 4 with neonatal meningitis cases. Clearly the MLST approach has considerable benefits over the use of non-DNA sequence based methods of analysis for newly emergent bacterial pathogens. 
The application of MLST and MLSA has dramatically improved our understanding of this opportunistic bacterium, which can cause irreparable damage to a newborn baby's brain, and has contributed to improved control measures to protect neonatal health.", "corpus_id": 10077198}}, {"query": {"sha": "12391aa3643e09dfdd74bf41b9b76f20d642de43", "title": "Training Recurrent Networks by Evolino", "abstract": "In recent years, gradient-based LSTM recurrent neural networks (RNNs) have solved many previously RNN-unlearnable tasks. Sometimes, however, gradient information is of little use for training RNNs, due to numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves weights to the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM.", "corpus_id": 11745761}, "pos": {"sha": "1a736409c7711f8673f31d366f583ddc8759547f", "title": "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets", "abstract": "The long short-term memory (LSTM) network trained by gradient descent solves difficult problems which traditional recurrent neural networks in general cannot. We have recently observed that the decoupled extended Kalman filter training algorithm allows for even better performance, reducing significantly the number of training steps when compared to the original gradient descent training algorithm. In this paper we present a set of experiments which are unsolvable by classical recurrent networks but which are solved elegantly, robustly, and quickly by LSTM combined with Kalman filters.", "corpus_id": 12588772}, "neg": {"sha": "82a70143fead623f6c5bef6c84b5d18b22c8fc56", "title": "Diacritics Recognition Based Urdu Nastalique OCR System", "abstract": "Improvements and new developments in the field of Artificial Intelligence have opened new horizons in the advancement of machines that originally have limited intelligence. Compared to the human brain, machines already have better computational speed and storage; however, there is still much room to improve their capability to acquire and process data and draw conclusions from it on their own. Optical Character Recognition (OCR) deals with printed designs and handwritten text. Many developments have been made in OCR so far for the recognition of Latin, Asian, Arabic, and Western texts. As far as Urdu is concerned, the work is almost non-existent compared with the languages cited above. One of the main reasons is the use of the extremely complex characters of the Nastalique style in Urdu. A methodology for the recognition and processing of the diacritics of Nastalique script is presented in this research work. The proposed technique is effective in recognizing cursive texts with an invariant font size of 48. A dataset of 6728 main Urdu Nastalique ligatures is used for testing purposes, showing that the new technique can recognize Nastalique ligatures with an accuracy of 97.40%. 
The proposed research work also focuses on improving the existing base mark association process of the Urdu OCR system.", "corpus_id": 7383837}}, {"query": {"sha": "cd63774bedfb2a7de7b0a9e77a7a810271253eef", "title": "Portraying Collective Spatial Attention in Twitter", "abstract": "Microblogging platforms such as Twitter have recently been used frequently for detecting real-time events. The spatial component, as reflected by user location, usually plays a key role in such systems. However, an often neglected source of spatial information is location mentions expressed in tweet contents. In this paper we demonstrate a novel visualization system for analyzing how Twitter users collectively talk about space and for uncovering correlations between geographical locations of Twitter users and the locations they tweet about. Our exploratory analysis is based on the development of a model of spatial information extraction and representation that allows building an effective visual analytics framework for large-scale datasets. We show visualization results based on a half-year-long dataset of Japanese tweets and a four-month-long collection of tweets from the USA. The proposed system allows observing many space-related aspects of tweet messages including the average scope of spatial attention of social media users and variances in spatial interest over time. The analytical framework we provide and the findings we outline can be valuable for scientists from diverse research areas and for any users interested in geographical and social aspects of shared online data.", "corpus_id": 6899127}, "pos": {"sha": "01ff2b834772dfc2b8b7ba00620b65abb9444a75", "title": "Event Detection in Twitter", "abstract": "Twitter, as a form of social media, is fast emerging in recent years. Users are using Twitter to report real-life events. This paper focuses on detecting those events by analyzing the text stream in Twitter. Although event detection has long been a research topic, the characteristics of Twitter make it a non-trivial task. Tweets reporting such events are usually overwhelmed by a high flood of meaningless \u201cbabbles\u201d. Moreover, the event detection algorithm needs to be scalable given the sheer volume of tweets. This paper attempts to tackle these challenges with EDCoW (Event Detection with Clustering of Wavelet-based Signals). EDCoW builds signals for individual words by applying wavelet analysis on the frequency-based raw signals of the words. It then filters away the trivial words by looking at their corresponding signal autocorrelations. The remaining words are then clustered to form events with a modularity-based graph partitioning technique. Experimental results show promising results for EDCoW.", "corpus_id": 5550836}, "neg": {"sha": "1c7833f6ffdfa0191dca57529c0652ade8ae8bc2", "title": "Assessment of intracranial translucency (IT) in the detection of spina bifida at the 11-13-week scan.", "abstract": "OBJECTIVE\nPrenatal diagnosis of open spina bifida is carried out by ultrasound examination in the second trimester of pregnancy. The diagnosis is suspected by the presence of a 'lemon-shaped' head and a 'banana-shaped' cerebellum, thought to be consequences of caudal displacement of the hindbrain. 
The aim of the study was to determine whether, in fetuses with spina bifida, this displacement of the brain is evident from the first trimester of pregnancy.\n\n\nMETHODS\nIn women undergoing routine ultrasound examination at 11-13 weeks' gestation as part of screening for chromosomal abnormalities, a mid-sagittal view of the fetal face was obtained to measure nuchal translucency thickness and assess the nasal bone. In this view the fourth ventricle, which presents as an intracranial translucency (IT) between the brain stem and choroid plexus, is easily visible. We measured the anteroposterior diameter of the fourth ventricle in 200 normal fetuses and in four fetuses with spina bifida.\n\n\nRESULTS\nIn the normal fetuses the fourth ventricle was always visible and the median anteroposterior diameter increased from 1.5 mm at a crown-rump length (CRL) of 45 mm to 2.5 mm at a CRL of 84 mm. In the four fetuses with spina bifida the ventricle was compressed by the caudally displaced hindbrain and no IT could be seen.\n\n\nCONCLUSION\nThe mid-sagittal view of the face as routinely used in screening for chromosomal defects can also be used for early detection of open spina bifida.", "corpus_id": 24010240}}, {"query": {"sha": "9a488e13b90bfce3c34ae0a37655e997b8f6c447", "title": "Variational Regularization and Fusion of Surface Normal Maps", "abstract": "In this work we propose an optimization scheme for variational, vectorial denoising and fusion of surface normal maps. These are common outputs of shape from shading, photometric stereo or single image reconstruction methods, but tend to be noisy and require post-processing for further use. Processing of normal maps, which do not provide knowledge about the underlying scene depth, is complicated by their unit-length constraint, which renders the optimization non-linear and non-convex. The presented approach builds upon a linearization of the constraint to obtain a convex relaxation, while guaranteeing convergence. Experimental results demonstrate that our algorithm generates more consistent representations from estimated and potentially complementary normal maps.", "corpus_id": 10831451}, "pos": {"sha": "31eceb2c642a54322fcd7853ea34c6db7c838550", "title": "Shape and Spatially-Varying BRDFs from Photometric Stereo", "abstract": "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach.", "corpus_id": 1669309}, "neg": {"sha": "27a7a1ffa6e32128d6769d46061f330b4e9b579c", "title": "Stock Chart Pattern recognition with Deep Learning", "abstract": "This study evaluates the performance of CNN and LSTM models for recognizing common chart patterns in historical stock data. 
It presents two common patterns, the method used to build the training set, the neural networks architectures and the accuracies obtained.", "corpus_id": 51892007}}, {"query": {"sha": "2cf3598af28e3317666817713a354d6967405b7d", "title": "Actionable information in vision", "abstract": "I propose a notion of visual information as the complexity not of the raw images, but of the images after the effects of nuisance factors such as viewpoint and illumination are discounted. It is rooted in ideas of J. J. Gibson, and stands in contrast to traditional information as entropy or coding length of the data regardless of its use, and regardless of the nuisance factors affecting it. The non-invertibility of nuisances such as occlusion and quantization induces an \u201cinformation gap\u201d that can only be bridged by controlling the data acquisition process. Measuring visual information entails early vision operations, tailored to the structure of the nuisances so as to be \u201clossless\u201d with respect to visual decision and control tasks (as opposed to data transmission and storage tasks implicit in traditional Information Theory). I illustrate these ideas on visual exploration, whereby a \u201cShannonian Explorer\u201d guided by the entropy of the data navigates unaware of the structure of the physical space surrounding it, while a \u201cGibsonian Explorer\u201d is guided by the topology of the environment, despite measuring only images of it, without performing 3D reconstruction. The operational definition of visual information suggests desirable properties that a visual representation should possess to best accomplish vision-based decision and control tasks.", "corpus_id": 369746}, "pos": {"sha": "07f9592a78ff4f8301dafc93699a32e855da3275", "title": "Computational modelling of visual attention", "abstract": "Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention.", "corpus_id": 2329233}, "neg": {"sha": "27a949627289d48c4e213ab3435974093b35bc60", "title": "14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks", "abstract": "Recently, deep learning with convolutional neural networks (CNNs) and recurrent neural networks (RNNs) has become universal in all-around applications. CNNs are used to support vision recognition and processing, and RNNs are able to recognize time varying entities and to support generative models. Also, combining both CNNs and RNNs can recognize time varying visual entities, such as action and gesture, and to support image captioning [1]. 
However, the computational requirements of CNNs are quite different from those of RNNs. Fig. 14.2.1 shows a computation and weight-size analysis of convolution layers (CLs), fully-connected layers (FCLs) and RNN-LSTM layers (RLs). While CLs require a massive amount of computation with a relatively small number of filter weights, FCLs and RLs require a relatively small amount of computation with a huge number of filter weights. Therefore, when FCLs and RLs are accelerated with SoCs specialized for CLs, they suffer from high memory transaction costs, low PE utilization, and a mismatch of the computational patterns. Conversely, when CLs are accelerated with FCL- and RL-dedicated SoCs, they cannot exploit reusability and achieve the required throughput. So far, works have considered acceleration of CLs, such as [2\u20134], or FCLs and RLs like [5]. However, there has been no work on a combined CNN-RNN processor. In addition, a highly reconfigurable CNN-RNN processor with high energy-efficiency is desirable to support general-purpose deep neural networks (DNNs).", "corpus_id": 206998709}}, {"query": {"sha": "8e45bcae56596673a84779e3e3b20f2d873ec968", "title": "A Unified Probabilistic Model for Aspect-Level Sentiment Analysis", "abstract": "In this thesis, we develop a new probabilistic model for aspect-level sentiment analysis based on POSLDA, a topic classifier that incorporates syntax modelling for better performance. POSLDA separates semantic words from purely functional words and restricts its topic modelling to the semantic words. We take this a step further by modelling the probability of a semantic word expressing sentiment based on its part-of-speech class and then modelling its sentiment if it is a sentiment word. We restructure the popular approach of topic-sentiment distributions within documents and add a few novel heuristic improvements. Our experiments demonstrate that our model produces results competitive with state-of-the-art systems. In addition to the model, we develop a multi-threaded version of the popular Gibbs sampling algorithm that can perform inference over 1000 times faster than the traditional implementation while preserving the quality of the results.", "corpus_id": 21074653}, "pos": {"sha": "59d97d6d76eff9238bb0dcadd416ec9523d204af", "title": "Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL", "abstract": "This paper presents a simple unsupervised learning algorithm for recognizing synonyms, based on statistical data acquired by querying a Web search engine. The algorithm, called PMI-IR, uses Pointwise Mutual Information (PMI) and Information Retrieval (IR) to measure the similarity of pairs of words. PMI-IR is empirically evaluated using 80 synonym test questions from the Test of English as a Foreign Language (TOEFL) and 50 synonym test questions from a collection of tests for students of English as a Second Language (ESL). On both tests, the algorithm obtains a score of 74%. PMI-IR is contrasted with Latent Semantic Analysis (LSA), which achieves a score of 64% on the same 80 TOEFL questions.
The paper discusses potential applications of the new unsupervised learning algorithm and some implications of the results for LSA and LSI (Latent Semantic Indexing).", "corpus_id": 5509836}, "neg": {"sha": "6c2c86d2e8ce185d8dc13ad9eede50d8bfebec9e", "title": "Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips", "abstract": "Deep Neural Networks (DNN) achieve human-level performance in many image analytics tasks but DNNs are mostly deployed to GPU platforms that consume a considerable amount of power. New hardware platforms using lower precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, for deploying DNNs to energy-efficient neuromorphic chips, the incompatibility between the continuous neurons and synaptic weights of traditional DNNs and the discrete spiking neurons and synapses of neuromorphic chips needs to be overcome. Previous work has achieved this by training a network to learn continuous probabilities, before it is deployed to a neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly sampling these probabilities. The main contribution of this paper is a new learning algorithm that learns a TrueNorth configuration ready for deployment. We achieve this by directly training a binary hardware crossbar that accommodates the TrueNorth axon configuration constraints, and we propose a different neuron model. Results of our approach trained on electroencephalogram (EEG) data show a significant improvement over previous work (76% vs 86% accuracy) while maintaining state of the art performance on the MNIST handwritten data set.", "corpus_id": 593754}}, {"query": {"sha": "0820ccfdba775c304bedb9c3d82ee8758e0a416b", "title": "Revisiting Multiple Instance Neural Networks", "abstract": "Recently, neural networks and multiple instance learning have both been attractive topics in Artificial Intelligence related research fields. Deep neural networks have achieved great success in supervised learning problems, and multiple instance learning as a typical weakly-supervised learning method is effective for many applications in computer vision, biometrics, natural language processing, etc. In this paper, we revisit the problem of solving multiple instance learning problems using neural networks. Neural networks are appealing for solving the multiple instance learning problem. The multiple instance neural networks perform multiple instance learning in an end-to-end way, taking a bag with a variable number of instances as input and directly outputting the bag label. All of the parameters in a multiple instance network can be optimized via back-propagation. We propose a new multiple instance neural network to learn bag representations, which is different from the existing multiple instance neural networks that focus on estimating instance labels. In addition, recent tricks developed in deep learning have been studied in multiple instance networks; we find deep supervision is effective for boosting bag classification accuracy. In the experiments, the proposed multiple instance networks achieve state-of-the-art or competitive performance on several MIL benchmarks.
Moreover, it is extremely fast for both testing and training, e.g., it takes only 0.0003 seconds to predict a bag and a few seconds to train on a MIL dataset on a moderate CPU.", "corpus_id": 17034913}, "pos": {"sha": "64372501affd8571db20dc606b0146a76c266303", "title": "Multiple instance classification: Review, taxonomy and comparative study", "abstract": "Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed until now. Despite this fact, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis focused on the classification task (i.e., leaving out other learning tasks such as regression). In order to perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods that extract global bag-level information show a clearly superior performance in general. In this sense, the analysis permits us to understand why some types of methods are more successful than others, and it permits us to establish guidelines in the design of new MIL methods.", "corpus_id": 6825524}, "neg": {"sha": "b2988423738dc2be5a7a01ae773c12880ec8fcf1", "title": "Power loss of GaN transistor reverse diodes in a high frequency high voltage resonant rectifier", "abstract": "This paper presents power loss measurements of GaN transistor reverse diodes in high frequency high voltage conditions. To evaluate their performance, we use GaN transistor reverse diodes as rectifying devices in a class-DE resonant rectifier and operate the circuit at switching frequencies of 10s of MHz and with output voltages reaching 100s of volts. We use a thermometric calibration method to quantify the power loss in all GaN transistor reverse diodes and find that the losses increase with both switching frequency and output voltage. Further, our experiments show that the device power loss is neither hard switching loss caused by a faulty design of resonant rectifiers nor traditional conduction loss from the forward voltage drop and static on-resistance of the diode. The comparison between power dissipation and GaN transistor output capacitances suggests that the device capacitance might be correlated with the observed device power loss.", "corpus_id": 6476357}}, {"query": {"sha": "489c395c6602227b100c5ac1433e47859762d762", "title": "DEP: Detailed execution profile", "abstract": "In many areas of computer architecture design and program development, the knowledge of dynamic program behavior can be very handy. Several challenges beset the accurate and complete collection of dynamic control flow and memory reference information. These include scalability issues, runtime overhead, and code coverage. For example, while Tallam and Gupta's work on extending WPP (Whole Program Paths) showed good compressibility, their profile requires 500MBytes of intermediate memory space and an average of 23 times slowdown to be collected. To address these challenges, this paper presents DEP (Detailed Execution Profile). DEP captures the complete dynamic control flow, data dependency and memory reference of a whole program's execution.
The profile size is significantly reduced due to the insight that most information can be recovered from a tightly coupled record of control flow and register value changes. DEP is collected in an infrastructure called Adept (A dynamic execution profiling tool), which uses the DynamoRIO binary instrumentation framework to insert profile-collecting instructions within the running application. DEP profiles user-level code execution in its entirety, including interprocedural paths and the execution of multiple threads. The framework for collecting DEP has been tested on real, large and commercial applications. Our experiments show that DEP of Linux SPECInt 2000 benchmarks and Windows SysMark benchmarks can be collected with an average of 5 times slowdown while maintaining competitive compressibility. DEP's profile sizes are about 60% that of traditional profiles.", "corpus_id": 18280638}, "pos": {"sha": "288280e41d9a7f84f7fc372330422bf4da90e4d3", "title": "Arithmetic program paths", "abstract": "We present Arithmetic Program Paths, a novel, efficient way to compress program control-flow traces that reduces program bit traces to less than a fifth of their original size while being fast and memory efficient. In addition, our method supports online, selective tracing and compression of individual conditionals, trading off memory usage and compression rate. We achieve these properties by recording only the directions taken by conditional statements during program execution, and using arithmetic coding for compression. We provide the arithmetic coder with a probability distribution for each conditional that we obtain using branch prediction techniques. We implemented the technique and experimented on several SPEC 2000 programs. Our method matches the compression rate of state-of-the-art tools while being an order of magnitude faster.", "corpus_id": 9183337}, "neg": {"sha": "2325882905357b4764d9066e9eea1ba10510f9dc", "title": "Wearable Sensors in Syncope Management", "abstract": "Syncope is a common disorder with a lifetime prevalence of about 40%. Implantable cardiac electronic devices, including implantable loop recorders (ILR) and implantable cardioverter-defibrillators (ICD), are well established in syncope management. However, despite the successful use of ILR and ICD, diagnosis and therapy still remain challenging in many patients due to the complex hemodynamic interplay of cardiac and vascular adaptations during impending syncopes. Wearable sensors might overcome some limitations, including misdiagnosis and inappropriate defibrillator shocks, because a variety of physiological measures can now be easily acquired by a single non-invasive device at high signal quality. In neurally-mediated syncope (NMS), which is the most common cause of syncope, advanced signal processing methodologies paved the way to develop devices for early syncope detection. In contrast to the relatively benign NMS, in arrhythmia-related syncopes immediate therapeutical intervention, predominantly by electrical defibrillation, is often mandatory. However, in patients with a transient risk of arrhythmia-related syncope, limitations of ICD therapy might outweigh their potential therapeutic benefits. In this context the wearable cardioverter-defibrillator offers alternative therapeutical options for some high-risk patients.
Herein, we review recent evidence demonstrating that wearable sensors might be useful to overcome some limitations of implantable devices in syncope management.", "corpus_id": 15672415}}, {"query": {"sha": "58d720de01b2fee10cf921a872b9b84e1a4e273d", "title": "Improved Harris sub-pixel corner detection algorithm for chessboard image", "abstract": "The control point image locating accuracy of the calibration plate is one of the major factors which determine the accuracy of camera calibration. Thus we can improve the accuracy of camera calibration by improving the control point image locating accuracy of the calibration plate. An improved Harris sub-pixel corner detection algorithm is put forward to improve the control point image locating accuracy. Initial location of corner points using the Harris corner detection algorithm is implemented first, and then the improved Harris corner detection algorithm is used to realize sub-pixel locating accuracy. Corner points' coordinate values are obtained by means of initial location followed by fine location. The experimental results show that corner detection precision of chessboard images is greatly improved through the improved Harris sub-pixel corner detection algorithm, realizing sub-pixel corner detection.", "corpus_id": 14684343}, "pos": {"sha": "1d9e8248ec8b333f86233bb0c4a88060776f51b1", "title": "SUSAN\u2014A New Approach to Low Level Image Processing", "abstract": "This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.", "corpus_id": 15033310}, "neg": {"sha": "2b16178afa30502121f19db637430bf6716efaee", "title": "AC Voltage Regulator Based on the AC-AC Buck-Boost Converter", "abstract": "The study and implementation of an ac voltage regulator are presented in this paper. Traditionally an ac voltage regulator is made with a transformer tap changer or with an ac-ac converter based on buck topologies; recently, developments in ac-ac converters have made it feasible to implement voltage regulators with other topologies. In this paper an ac voltage regulator based on the ac-ac buck-boost converter is analyzed; the commutation trouble is solved with two inductors. The controller used permits obtaining a good dynamic response for large input voltage variations. The operation and a brief analysis are included. Simulations and experimental results are presented.", "corpus_id": 29584626}
Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the 'uncanny valley' phenomenon.", "corpus_id": 2882261}, "pos": {"sha": "4160afebba8a39116664d32bc18fe14b649ab5c2", "title": "Neural Circuits Involved in the Recognition of Actions Performed by Nonconspecifics: An fMRI Study", "abstract": "Functional magnetic resonance imaging was used to assess the cortical areas active during the observation of mouth actions performed by humans and by individuals belonging to other species (monkey and dog). Two types of actions were presented: biting and oral communicative actions (speech reading, lip-smacking, barking). As a control, static images of the same actions were shown. Observation of biting, regardless of the species of the individual performing the action, determined two activation foci (one rostral and one caudal) in the inferior parietal lobule and an activation of the pars opercularis of the inferior frontal gyrus and the adjacent ventral premotor cortex. The left rostral parietal focus (possibly BA 40) and the left premotor focus were very similar in all three conditions, while the right side foci were stronger during the observation of actions made by conspecifics. The observation of speech reading activated the left pars opercularis of the inferior frontal gyrus, the observation of lip-smacking activated a small focus in the pars opercularis bilaterally, and the observation of barking did not produce any activation in the frontal lobe. Observation of all types of mouth actions induced activation of extrastriate occipital areas. These results suggest that actions made by other individuals may be recognized through different mechanisms. Actions belonging to the motor repertoire of the observer (e.g., biting and speech reading) are mapped on the observer's motor system. Actions that do not belong to this repertoire (e.g., barking) are essentially recognized based on their visual properties. We propose that when the motor representation of the observed action is activated, the observer gains knowledge of the observed action in a personal perspective, while this perspective is lacking when there is no motor activation.", "corpus_id": 34516445}, "neg": {"sha": "71b3eab6d8adae502207ec1b98def9c81faaab46", "title": "Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas.", "abstract": "How do we empathize with others? A mechanism according to which action representation modulates emotional activity may provide an essential functional architecture for empathy. The superior temporal and inferior frontal cortices are critical areas for action representation and are connected to the limbic system via the insula. Thus, the insula may be a critical relay from action representation to emotion. 
We used functional MRI while subjects were either imitating or simply observing emotional facial expressions. Imitation and observation of emotions activated a largely similar network of brain areas. Within this network, there was greater activity during imitation, compared with observation of emotions, in premotor areas including the inferior frontal cortex, as well as in the superior temporal cortex, insula, and amygdala. We understand what others feel by a mechanism of action representation that allows empathy and modulates our emotional content. The insula plays a fundamental role in this mechanism.", "corpus_id": 8263167}}, {"query": {"sha": "a4340386a9332d43e10e81d1bbbfaddca280d83a", "title": "Reading labels of cylinder objects for blind persons", "abstract": "We propose a camera-based assistive framework to help blind persons to read text labels from cylinder objects in their daily life. First, the object is detected from the background or other surrounding objects in the camera view by shaking the object. Then we propose a mosaic model to unwarp the text label on the cylinder object surface and reconstruct the whole label for recognizing text information. This model can handle cylinder objects in any orientations and scales. The text information is then extracted from the unwarped and flatted labels. The recognized text codes are then output to blind users in speech. Experimental results demonstrate the efficiency and effectiveness of the proposed framework from different cylinder objects with complex backgrounds.", "corpus_id": 1795078}, "pos": {"sha": "3d8650c28ae2b0f8d8707265eafe53804f83f416", "title": "Experiments with a New Boosting Algorithm", "abstract": "In an earlier paper [9], we introduced a new \u201cboosting\u201d algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a \u201cpseudo-loss\u201d which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman\u2019s [1] \u201cbagging\u201d method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.", "corpus_id": 1836349}, "neg": {"sha": "60ea87234f39be460fea4e3c5acebf477492b666", "title": "Applicability of RF-based methods for emotion recognition: A survey", "abstract": "Human emotion recognition has attracted a lot of research in recent years. However, conventional methods for sensing human emotions are either expensive or privacy intrusive. In this paper, we explore a connection between emotion recognition and RF-based activity recognition that can lead to a novel ubiquitous emotion sensing technology. 
We discuss the latest literature from both domains, highlight the potential of body movements for accurate emotion detection and focus on how emotion recognition could be done using inexpensive, less privacy intrusive, device-free RF sensing methods. Applications include environment and crowd behaviour tracking in real time, assisted living, health monitoring, or also domestic appliance control. As a result of this survey, we propose RF-based device free recognition for emotion detection based on body movements. However, it requires overcoming challenges, such as accuracy, to outperform classical methods.", "corpus_id": 5985466}}, {"query": {"sha": "def9ff028682c045e178ed27d45d03b603ef574f", "title": "Points of Interest and Visual Dictionaries for Automatic Retinal Lesion Detection", "abstract": "In this paper, we present an algorithm to detect the presence of diabetic retinopathy (DR)-related lesions from fundus images based on a common analytical approach that is capable of identifying both red and bright lesions without requiring specific pre- or postprocessing. Our solution constructs a visual word dictionary representing points of interest (PoIs) located within regions marked by specialists that contain lesions associated with DR and classifies the fundus images based on the presence or absence of these PoIs as normal or DR-related pathology. The novelty of our approach is in locating DR lesions in the optic fundus images using visual words that combines feature information contained within the images in a framework easily extendible to different types of retinal lesions or pathologies and builds a specific projection space for each class of interest (e.g., white lesions such as exudates or normal regions) instead of a common dictionary for all classes. The visual words dictionary was applied to classifying bright and red lesions with classical cross validation and cross dataset validation to indicate the robustness of this approach. We obtained an area under the curve (AUC) of 95.3% for white lesion detection and an AUC of 93.3% for red lesion detection using fivefold cross validation and our own data consisting of 687 images of normal retinae, 245 images with bright lesions, 191 with red lesions, and 109 with signs of both bright and red lesions. For cross dataset analysis, the visual dictionary also achieves compelling results using our images as the training set and the RetiDB and Messidor images as test sets. In this case, the image classification resulted in an AUC of 88.1% when classifying the RetiDB dataset and in an AUC of 89.3% when classifying the Messidor dataset, both cases for bright lesion detection. The results indicate the potential for training with different acquisition images under different setup conditions with a high accuracy of referral based on the presence of either red or bright lesions or both. The robustness of the visual dictionary against image quality (blurring), resolution, and retinal background, makes it a strong candidate for DR screening of large, diverse communities with varying cameras and settings and levels of expertise for image capture.", "corpus_id": 2928327}, "pos": {"sha": "0f0500b8e27037b1f6959a8749cf2f083eb950cc", "title": "Image retrieval: Ideas, influences, and trends of the new age", "abstract": "We have witnessed great interest and a wealth of promise in content-based image retrieval as an emerging technology. 
While the last decade laid the foundation for such promise, it also paved the way for a large number of new techniques and systems, got many new people involved, and triggered stronger association of weakly related fields. In this article, we survey almost 300 key theoretical and empirical contributions in the current decade related to image retrieval and automatic image annotation, and in the process discuss the spawning of related subfields. We also discuss significant challenges involved in the adaptation of existing image retrieval techniques to build systems that can be useful in the real world. In retrospect of what has been achieved so far, we also conjecture what the future may hold for image retrieval research.", "corpus_id": 7060187}, "neg": {"sha": "88a939a06f69c0c02504243a52d61b6fd89e8575", "title": "Human Tracking by a Multi-rotor Drone Using HOG Features and Linear SVM on Images Captured by a Monocular Camera", "abstract": "In recent years, much research on drones (unmanned vehicles) has been carried out. Above all, a drone such as a multi-rotor craft might monitor and track suspicious persons and find sufferers from disasters because it can move freely in the air. In the present research, a method was proposed by which a multi-rotor drone can track a human by processing the two-dimensional images captured by a monocular camera installed on the multi-rotor drone. Furthermore, it can detect a human regardless of the colors and movements of the target by using Histograms of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM). Then, it was shown that the multi-rotor drone could track a human by the proposed method.", "corpus_id": 29235116}}, {"query": {"sha": "ccc4694853013b79f13092ef6ffa9ed7d5757fba", "title": "A New Feature Selection Technique Combined with ELM Feature Space for Text Classification", "abstract": "The aim of text classification is to classify the text documents into a set of pre-defined categories. But the complexity of natural languages, the high dimensional feature space and the low quality of feature selection become the main problems for the text classification process. Hence, in order to strengthen the classification technique, selecting the important features, and consequently removing the unimportant ones, is the need of the day. The paper proposes an approach called Commonality-Rarity Score Computation (CRSC) for selecting top features of a corpus and highlights the importance of the ML-ELM feature space in the domain of text classification. Experimental results on two benchmark datasets signify the prominence of the proposed approach compared to other established approaches.", "corpus_id": 9736114}, "pos": {"sha": "004888621a4e4cee56b6633338a89aa036cf5ae5", "title": "Wrappers for Feature Subset Selection", "abstract": "In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs.
We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes. © 1997 Elsevier Science B.V.", "corpus_id": 15943670}, "neg": {"sha": "fbecb297b7bba052cf3e203ea63a88b2a336266d", "title": "A survey on security control and attack detection for industrial cyber-physical systems", "abstract": "Cyber-physical systems (CPSs), which are an integration of computation, networking, and physical processes, play an increasingly important role in critical infrastructure, government and everyday life. Due to physical constraints, embedded computers and networks may give rise to some additional security vulnerabilities, which results in losses of enormous economic benefits or disorder of social life. As a result, it is of significance to properly investigate the security issue of CPSs to ensure that such systems are operating in a safe manner. This paper, from a control theory perspective, presents an overview of recent advances on security control and attack detection of industrial CPSs. First, the typical system modeling on CPSs is summarized to cater for the requirement of the performance analysis. Then three typical types of cyber-attacks, i.e. denial-of-service attacks, replay attacks, and deception attacks, are disclosed from an engineering perspective. Moreover, robustness, security and resilience as well as stability are discussed to govern the capability of weakening various attacks. The development on attack detection for industrial CPSs is reviewed according to the categories on detection approaches. Furthermore, the security control and state estimation are discussed in detail. Finally, some challenge issues are raised for the future", "corpus_id": 44770106}}, {"query": {"sha": "7731d9fe9e81f474a57520150d00713e894c347a", "title": "Lexical Cohesion and Entailment based Segmentation for Arabic Text Summarization (LCEAS)", "abstract": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles the Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and the text entailment relation is developed. In LCEAS, the text entailment approach is enhanced to suit the Arabic language. Roots and semantic relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment-based segmentation for Arabic text. LCEAS is a single-document summarization system, constructed using an extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against the Essex Arabic Summaries Corpus (EASC) (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems.
Keywords: Text Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.", "corpus_id": 16122372}, "pos": {"sha": "de61b10a7350f28fdcb7549dc15c5e7d00a713bf", "title": "Summarization system evaluation revisited: N-gram graphs", "abstract": "This article presents a novel automatic method (AutoSummENG) for the evaluation of summarization systems, based on comparing the character n-gram graph representation of the extracted summaries and a number of model summaries. The presented approach is language neutral, due to its statistical nature, and appears to hold a level of evaluation performance that matches and even exceeds other contemporary evaluation methods. Within this study, we measure the effectiveness of different representation methods, namely, word and character n-gram graph and histogram, different n-gram neighborhood indication methods, as well as different comparison methods between the supplied representations. A theory for the a priori determination of the methods' parameters, along with supporting experiments, concludes the study to provide a complete alternative to existing methods concerning the automatic summary system evaluation process.", "corpus_id": 15151049}, "neg": {"sha": "72bd127275454602b68a91fac9afe6948ab4c119", "title": "Adversarially Learned One-Class Classifier for Novelty Detection", "abstract": "Novelty detection is the process of identifying the observation(s) that differ in some respect from the training observations (the target class). In reality, the novelty class is often absent during training, poorly sampled or not well defined. Therefore, one-class classifiers can efficiently model such problems. However, due to the unavailability of data from the novelty class, training an end-to-end deep network is a cumbersome task. In this paper, inspired by the success of generative adversarial networks for training deep models in unsupervised and semi-supervised settings, we propose an end-to-end architecture for one-class classification. Our architecture is composed of two deep networks, each of which is trained by competing with the other while collaborating to understand the underlying concept in the target class, and then classify the testing samples. One network works as the novelty detector, while the other supports it by enhancing the inlier samples and distorting the outliers. The intuition is that the separability of the enhanced inliers and distorted outliers is much better than deciding on the original samples. The proposed framework applies to different related applications of anomaly and outlier detection in images and videos. The results on the MNIST and Caltech-256 image datasets, along with the challenging UCSD Ped2 dataset for video anomaly detection, illustrate that our proposed method learns the target class effectively and is superior to the baseline and state-of-the-art methods.", "corpus_id": 3509717}
In this paper we describe BANKSEALER, a decision support system for online banking fraud analysis and investigation. During a training phase, BANKSEALER builds easy-to-understand models for each customer's spending habits, based on past transactions. First, it quantifies the anomaly of each transaction with respect to the customer historical profile. Second, it finds global clusters of customers with similar spending habits. Third, it uses a temporal threshold system that measures the anomaly of the current spending pattern of each customer, with respect to his or her past spending behavior. With this threefold profiling approach, it mitigates the under-training due to the lack of historical data for building well-trained profiles, and the evolution of users' spending habits over time. At runtime, BANKSEALER supports analysts by ranking new transactions that deviate from the learned profiles, with an output that has an easily understandable, immediate statistical meaning. Our evaluation on real data, based on fraud scenarios built in collaboration with domain experts that replicate typical, real-world attacks (e.g., credential stealing, banking trojan activity, and frauds repeated over time), shows that our approach correctly ranks complex frauds. In particular, we measure the effectiveness, the computational resource requirements and the capabilities of BANKSEALER to mitigate the problem of users that performed a low number of transactions. Our system ranks frauds and anomalies with up to 98% detection rate and with a maximum daily computation time of 4 min. Given the good results, a leading Italian bank deployed a version of BANKSEALER in their environment to", "corpus_id": 39094419}, "pos": {"sha": "20a4215a6599b0b6856d7c2fa511e9f1cec8dc89", "title": "Effective detection of sophisticated online banking fraud on extremely imbalanced data", "abstract": "Sophisticated online banking fraud reflects the integrative abuse of resources in social, cyber and physical worlds. Its detection is a typical use case of the broad-based Wisdom Web of Things (W2T) methodology. However, there is very limited information available to distinguish dynamic fraud from genuine customer behavior in such an extremely sparse and imbalanced data environment, which makes the instant and effective detection become more and more important and challenging. In this paper, we propose an effective online banking fraud detection framework that synthesizes relevant resources and incorporates several advanced data mining techniques. By building a contrast vector for each transaction based on its customer\u2019s historical behavior sequence, we profile the differentiating rate of each current transaction against the customer\u2019s behavior preference. A novel algorithm, ContrastMiner, is introduced to efficiently mine contrast patterns and distinguish fraudulent from genuine behavior, followed by an effective pattern selection and risk scoring that combines predictions from different models. 
Results from experiments on large-scale real online banking data demonstrate that our system can achieve substantially higher accuracy and lower alert volume than the latest benchmarking fraud detection system incorporating domain knowledge and traditional fraud detection methods.", "corpus_id": 16429427}, "neg": {"sha": "fea6a2c5b5c0c8193b1d98254830ab9f45f45df2", "title": "UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision", "abstract": "Deep neural network (DNN) accelerators [1-3] have been proposed to accelerate deep learning algorithms from face recognition to emotion recognition in mobile or embedded environments [3]. However, most works accelerate only the convolutional layers (CLs) or fully-connected layers (FCLs), and different DNNs, such as those containing recurrent layers (RLs) (useful for emotion recognition) have not been supported in hardware. A combined CNN-RNN accelerator [1], separately optimizing the computation-dominant CLs, and memory-dominant RLs or FCLs, was reported to increase overall performance, however, the number of processing elements (PEs) for CLs and RLs was limited by their area and consequently, performance was suboptimal in scenarios requiring only CLs or only RLs. Although the PEs for RLs can be reconfigured into PEs for CLs or vice versa, only a partial reconfiguration was possible resulting in marginal performance improvement. Moreover, previous works [1-2] supported a limited set of weight bit precisions, such as either 4b or 8b or 16b. However, lower weight bit-precisions can achieve better throughput and higher energy efficiency, and the optimal bit-precision can be varied according to different accuracy/performance requirements. Therefore, a unified DNN accelerator with fully-variable weight bit-precision is required for the energy-optimal operation of DNNs within a mobile environment.", "corpus_id": 3861747}}, {"query": {"sha": "b4a1ae8553bb4f396deb9a696d39f91c888d5e2b", "title": "Abstractive Text Summarization with Quasi-Recurrent Neural Networks", "abstract": "ive Text Summarization with Quasi-Recurrent Neural Networks Peter Adelson Department of Computer Science Stanford University University padelson@stanford.edu Sho Arora Department of Computer Science Stanford University University shoarora@stanford.edu Jeff Hara Department of Computer Science Stanford University University jhara18@stanford.edu", "corpus_id": 32904774}, "pos": {"sha": "4e88de2930a4435f737c3996287a90ff87b95c59", "title": "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks", "abstract": "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. 
Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "corpus_id": 3033526}, "neg": {"sha": "cbcd9f32b526397f88d18163875d04255e72137f", "title": "Gradient-based learning applied to document recognition", "abstract": null, "corpus_id": 14542261}}, {"query": {"sha": "5fc38a9b1301cd65604e67e1a424a1241f66cecf", "title": "On Stake and Consensus", "abstract": "In 2009, Satoshi Nakamoto introduced the Bitcoin cryptocurrency [Nak09], an online currency system which allowed peer-to-peer transfer of digital tokens. To ensure a consistent view of token ownership, Nakamoto used a public ledger which can be replicated and validated by all network participants. To avoid a single point of failure, this ledger is authenticated using a dynamic membership multiparty signature (DMMS) [BCD+14] consisting of an expensive (but cheaply verifiable) computation done on the entire ledger history every \u201cheartbeat\u201d. Unlike a traditional digital signature, there is no notion of \u201cforgability\u201d for a DMMS. Instead, every DMMS is costly to produce (in Bitcoin, by requiring a large energy expenditure) and rewarded by introduction of new coins on the ledger. Since these coins are only useful if others recognize them, participants are incentivized to extend one \u201ctrue ledger\u201d rather than attempting to create their own version of history. Because Bitcoin\u2019s DMMS is computationally, and therefore thermodynamically [Poe14a], very expensive, alternatives have been proposed which seek to be economically and environmentally more efficient. One popular alternative, proof-of-stake, is frequently proposed as a mechanism for a cheap distributed consensus. As argued by the author [Poe14b] in 2014, this is simply not workable, but nonetheless the idea continues to arise in various forms. Meanwhile, the author\u2019s argument is commonly asserted on various forums to be \u201cdebunked\u201d or \u201cwrong\u201d, despite the author having never been made aware of any counterexamples or mistakes. (He has, of course, been contacted with many, many articles and descriptions of proof-of-stake systems which claim to be this. They are uniformly not.) This, combined with (correct) accusations that the paper is obtuse and unreadable, demonstrates that its exposition leaves much to be desired. Further, there has been significant progress in scientific understanding of Bitcoin\u2019s consensus [MLJ14, BMC+15] which was not available when the original paper was written. This paper aims to be an updated version of the author\u2019s original paper, which gives more explication on the problem Bitcoin solves, why it makes the design decisions that it does, and why proof-of-stake and similar mechanisms are fundamentally unable to produce a distributed consensus within Bitcoin\u2019s trust model.", "corpus_id": 16197025}, "pos": {"sha": "35fe18606529d82ce3fc90961dd6813c92713b3c", "title": "SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies", "abstract": "Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design.
Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison.", "corpus_id": 549362}, "neg": {"sha": "ce97a8e96582fc260b6878ecbbf62e58f73b9d74", "title": "Towards Compact and Fast Neural Machine Translation Using a Combined Method", "abstract": "Neural Machine Translation (NMT) lays an intensive burden on computation and memory cost. It is a challenge to deploy NMT models on devices with limited computation and memory budgets. This paper presents a four-stage pipeline to compress the model and speed up the decoding for NMT. Our method first introduces a compact architecture based on a convolutional encoder and weight-shared embeddings. Then weight pruning is applied to obtain a sparse model. Next, we propose a fast sequence interpolation approach which enables the greedy decoding to achieve performance on par with the beam search. Hence, the time-consuming beam search can be replaced by simple greedy decoding. Finally, vocabulary selection is used to reduce the computation of the softmax layer. Our final model achieves 10\u00d7 speedup, 17\u00d7 parameter reduction, <35MB storage size and comparable performance compared to the baseline model.", "corpus_id": 5793818}}, {"query": {"sha": "25eeb5baaf7c9d1d5589e09841c15675baaca414", "title": "Effects of Conversational Agents on Human Communication in Thought-Evoking Multi-Party Dialogues", "abstract": "This paper presents an experimental study that analyzes how conversational agents activate human communication in thought-evoking multi-party dialogues between multi-users and multi-agents. A thought-evoking dialogue, which is a kind of interaction in which agents act on user willingness to provoke user thinking, has the potential to stimulate multi-party interaction. In this paper, we focus on quiz-style multi-party dialogues between two users and two agents as an example of a thought-evoking multi-party dialogue. The experiment results showed that the presence of a peer agent significantly improved user satisfaction and increased the number of user utterances. We also found that agent empathic expressions significantly improved user satisfaction, raised user ratings of a peer agent, and increased user utterances.
Our findings will be useful for stimulating multi-party communication in various applications such as educational agents and community facilitators.", "corpus_id": 15963241}, "pos": {"sha": "d7695e53422bfd2cb8d7e29041846d2408c7bb64", "title": "To feel or not to feel: The role of affect in human-computer interaction", "abstract": "The past decade has witnessed an unprecedented growth in user interface and human\u2013computer interaction (HCI) technologies and methods. The synergy of technological and methodological progress on the one hand, and changing user expectations on the other, are contributing to a redefinition of the requirements for effective and desirable human\u2013computer interaction. A key component of these emerging requirements, and of effective HCI in general, is the ability of these emerging systems to address user affect. The objective of this special issue is to provide an introduction to the emerging research area of affective HCI, some of the available methods and techniques, and representative systems and applications. © 2003 Elsevier Science Ltd. All rights reserved.", "corpus_id": 87779}, "neg": {"sha": "12e411cabad323e7d823480f7f6fc643b9713858", "title": "Cardiorespiratory fitness in young adulthood and the development of cardiovascular disease risk factors.", "abstract": "CONTEXT\nLow cardiorespiratory fitness is an established risk factor for cardiovascular and total mortality; however, mechanisms responsible for these associations are uncertain.\n\n\nOBJECTIVE\nTo test whether low fitness, estimated by short duration on a maximal treadmill test, predicted the development of cardiovascular disease risk factors and whether improving fitness (increase in treadmill test duration between examinations) was associated with risk reduction.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nPopulation-based longitudinal cohort study of men and women 18 to 30 years of age in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants who completed the treadmill examination according to the Balke protocol at baseline were followed up from 1985-1986 to 2000-2001. A subset of participants (n = 2478) repeated the exercise test in 1992-1993.\n\n\nMAIN OUTCOME MEASURES\nIncident type 2 diabetes, hypertension, the metabolic syndrome (defined according to National Cholesterol Education Program Adult Treatment Panel III), and hypercholesterolemia (low-density lipoprotein cholesterol > or =160 mg/dL [4.14 mmol/L]).\n\n\nRESULTS\nDuring the 15-year study period, the rates of incident diabetes, hypertension, the metabolic syndrome, and hypercholesterolemia were 2.8, 13.0, 10.2, and 11.7 per 1000 person-years, respectively. After adjustment for age, race, sex, smoking, and family history of diabetes, hypertension, or premature myocardial infarction, participants with low fitness (<20th percentile) were 3- to 6-fold more likely to develop diabetes, hypertension, and the metabolic syndrome than participants with high fitness (> or =60th percentile), all P<.001. Adjusting for baseline body mass index diminished the strength of these associations to 2-fold (all P<.001). In contrast, the association between low fitness and hypercholesterolemia was modest (hazard ratio [HR], 1.4; 95% confidence interval [CI], 1.1-1.7; P =.02) and attenuated to marginal significance after body mass index adjustment (P =.13).
Improved fitness over 7 years was associated with a reduced risk of developing diabetes (HR, 0.4; 95% CI, 0.2-1.0; P =.04) and the metabolic syndrome (HR, 0.5; 95% CI, 0.3-0.7; P<.001), but the strength and significance of these associations was reduced after accounting for changes in weight.\n\n\nCONCLUSIONS\nPoor fitness in young adults is associated with the development of cardiovascular disease risk factors. These associations involve obesity and may be modified by improving fitness.", "corpus_id": 10203289}}, {"query": {"sha": "1594162a4dcd00d3c98fc2986b22938a35aa8336", "title": "On Statistical Model Checking of Stochastic Systems", "abstract": "Statistical methods to model check stochastic systems have been, thus far, developed only for a sublogic of continuous stochastic logic (CSL) that does not have steady state operators and unbounded until formulas. In this paper, we present a statistical model checking algorithm that also verifies CSL formulas with unbounded untils. The algorithm is based on Monte Carlo simulation of the model and hypothesis testing of the samples, as opposed to sequential hypothesis testing. The use of statistical hypothesis testing allows us to exploit the inherent parallelism in this approach. We have implemented the algorithm in a tool called VESTA, and found it to be effective in verifying several examples.", "corpus_id": 460008}, "pos": {"sha": "21ecd77531ff53d48ff519284e846a79308ef4f2", "title": "PRISM: Probabilistic Symbolic Model Checker", "abstract": "In this paper we describe PRISM, a tool being developed at the University of Birmingham for the analysis of probabilistic systems. PRISM supports three probabilistic models: discrete-time Markov chains, continuous-time Markov chains and Markov decision processes. Analysis is performed through model checking such systems against specifications written in the probabilistic temporal logics PCTL and CSL. The tool features three model checking engines: one symbolic, using BDDs (binary decision diagrams) and MTBDDs (multi-terminal BDDs); one based on sparse matrices; and one which combines both symbolic and sparse matrix methods. PRISM has been successfully used to analyse probabilistic termination, performance, dependability and quality of service properties for a range of systems, including randomized distributed algorithms [2], polling systems [22], workstation clusters [18] and wireless cell communication [17].", "corpus_id": 13202755}, "neg": {"sha": "e90dd6deeda97f3869bacc123fcdd78b433540cb", "title": "The Death Penalty", "abstract": "a Criminal Justice Bill. Sir Hartley Shawcross, Attorney-General, moved that the House should disagree with the Lords' amendment to delete Clause I (suspension of the death penalty for murder). He urged the merits of the Government's proposal to recognize two categories of murder: the capital and non-capital. He thought that juries would have no difficulty in deciding whether a murder came into one of the five classes for which capital punishment was reserved: (1) Those committed in connection with robbery, burglary, or house breaking (gangster offences); wounding by three or more persons acting together; offences committed with explosive or destructive substances; rape, indecent assault and sodomy; (2) murder of a police officer, or a civilian who was assisting a police officer in the execution of the law; (3) poisoning when the poison had been systematically administered; (4) the murder of a prison officer; and (5) second murders. Mr.
Winston Churchill said that the Government's clause would weaken the jury's sense of responsibility and introduce distinctions that would puzzle and baffle them, while its inconsistencies and absurdities would tend to bring the law into disrepute. The most frequent types of murder, such as wounding, stabbing, and drowning, and the most wicked murders, would not carry the death penalty. Sir John Anderson thought the new clause was unsatisfactory because it sought to substitute a rigid and elaborate statutory code for the existing flexible, well-tried system. He thought that the words 'express malice' would give rise to serious difficulty. The only possible and sensible course for those who believed that the capital sentence was", "corpus_id": 13098223}}, {"query": {"sha": "d0365fa63fd99148c4658d2d2dcd02b4f6702f2c", "title": "Practical chemical sensors from chemically derived graphene.", "abstract": "We report the development of useful chemical sensors from chemically converted graphene dispersions using spin coating to create single-layer films on interdigitated electrode arrays. Dispersions of graphene in anhydrous hydrazine are formed from graphite oxide. Preliminary results are presented on the detection of NO(2), NH(3), and 2,4-dinitrotoluene using this simple and scalable fabrication method for practical devices. Current versus voltage curves are linear and ohmic in all cases studied, independent of metal electrode or presence of analytes. The sensor response is consistent with a charge transfer mechanism between the analyte and graphene with a limited role of the electrical contacts. A micro hot plate sensor substrate is also used to monitor the temperature dependence of the response to nitrogen dioxide. The results are discussed in light of recent literature on carbon nanotube and graphene sensors.", "corpus_id": 31235690}, "pos": {"sha": "3b705556d6aa4947c6b028dd29f0c084da96bf44", "title": "Electric field effect in atomically thin carbon films.", "abstract": "We describe monocrystalline graphitic films, which are a few atoms thick but are nonetheless stable under ambient conditions, metallic, and of remarkably high quality. The films are found to be a two-dimensional semimetal with a tiny overlap between valence and conductance bands, and they exhibit a strong ambipolar electric field effect such that electrons and holes in concentrations up to 10(13) per square centimeter and with room-temperature mobilities of approximately 10,000 square centimeters per volt-second can be induced by applying gate voltage.", "corpus_id": 5729649}, "neg": {"sha": "76a09faba8dc77a06929d8d4a748b99b60a735d1", "title": "A fully automated greedy square jigsaw puzzle solver", "abstract": "In the square jigsaw puzzle problem one is required to reconstruct the complete image from a set of non-overlapping, unordered, square puzzle parts. Here we propose a fully automatic solver for this problem, where unlike some previous work, it assumes no clues regarding parts' location and requires no prior knowledge about the original image or its simplified (e.g., lower resolution) versions. To do so, we introduce a greedy solver which combines both informed piece placement and rearrangement of puzzle segments to find the final solution. Among our other contributions are new compatibility metrics which better predict the chances of two given parts to be neighbors, and a novel estimation measure which evaluates the quality of puzzle solutions without the need for ground-truth information.
Incorporating these contributions, our approach facilitates solutions that surpass state-of-the-art solvers on puzzles of size larger than ever attempted before.", "corpus_id": 8290588}}, {"query": {"sha": "468d43734488e0e29c3c11f2c15d9b1fb6f1adc4", "title": "CoBots: Robust Symbiotic Autonomous Mobile Service Robots", "abstract": "We research and develop autonomous mobile service robots as Collaborative Robots, i.e., CoBots. For the last three years, our four CoBots have autonomously navigated in our multi-floor office buildings for more than 1,000km, as the result of the integration of multiple perceptual, cognitive, and actuations representations and algorithms. In this paper, we identify a few core aspects of our CoBots underlying their robust functionality. The reliable mobility in the varying indoor environments comes from a novel episodic non-Markov localization. Service tasks requested by users are the input to a scheduler that can consider different types of constraints, including transfers among multiple robots. With symbiotic autonomy, the CoBots proactively seek external sources of help to fill-in for their inevitable occasional limitations. We present sampled results from a deployment and conclude with a brief review of other features of our service robots.", "corpus_id": 9653894}, "pos": {"sha": "550df09a22d0e99d04357dfce2b0baf6f3163aec", "title": "An effective personal mobile robot agent through symbiotic human-robot interaction", "abstract": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor\u2019s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "corpus_id": 17475008}, "neg": {"sha": "9ba344a934fd7eaf2b6361fdec927b36db8a9944", "title": "IGBT gate-drive with PCB Rogowski coil for improved short circuit detection and current turn-off capability", "abstract": "In this paper, a gate drive using gate boosting and double-stage turn off including voltage clamping as well as with detection of overcurrent and a too high di/dt during turn on is discussed in detail. Besides the gate drive, also the design of a PCB-Rogowski coil, which is used for measuring currents and for di/dt detection, is explained and different designs are compared. 
The presented coil has a bandwidth of more than 28 MHz and a propagation delay of 11 ns.", "corpus_id": 14414479}}, {"query": {"sha": "3c4c15a6597223887e0a5384237fd2a89b176e4a", "title": "Optimized Product Quantization", "abstract": "Product quantization (PQ) is an effective vector quantization method. A product quantizer can generate an exponentially large codebook at very low memory/time cost. The essence of PQ is to decompose the high-dimensional vector space into the Cartesian product of subspaces and then quantize these subspaces separately. The optimal space decomposition is important for the PQ performance, but still remains an unaddressed issue. In this paper, we optimize PQ by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel solutions to this challenging optimization problem. The first solution iteratively solves two simpler sub-problems. The second solution is based on a Gaussian assumption and provides theoretical analysis of the optimality. We evaluate our optimized product quantizers in three applications: (i) compact encoding for exhaustive ranking [1], (ii) building inverted multi-indexing for non-exhaustive search [2], and (iii) compacting image representations for image retrieval [3]. In all applications our optimized product quantizers outperform existing solutions.", "corpus_id": 6033212}, "pos": {"sha": "a718b85520bea702533ca9a5954c33576fd162b0", "title": "SOME METHODS FOR CLASSIFICATION AND ANALYSIS OF MULTIVARIATE OBSERVATIONS", "abstract": "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, ..., S_k} is a partition of E_N, and u_i, i = 1, 2, ..., k, is the conditional mean of p over the set S_i, then W^2(S) = \u2211_{i=1}^{k} \u222b_{S_i} |z - u_i|^2 dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4.
The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special", "corpus_id": 6278891}, "neg": {"sha": "561fbe35d3d0262f95a16b0e935678286c595449", "title": "Priming for health: gut microbiota acquired in early life regulates physiology, brain and behaviour.", "abstract": "UNLABELLED\nThe infant gut microbiome is dynamic, and radical shifts in composition occur during the first 3\u00a0years of life. Disruption of these developmental patterns, and the impact of the microbial composition of our gut on brain and behaviour, has attracted much recent attention. Integrating these observations is an important new research frontier.\n\n\nCONCLUSION\nEarly-life perturbations of the developing gut microbiota can impact on the central nervous system and potentially lead to adverse mental health outcomes.", "corpus_id": 35225394}}, {"query": {"sha": "e559e310a854b9db0efcd7cc4a313c94bfe41a78", "title": "Stationary and moving targets detection on FMCW radar using GNU radio-based software defined radio", "abstract": "This paper discusses the implementation of GNU radio-based software defined radio (SDR) for designing a frequency modulated continuous wave (FMCW) radar to detect stationary and moving targets. The use of SDR system in which its components are implemented by means of software is to reduce cost and complexity in the design and implementation. Whilst the signal processing of FMCW radar is carried out using Matlab\u00ae with triangular linear frequency modulation (LFM) waveform to obtain the target distance and the target relative speed for stationary and moving target, respectively. From the result, it is shown that the radar is successfully implemented using GNU radio-based SDR with the capability in distance target detection of 14.79km for a moving target away from the radar with the relative speed of 50m/s.", "corpus_id": 17458149}, "pos": {"sha": "3d213d226eec14c56a679f3c1307742df0048f87", "title": "Accuracy analysis of FM chirp in GNU radio-based FMCW radar for multiple target detection", "abstract": "In this paper, different waveforms of frequency modulation (FM) chirp are investigated to analyze the accuracy of GNU radio-based frequency-modulated continuous wave (FMCW) radar for multiple target detection. The 3 waveforms used for the investigation as FM chirp are sinusoidal, triangular, and sawtooth waveforms. The analysis is performed by use of GNU radio referred to as an open source software-defined-radio project. There are 2 methods employed for the detection process; the first is real-condition simulation method and the second is USRP-based implementation method. In the analysis, some targets in different ranges are characterized using both methods to determine the accuracy of target range. By using FFT (Fast Fourier Transform) function from Matlab\u00ae to obtain the result in frequency domain, both methods show that the triangular waveform has the highest average accuracy, i.e. 95.73% for the 1st method and 99.75% for the 2nd method. The sawtooth waveform has the lower average accuracy than the triangular, i.e. 94.93% for the 1st method and 98.33% for the 2nd method, whilst the sinusoidal waveform has the lowest average accuracy, i.e. 92.60% for the 1st method and 98.59% for the 2nd method.
From the result, it is shown that the USRP-based implementation method has better average accuracy than the real-condition simulation method.", "corpus_id": 18248829}, "neg": {"sha": "32658c8c13f0b376399b16c1a15933ab13fcda15", "title": "An Ultra-Wideband 80 GHz FMCW Radar System Using a SiGe Bipolar Transceiver Chip Stabilized by a Fractional-N PLL Synthesizer", "abstract": "A radar system with an ultra-wide FMCW ramp bandwidth of 25.6 GHz (\u224832%) around a center frequency of 80 GHz is presented. The system is based on a monostatic fully integrated SiGe transceiver chip, which is stabilized using conventional fractional-N PLL chips at a reference frequency of 100 MHz. The achieved in-loop phase noise is \u2248 -88 dBc/Hz (10 kHz offset frequency) for the center frequency and below \u2248-80 dBc/Hz in the wide frequency band of 25.6 GHz for all offset frequencies >1 kHz. The ultra-wide PLL-stabilization was achieved using a reverse frequency position mixer in the PLL (offset-PLL) resulting in a compensation of the variation of the oscillators tuning sensitivity with the variation of the N-divider in the PLL. The output power of the transceiver chip, as well as of the mm-wave module (containing a waveguide transition), is sufficiently flat versus the output frequency (variation <3 dB). In radar measurements using the full bandwidth an ultra-high spatial resolution of 7.12 mm was achieved. The standard deviation between repeated measurements of the same target is 0.36 \u03bcm.", "corpus_id": 15180486}}, {"query": {"sha": "30aa800df7a4dea69f95feeaf16e885e50d8a49f", "title": "Detection of complex video events through visual rhythm", "abstract": "The recognition of complex events in videos has currently several important applications, particularly due to the wide availability of digital cameras in environments such as airports, train and bus stations, shopping centers, stadiums, hospitals, schools, buildings, roads, among others. Advances in digital technology have enhanced the capabilities for detection of video events through the development of devices with high resolution, small physical size, and high sampling rates. This work presents and evaluates the use of feature descriptors extracted from visual rhythms of video sequences in three computer vision problems: abnormal event detection, human action classification, and gesture recognition. Experiments conducted on well-known public datasets demonstrate that the method produces promising results.", "corpus_id": 14746978}, "pos": {"sha": "0bad381b84f48b28abc1a98f05993c8eb5be747d", "title": "Anomaly detection: A survey", "abstract": "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain.
For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "corpus_id": 207172599}, "neg": {"sha": "67522b1dde6b2321353e0f52323aa43b340abaf0", "title": "The Specific Role of Relationship Life Events in the Onset of Depression during Pregnancy and the Postpartum", "abstract": "BACKGROUND\nThe precipitating role of life events in the onset of depression is well-established. The present study sought to examine whether life events hypothesised to be personally salient would be more strongly associated with depression than other life events. In a sample of women making the first transition to parenthood, we hypothesised that negative events related to the partner relationship would be particularly salient and thus more strongly predictive of depression than other events.\n\n\nMETHODS\nA community-based sample of 316 first-time mothers stratified by psychosocial risk completed interviews at 32 weeks gestation and 29 weeks postpartum to assess dated occurrence of life events and depression onsets from conception to 29 weeks postpartum. Complete data was available from 273 (86.4%). Cox proportional hazards regression was used to examine risk for onset of depression in the 6 months following a relationship event versus other events, after accounting for past history of depression and other potential confounders.\n\n\nRESULTS\n52 women (19.0%) experienced an onset of depression between conception and 6 months postpartum. Both relationship events (Hazard Ratio = 2.1, p = .001) and other life events (Hazard Ratio = 1.3, p = .020) were associated with increased risk for depression onset; however, relationship events showed a significantly greater risk for depression than did other life events (p = .044).\n\n\nCONCLUSIONS\nThe results are consistent with the hypothesis that personally salient events are more predictive of depression onset than other events. Further, they indicate the clinical significance of events related to the partner relationship during pregnancy and the postpartum.", "corpus_id": 4861866}}, {"query": {"sha": "fcb3a54d3a6b9b339eb6f8583f84cc10efae8986", "title": "Firefly algorithm with neighborhood attraction", "abstract": "Firefly algorithm (FA) is a new optimization technique based on swarm intelligence. It simulates the social behavior of fireflies. The search pattern of FA is determined by the attractions among fireflies, whereby a less bright firefly moves toward a brighter firefly. In FA, each firefly can be attracted by all other brighter fireflies in the population. However, too many attractions may result in oscillations during the search process and high computational time complexity. To overcome these problems, we propose a new FA variant called FA with neighborhood attraction (NaFA). 
In NaFA, each firefly is attracted by other brighter fireflies selected from a predefined neighborhood rather than those from the entire population. Experiments are conducted using several well-known benchmark functions. The results show that the proposed strategy can efficiently improve the accuracy of solutions and reduce the computational time complexity.", "corpus_id": 4600715}, "pos": {"sha": "10fa778e675d0b6951a12b3d8160420317950608", "title": "Ant system: optimization by a colony of cooperating agents", "abstract": "An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally we discuss the salient characteristics (global data structure revision, distributed communication and probabilistic transitions) of the AS.", "corpus_id": 135561}, "neg": {"sha": "2f3f866dd8e4a187c033e55fc8d31b6be77afd2d", "title": "Identifying Synonyms among Distributionally Similar Words", "abstract": "There have been many proposals to compute similarities between words based on their distributions in contexts. However, these approaches do not distinguish between synonyms and antonyms. We present two methods for identifying synonyms among distributionally similar words.", "corpus_id": 2220173}}, {"query": {"sha": "03cc44aa3106062e3692144c4f07c58c606dbd5c", "title": "Bridging Paxos and Blockchain Consensus", "abstract": "The distributed consensus problem has been extensively studied in the last four decades as an important problem in distributed systems. Recent advances in decentralized consensus and blockchain technology, however, arose from a disparate model and gave rise to disjoint knowledge-base and techniques than those in the classical consensus research. In this paper we make a case for bridging these two seemingly disparate approaches in order to help transfer the lessons learned from the classical distributed consensus world to the blockchain world and vice versa. To this end, we draw parallels between blockchain consensus and a classical consensus protocol, Paxos. We also survey prominent approaches to improving the throughput and providing instant irreversibility to blockchain consensus and show analogies to the techniques from classical consensus protocols.
Finally, inspired by the central role formal methods played in the success of classical consensus research, we suggest more extensive use of formal methods in modeling the blockchains and smart contracts.", "corpus_id": 46997117}, "pos": {"sha": "5acc6e0d4011d81419b81d7cd383bed48c4cb22c", "title": "Flexible Paxos: Quorum Intersection Revisited", "abstract": "Distributed consensus is integral to modern distributed systems. The widely adopted Paxos algorithm uses two phases, each requiring majority agreement, to reliably reach consensus. In this paper, we demonstrate that Paxos, which lies at the foundation of many production systems, is conservative. Specifically, we observe that each of the phases of Paxos may use non-intersecting quorums. Majority quorums are not necessary as intersection is required only across phases. Using this weakening of the requirements made in the original formulation, we propose Flexible Paxos, which generalizes over the Paxos algorithm to provide flexible quorums. We show that Flexible Paxos is safe, efficient and easy to utilize in existing distributed systems. We discuss far-reaching implications of this result. For example, improved availability results from reducing the size of second phase quorums by one when the system size is even, while keeping majority quorums in the first phase. Another example is improved throughput of replication by using much smaller phase 2 quorums, while increasing the leader election (phase 1) quorums. Finally, non-intersecting quorums in either first or second phases may enhance the efficiency of both. 1998 ACM Subject Classification C.2.4 Distributed Systems", "corpus_id": 16679103}, "neg": {"sha": "130ce1bcd496a7b9192f5f53dd8d7ef626e40675", "title": "Asynchronous Consensus and Broadcast Protocols", "abstract": "A consensus protocol enables a system of n asynchronous processes, some of which are faulty, to reach agreement. There are two kinds of faulty processes: fail-stop processes that can only die and malicious processes that can also send false messages. The class of asynchronous systems with fair schedulers is defined, and consensus protocols that terminate with probability 1 for these systems are investigated. With fail-stop processes, it is shown that \u2308(n + 1)/2\u2309 correct processes are necessary and sufficient to reach agreement. In the malicious case, it is shown that \u2308(2n + 1)/3\u2309 correct processes are necessary and sufficient to reach agreement. This is contrasted with an earlier result, stating that there is no consensus protocol for the fail-stop case that always terminates within a bounded number of steps, even if only one process can fail. The possibility of reliable broadcast (Byzantine Agreement) in asynchronous systems is also investigated. Asynchronous Byzantine Agreement is defined, and it is shown that \u2308(2n + 1)/3\u2309 correct processes are necessary and sufficient to achieve it.", "corpus_id": 11234976}}, {"query": {"sha": "b34545c8948b5089867ef1eec3fb5522cef23c90", "title": "Simplifying Particle Swarm Optimization", "abstract": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems.
This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.", "corpus_id": 12065877}, "pos": {"sha": "2375f6d71ce85a9ff457825e192c36045e994bdd", "title": "Multilayer feedforward networks are universal approximators", "abstract": null, "corpus_id": 2757547}, "neg": {"sha": "49939312f2778324030f0050f112a37194b126bb", "title": "A Comparative Study of Methods for Transductive Transfer Learning", "abstract": "The problem of transfer learning, where information gained in one learning task is used to improve performance in another related task, is an important new area of research. While previous work has studied the supervised version of this problem, we study the more challenging case of unsupervised transductive transfer learning, where no labeled data from the target domain are available at training. We describe some current state-of-the-art inductive and transductive approaches and then adapt these models to the problem of transfer learning for protein name extraction. In the process, we introduce a novel maximum entropy based technique, iterative feature transformation (IFT), and show that it achieves comparable performance with state-of-the-art transductive SVMs. We also show how simple relaxations, such as providing additional information like the proportion of positive examples in the test data, can significantly improve the performance of some of the transductive transfer learners.", "corpus_id": 16151238}}, {"query": {"sha": "ecc2ea05877d720b725fb89bc3b0586a51cabdc7", "title": "Object Recognition in 3D Point Clouds Using Web Data and Domain Adaptation", "abstract": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google\u2019s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.", "corpus_id": 14333810}, "pos": {"sha": "1db8a0b13b9561b3a5ed1c5962989199982de470", "title": "The Princeton Shape Benchmark", "abstract": "In recent years, many shape representations and geometric algorithms have been proposed for matching 3D shapes. 
Usually, each algorithm is tested on a different (small) database of 3D models, and thus no direct comparison is available for competing methods. We describe the Princeton Shape Benchmark (PSB), a publicly available database of polygonal models collected from the World Wide Web and a suite of tools for comparing shape matching and classification algorithms. One feature of the benchmark is that it provides multiple semantic labels for each 3D model. For instance, it includes one classification of the 3D models based on function, another that considers function and form, and others based on how the object was constructed (e.g., man-made versus natural objects). We find that experiments with these classifications can expose different properties of shape-based retrieval algorithms. For example, out of 12 shape descriptors tested, extended Gaussian images by B. Horn (1984) performed best for distinguishing man-made from natural objects, while they performed among the worst for distinguishing specific object types. Based on experiments with several different shape descriptors, we conclude that no single descriptor is best for all classifications, and thus the main contribution of this paper is to provide a framework to determine the conditions under which each descriptor performs best.", "corpus_id": 7156990}, "neg": {"sha": "5de7e3fb01812370ad558ab64c24eab37ded69a3", "title": "Knowledge Representation Concepts for Automated SLA Management", "abstract": "Outsourcing of complex IT infrastructure to IT service providers has increased substantially during the past years. IT service providers must be able to fulfil their service-quality commitments based upon pre-defined Service Level Agreements (SLAs) with the service customer. They need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which needs new levels of flexibility and automation not available with the current technology. The complexity of contractual logic in SLAs requires new forms of knowledge representation to automatically draw inferences and execute contractual agreements. A logic-based approach provides several advantages including automated rule chaining allowing for compact knowledge representation as well as flexibility to adapt to rapidly changing business requirements. We suggest adequate logical formalisms for representation and enforcement of SLA rules and describe a proof-of-concept implementation. The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management and presents results of experiments to demonstrate flexibility and scalability of the approach.", "corpus_id": 60308}}, {"query": {"sha": "9022dcc55477c54157328828ab7e037d655ba2fb", "title": "Temporal frequency probing for 5D transient analysis of global light transport", "abstract": "We analyze light propagation in an unknown scene using projectors and cameras that operate at transient timescales. In this new photography regime, the projector emits a spatio-temporal 3D signal and the camera receives a transformed version of it, determined by the set of all light transport paths through the scene and the time delays they induce. The underlying 3D-to-3D transformation encodes scene geometry and global transport in great detail, but individual transport components (e.g., direct reflections, inter-reflections, caustics, etc.) 
are coupled nontrivially in both space and time.\n To overcome this complexity, we observe that transient light transport is always separable in the temporal frequency domain. This makes it possible to analyze transient transport one temporal frequency at a time by trivially adapting techniques from conventional projector-to-camera transport. We use this idea in a prototype that offers three never-seen-before abilities: (1) acquiring time-of-flight depth images that are robust to general indirect transport, such as interreflections and caustics; (2) distinguishing between direct views of objects and their mirror reflection; and (3) using a photonic mixer device to capture sharp, evolving wavefronts of \"light-in-flight\".", "corpus_id": 7145007}, "pos": {"sha": "7459544af26cbe12974ca22ff31ed17eb6469b1b", "title": "Low-budget transient imaging using photonic mixer devices", "abstract": "Transient imaging is an exciting new imaging modality that can be used to understand light propagation in complex environments, and to capture and analyze scene properties such as the shape of hidden objects or the reflectance properties of surfaces.\n Unfortunately, research in transient imaging has so far been hindered by the high cost of the required instrumentation, as well as the fragility and difficulty to operate and calibrate devices such as femtosecond lasers and streak cameras.\n In this paper, we explore the use of photonic mixer devices (PMD), commonly used in inexpensive time-of-flight cameras, as alternative instrumentation for transient imaging. We obtain a sequence of differently modulated images with a PMD sensor, impose a model for local light/object interaction, and use an optimization procedure to infer transient images given the measurements and model. The resulting method produces transient images at a cost several orders of magnitude below existing methods, while simultaneously simplifying and speeding up the capture process.", "corpus_id": 7190905}, "neg": {"sha": "31225793dee1ff82544d08cfad2eeba555fdda34", "title": "Protecting browser state from web privacy attacks", "abstract": "Through a variety of means, including a range of browser cache methods and inspecting the color of a visited hyperlink, client-side browser state can be exploited to track users against their wishes. This tracking is possible because persistent, client-side browser state is not properly partitioned on per-site basis in current browsers. We address this problem by refining the general notion of a \"same-origin\" policy and implementing two browser extensions that enforce this policy on the browser cache and visited links. We also analyze various degrees of cooperation between sites to track users, and show that even if long-term browser state is properly partitioned, it is still possible for sites to use modern web features to bounce users between sites and invisibly engage in cross-domain tracking of their visitors. Cooperative privacy attacks are an unavoidable consequence of all persistent browser state that affects the behavior of the browser, and disabling or frequently expiring this state is the only way to achieve true privacy against colluding parties.", "corpus_id": 2870926}}, {"query": {"sha": "e293a31260cf20996d12d14b8f29a9d4d99c4642", "title": "LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation", "abstract": "We present LR-GAN: an adversarial image generation model which takes scene structure and context into account.
Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end-to-end manner with gradient descent methods. The experiments demonstrate that LR-GAN can generate more natural images with objects that are more human recognizable than DCGAN. The code is available at https://github.com/jwyang/lr-gan.pytorch.", "corpus_id": 1840346}, "pos": {"sha": "5e6f62b05fb96ce4cd78bdeabe9d8a6f5daf988b", "title": "Generating Images Part by Part with Composite Generative Adversarial Networks", "abstract": "\u2022 Images are composed of several different objects forming a hierarchical structure with various styles and shapes. \u2022 Deep learning models are used to implicitly disentangle complex underlying patterns of data, forming distributed feature representations. \u2022 Generative adversarial networks (GAN) are successful unsupervised learning models that can generate samples of natural images generalized from the training data. \u2022 It is proven that if the GAN has enough capacity, data distribution formed by GAN can converge to the distribution over real data", "corpus_id": 15807234}, "neg": {"sha": "28b718a4ca0a034c3f11e218e3a737cbf5373ab9", "title": "Global self-esteem across the life span.", "abstract": "This study provides a comprehensive picture of age differences in self-esteem from age 9 to 90 years using cross-sectional data collected from 326,641 individuals over the Internet. Self-esteem levels were high in childhood, dropped during adolescence, rose gradually throughout adulthood, and declined sharply in old age. This trajectory generally held across gender, socioeconomic status, ethnicity, and nationality (U.S. citizens vs. non-U.S. citizens). Overall, these findings support previous research, help clarify inconsistencies in the literature, and document new trends that require further investigation.", "corpus_id": 3197480}}, {"query": {"sha": "2ab464a74e0bff7ab6e84e6c7d04702548a655de", "title": "The kernel recursive least-squares algorithm", "abstract": "We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-squares problems that are frequently encountered in signal processing applications. In order to regularize solutions and keep the complexity of the algorithm bounded, we use a sequential sparsification process that admits into the kernel representation a new input sample only if its feature space image cannot be sufficiently well approximated by combining the images of previously admitted samples. This sparsification procedure allows the algorithm to operate online, often in real time. 
We analyze the behavior of the algorithm, compare its scaling properties to those of support vector machines, and demonstrate its utility in solving two signal processing problems: time-series prediction and channel equalization.", "corpus_id": 10220028}, "pos": {"sha": "40e5a40ae66d44e6c00d562d068d35db6922715d", "title": "Improving the Accuracy and Speed of Support Vector Machines", "abstract": "Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the \"virtual support vector\" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images of 1.4% to 1.0%. The method for improving the speed (the \"reduced set\" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine, and which has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced set method is applicable to any support vector machine.", "corpus_id": 9434141}, "neg": {"sha": "4f45fb2c02857afadf6d74b2591056fa5ce6a07f", "title": "The Iseult/Inumac Whole Body 11.7 T MRI Magnet Design", "abstract": "A neuroscience research center with very high field MRI equipments has been opened in November 2006 by the CEA life science division. One of the imaging systems will require a 11.75 T magnet with a 900 mm warm bore. Regarding the large aperture and field strength, this magnet is a real challenge as compared to the largest MRI systems ever built, and is then developed within an ambitious R&D program, Iseult, focus on high field MRI. The conservative MRI magnet design principles are not readily applicable and other concepts taken from high energy physics or fusion experiments, namely the Tore Supra tokamak magnet system, will be used. The coil will thus be made of a niobium-titanium conductor cooled by a He II bath at 1.8 K, permanently connected to a cryoplant. Due to the high level of stored energy, about 340 MJ, and a relatively high nominal current, about 1500 A, the magnet will be operated in a non-persistent mode with a conveniently stabilized power supply. In order to take advantage of superfluid helium properties and regarding the high electromagnetic stresses on the conductors, the winding will be made of wetted double pancakes meeting the Stekly criterion for cryostability. The magnet will be actively shielded to fulfill the specifications regarding the stray field.", "corpus_id": 26109658}}, {"query": {"sha": "47692750125d4a2e45074b755b2d462c080095f0", "title": "Leveraging Mid-Level Semantic Boundary Cues for Automated Lymph Node Detection", "abstract": "Histograms of oriented gradients (HOG) are widely employed image descriptors in modern computer-aided diagnosis systems.
Built upon a set of local, robust statistics of low-level image gradients, HOG features are usually computed on raw intensity images. In this paper, we explore a learned image transformation scheme for producing higher-level inputs to HOG. Leveraging semantic object boundary cues, our methods compute data-driven image feature maps via a supervised boundary detector. Compared with the raw image map, boundary cues offer mid-level, more object-specific visual responses that can be suited for subsequent HOG encoding. We validate integrations of several image transformation maps with an application of computer-aided detection of lymph nodes on thoracoabdominal CT images. Our experiments demonstrate that semantic boundary cues based HOG descriptors complement and enrich the raw intensity alone. We observe an overall system with substantially improved results (\u223c78% versus 60% recall at 3 FP/volume for two target regions). The proposed system also moderately outperforms the state-of-the-art deep convolutional neural network (CNN) system in the mediastinum region, without relying on data augmentation and requiring significantly fewer training samples.", "corpus_id": 9938334}, "pos": {"sha": "30b339de5b12fe08418b67d532a8b43840355344", "title": "A New 2.5D Representation for Lymph Node Detection using Random Sets of Deep Convolutional Neural Network Observations", "abstract": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards \u223c100% sensitivity at the cost of high FP levels (\u223c40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "corpus_id": 4236914}, "neg": {"sha": "98db91b40a4817de77565c7eacda1f264b9a0425", "title": "The three modern faces of mercury.", "abstract": "The three modern \"faces\" of mercury are our perceptions of risk from the exposure of billions of people to methyl mercury in fish, mercury vapor from amalgam tooth fillings, and ethyl mercury in the form of thimerosal added as an antiseptic to widely used vaccines. In this article I review human exposure to and the toxicology of each of these three species of mercury. Mechanisms of action are discussed where possible.
Key gaps in our current knowledge are identified from the points of view both of risk assessment and of mechanisms of action.", "corpus_id": 8036205}}, {"query": {"sha": "64bd5878170bfab423bc3fc38d693202ef4ba6b6", "title": "Monocular 3D Human Pose Estimation in the Wild Using Improved CNN Supervision", "abstract": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "corpus_id": 17012729}, "pos": {"sha": "2c03df8b48bf3fa39054345bafabfeff15bfd11d", "title": "Deep Residual Learning for Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "corpus_id": 206594692}, "neg": {"sha": "5d417b045b1d81f67d6071467dd5be2a2b504c58", "title": "Segmentation and recognition of text written in 3D using Leap motion interface", "abstract": "In this paper, we present a word extraction and recognition methodology from online cursive handwritten text-lines recorded by Leap motion controller. The online text, drawn by 3D gesture in air, is distinct from usual online pen-based strokes. The 3D gestures are recorded in air, hence they produce often non-uniform text style and jitter-effect while writing.
Also, due to the constraint of writing in air, the pause of stroke-flow between words is missing. Instead all words and lines are connected by a continuous stroke. In this paper, we have used a simple but effective heuristic to segment words written in air. Here, we propose a segmentation methodology of continuous 3D strokes into text-lines and words. Separation of text lines is achieved by heuristically finding the large gap-information between end and start-positions of successive text lines. Word segmentation is characterized in our system as a two class problem. In the next phase, we have used Hidden Markov Model-based approach to recognize these segmented words. Our experimental validation with a large dataset consisting of 320 sentences reveals that the proposed heuristic based word segmentation algorithm performs with accuracy as high as 80.3% and an accuracy of 77.6% has been recorded by HMM-based word recognition when these segmented words are fed to HMM. The results show that the framework is efficient even with cluttered gestures.", "corpus_id": 4375876}}, {"query": {"sha": "654b45f0c97bd064f4ecd697e9fb1392f1862058", "title": "A new metaheuristic algorithm based on shark smell optimization", "abstract": "In this article, a new metaheuristic optimization algorithm is introduced. This algorithm is based on the ability of shark, as a superior hunter in the nature, for finding prey, which is taken from the smell sense of shark and its movement to the odor source. Various behaviors of shark within the search environment, that is, sea water, are mathematically modeled within the proposed optimization approach. The effectiveness of the suggested approach is compared with many other heuristic optimization methods based on standard benchmark functions. Also, to illustrate the efficiency of the proposed optimization method for solving real-world engineering problems, it is applied for the solution of load frequency control problem in electrical power systems. The obtained results confirm the validity of the proposed metaheuristic optimization algorithm. \u00a9 2014 Wiley Periodicals, Inc. Complexity 21: 97\u2013116, 2016", "corpus_id": 205711669}, "pos": {"sha": "56e02786bad4cf8781950b5df615729a417b31d7", "title": "Progress in supervised neural networks", "abstract": "Theoretical results concerning the capabilities and limitations of various neural network models are summarized, and some of their extensions are discussed. The network models considered are divided into two basic categories: static networks and dynamic networks. Unlike static networks, dynamic networks have memory. They fall into three groups: networks with feedforward dynamics, networks with output feedback, and networks with state feedback, which are emphasized in this work. Most of the networks discussed are trained using supervised learning.", "corpus_id": 3191120}, "neg": {"sha": "467b602a67cfd7c347fe7ce74c02b38c4bb1f332", "title": "Large Margin Local Metric Learning", "abstract": "Linear metric learning is a widely used methodology to learn a dissimilarity function from a set of similar/dissimilar example pairs. Using a single metric may be a too restrictive assumption when handling heterogeneous datasets. Recently, local metric learning methods have been introduced to overcome this limitation. However, they are subject to constraints preventing their usage in many applications. For example, they require knowledge of the class label of the training points.
In this paper, we present a novel local metric learning method, which overcomes some limitations of previous approaches. The method first computes a Gaussian Mixture Model from a low dimensional embedding of training data. Then it estimates a set of local metrics by solving a convex optimization problem; finally, a dissimilarity function is obtained by aggregating the local metrics. Our experiments show that the proposed method achieves state-of-the-art results on four datasets.", "corpus_id": 1826741}}, {"query": {"sha": "9757acb688e9db550b3706a10ab1f111d2a09c5d", "title": "Network analysis of supply chain systems: A systematic review and future research", "abstract": "Supply chains are continuously evolving and adapting systems driven by complex sociotechnical interfirm interactions. Traditional engineering and operations management modeling approaches have primarily focused on technical issues and are not well suited to effectively capture the many complex structural and behavioral aspects of supply chain systems (SCSs). There is growing recognition by the supply chain community of the significant benefits a network analytic lens can provide to understand, design, and manage SCSs. We systematically review and analyze the relevant literature and, drawing on a multidisciplinary theoretical foundation, develop an integrative framework. Our framework identifies three distinct, but interdependent themes that characterize the study of SCSs: SCS network structure (i.e., system architecture), SCS network dynamics (i.e., system behavior), and SCS network strategy (i.e., system policy and control). We elaborate on these themes, review key findings, identify the current limitations and knowledge gaps, and discuss the fundamental benefits derived from adopting an integrated SCSs perspective. We conclude with future research directions for network analysis in SCS design and management, in particular, and complex enterprise systems, in general. \u00a9 2012 Wiley Periodicals, Inc. Syst Eng", "corpus_id": 14608512}, "pos": {"sha": "6a20d9097fdfc21dc7b008eb47d5c3c09d125b01", "title": "Strategic purchasing, supply management, and firm performance", "abstract": "Purchasing has increasingly assumed a pivotal strategic role in supply-chain management. Yet, claims of the strategic role of purchasing have not been fully subjected to rigorous theoretical and empirical scrutiny. Extant research has remained largely anecdotal and theoretically under-developed. In this paper, we examine the links among strategic purchasing, supply management, and firm performance. We argue that strategic purchasing can engender sustainable competitive advantage by enabling firms to: (a) foster close working relationships with a limited number of suppliers; (b) promote open communication among supply-chain partners; and (c) develop long-term strategic relationship orientation to achieve mutual gains. Using structural equation modeling, we empirically test a number of hypothesized relationships based on a sample of 221 United States manufacturing firms. Our results provide robust support for the links between strategic purchasing, supply management, customer responsiveness, and financial performance of the buying firm. Implications for future research and managerial practice in supply-chain management are also offered. \u00a9 2004 Elsevier B.V. 
All rights reserved.", "corpus_id": 15456563}, "neg": {"sha": "26909eb3656cb7d0b6502685a0568da5d6531668", "title": "Optimal sizing of standalone hybrid wind/PV power systems using genetic algorithms", "abstract": "Proper design of standalone renewable energy power systems is a challenging task, as the coordination among renewable energy resources, generators, energy storages and loads is very complicated. The types and sizes of wind turbine generators (WTGs), the tilt angles and sizes of photovoltaic (PV) panels and the capacity of batteries must be optimized when sizing a standalone hybrid wind/PV power system, which may be defined as a mixed multiple-criteria integer programming problem. In our research, we investigated the genetic algorithm (GA) with elitist strategy for optimally sizing a standalone hybrid wind/PV power system. Our objective is selected as minimizing the total capital cost, subject to the constraint of the loss of power supply probability (LPSP). The LPSP of every individual of the GA's population is calculated by simulation of 8760 hours in a year. Studies have proved that the genetic algorithm converges very well and the methodology proposed is feasible for optimally sizing standalone hybrid wind/PV power systems", "corpus_id": 16767618}}, {"query": {"sha": "30d6f401d915d92b7202f545b261f2ad5e89c80a", "title": "An integrated system for autonomous robotics manipulation", "abstract": "We describe the software components of a robotics system designed to autonomously grasp objects and perform dexterous manipulation tasks with only high-level supervision. The system is centered on the tight integration of several core functionalities, including perception, planning and control, with the logical structuring of tasks driven by a Behavior Tree architecture. The advantage of the implementation is to reduce the execution time while integrating advanced algorithms for autonomous manipulation. We describe our approach to 3-D perception, real-time planning, force compliant motions, and audio processing. Performance results for object grasping and complex manipulation tasks of in-house tests and of an independent evaluation team are presented.", "corpus_id": 419179}, "pos": {"sha": "6fae2aa37aa2c221af1ffcf31040e6af7e59e977", "title": "The UMass Mobile Manipulator UMan: An Experimental Platform for Autonomous Mobile Manipulation", "abstract": "Research in Autonomous Mobile Manipulation critically depends on the availability of adequate experimental platforms. In this paper, we describe an ongoing effort at the University of Massachusetts Amherst to construct a hardware platform with redundant kinematic degrees of freedom, a comprehensive sensor suite, and significant end-effector capabilities for manipulation. In our research, we pursue an end-effector centric view of autonomous mobile manipulation. In support of this view, we are developing a comprehensive software suite to provide a high level of competency in robot control and perception. This software suite is based on a multi-objective, tasklevel motion control framework. We use this control framework to integrate a variety of motion capabilities, including taskbased force or position control of the end-effector, collision-free global motion for the entire mobile manipulator, and mapping and navigation for the mobile base. We also discuss our efforts in developing perception capabilities targeted to problems in autonomous mobile manipulation. 
Preliminary experiments on our UMass Mobile Manipulator (UMan) are presented.", "corpus_id": 791789}, "neg": {"sha": "14eda697d5c1a9866f56a24ef3812b4a41cff5b1", "title": "A novel paradigm for calculating Ramsey number via Artificial Bee Colony Algorithm", "abstract": "The Ramsey number is of vital importance in Ramsey's theorem. This paper proposes a novel methodology for constructing Ramsey graphs about R(3, 10), which uses Artificial Bee Colony optimization (ABC) to raise the lower bound of the Ramsey number R(3, 10). An r(3, 10)-graph satisfies two constraints: it contains neither complete graphs of order 3 nor independent sets of order 10. To handle these constraints, a special mathematical model is embedded in the paradigm to convert the problem into a discrete optimization whose smaller minimizers correspond to a bigger lower bound approximating inf R(3, 10). To demonstrate the potential of the proposed method, simulations are done to minimize the number of these two types of graphs. For the first time, four r(3, 9, 39) graphs with the best approximation for inf R(3, 10) are reported in simulations to support the current lower bound for R(3, 10). The experimental results show that the proposed ABC-driven paradigm for calculating Ramsey numbers is a successful method with the advantages of high precision and robustness.", "corpus_id": 8452768}}, {"query": {"sha": "a8e2809dc015a8db4a3ee442a27204d939fa55ba", "title": "Richer Convolutional Features for Edge Detection", "abstract": "In this paper, we propose an accurate edge detector using richer convolutional features (RCF). Since objects in natural images possess various scales and aspect ratios, learning rich hierarchical representations is critical for edge detection. CNNs have proved to be effective for this task. In addition, the convolutional features in CNNs gradually become coarser with the increase of the receptive fields. According to these observations, we attempt to adopt richer convolutional features in such a challenging vision task. The proposed network fully exploits multiscale and multilevel information of objects to perform the image-to-image prediction by combining all the meaningful convolutional features in a holistic manner. Using the VGG16 network, we achieve state-of-the-art performance on several available datasets. When evaluating on the well-known BSDS500 benchmark, we achieve ODS F-measure of 0.811 while retaining a fast speed (8 FPS). Besides, our fast version of RCF achieves ODS F-measure of 0.806 with 30 FPS.", "corpus_id": 12452972}, "pos": {"sha": "014ad0ec0fac206d5a9f02afeda047e177bf6743", "title": "Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images", "abstract": "We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies superpixels into the 40 dominant object categories in NYUD2. 
We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.", "corpus_id": 12061055}, "neg": {"sha": "55f2adf8b783e03d6f01824c5167ad344f23abe3", "title": "Evidence for prescribing exercise as therapy in chronic disease.", "abstract": "Considerable knowledge has accumulated in recent decades concerning the significance of physical activity in the treatment of a number of diseases, including diseases that do not primarily manifest as disorders of the locomotive apparatus. In this review we present the evidence for prescribing exercise therapy in the treatment of metabolic syndrome-related disorders (insulin resistance, type 2 diabetes, dyslipidemia, hypertension, obesity), heart and pulmonary diseases (chronic obstructive pulmonary disease, coronary heart disease, chronic heart failure, intermittent claudication), muscle, bone and joint diseases (osteoarthritis, rheumatoid arthritis, osteoporosis, fibromyalgia, chronic fatigue syndrome) and cancer, depression, asthma and type 1 diabetes. For each disease, we review the effect of exercise therapy on disease pathogenesis, on symptoms specific to the diagnosis, on physical fitness or strength and on quality of life. The possible mechanisms of action are briefly examined and the principles for prescribing exercise therapy are discussed, focusing on the type and amount of exercise and possible contraindications.", "corpus_id": 25648755}}, {"query": {"sha": "6fd31b829eb3df97aefaced8100157df193b3597", "title": "Towards a Taxonomy of Microservices Architectures", "abstract": "The microservices architectural style is gaining more and more momentum for the development of applications as suites of small, autonomous, and conversational services, which are then easy to understand, deploy and scale. However, the proliferation of approaches leveraging microservices calls for a systematic way of analyzing and assessing them as a completely new ecosystem: the first cloud-native architectural style. This paper defines a preliminary analysis framework in the form of a taxonomy of concepts, encompassing the whole microservices lifecycle, as well as organizational aspects. This framework is necessary to enable effective exploration, understanding, assessing, comparing, and selecting microservice-based models, languages, techniques, platforms, and tools. Then, we analyze state of the art approaches related to microservices using this taxonomy to provide a holistic perspective of available solutions.", "corpus_id": 27392027}, "pos": {"sha": "4400bfb2cac16bb2de0312c23337ababb1cb0d71", "title": "Security-as-a-Service for Microservices-Based Cloud Applications", "abstract": "Microservice architecture allows different parts of an application to be developed, deployed and scaled independently, therefore becoming a trend for developing cloud applications. However, it comes with challenging security issues. First, the network complexity introduced by the large number of microservices greatly increases the difficulty in monitoring the security of the entire application. Second, microservices are often designed to completely trust each other, therefore compromise of a single microservice may bring down the entire application. 
The problems are only exacerbated by the cloud, since applications no longer have complete control over their networks. In this paper, we propose a design for security-as-a-service for microservices-based cloud applications. By adding a new API primitive FlowTap for the network hypervisor, we build a flexible monitoring and policy enforcement infrastructure for network traffic to secure cloud applications. We demonstrate the effectiveness of our solution by deploying the Bro network monitor using FlowTap. Results show that our solution is flexible enough to support various kinds of monitoring scenarios and policies and it incurs minimal overhead (~6%) for real world usage. As a result, cloud applications can leverage our solution to deploy network security monitors to flexibly detect and block threats both external and internal to their network.", "corpus_id": 17921098}, "neg": {"sha": "57e874efb65c9c680dc5a04594fe39bfbd010ac1", "title": "BCPL: a tool for compiler writing and system programming", "abstract": "The language BCPL (Basic CPL) was originally developed as a compiler writing tool and as its name suggests it is closely related to CPL (Combined Programming Language) which was jointly developed at Cambridge and London Universities. BCPL adopted much of the syntactic richness of CPL and strived for the same high standard of linguistic elegance; however, in order to achieve the efficiency necessary for system programming its scale and complexity is far less than that of CPL. The most significant simplification is that BCPL has only one data type---the binary bit pattern---and this feature alone gives BCPL a characteristic flavour which is very different from that of CPL and most other current programming languages.", "corpus_id": 10508815}}, {"query": {"sha": "e3b4b2ce41911a02c005b9455ded1b192abbda90", "title": "Exploring motor system contributions to the perception of social information: Evidence from EEG activity in the mu/alpha frequency range.", "abstract": "Putative contributions of a human mirror neuron system (hMNS) to the perception of social information have been assessed by measuring the suppression of EEG oscillations in the mu/alpha (8-12 Hz), beta (15-25 Hz) and low-gamma (25-35 Hz) ranges while participants processed social information revealed by point-light displays of human motion. Identical dynamic displays were presented and participants were instructed to distinguish the intention, the emotion, or the gender of a moving image of a person, while they performed an adapted odd-ball task. Relative to a baseline presenting a nonbiological but meaningful motion display, all three biological motion conditions reduced the EEG amplitude in the mu/alpha and beta ranges, but not in the low-gamma range. Suppression was larger in the intention than in the emotion and gender conditions, with no difference between the latter two. Moreover, the suppression in the intention condition was negatively correlated with an accepted measure of empathy (EQ), revealing that participants high in empathy scores manifested less suppression. 
For intention and emotion, the suppression was larger at occipital than at central sites, suggesting that factors other than the motor system were in play while processing social information embedded in the motion of point-light displays.", "corpus_id": 16114148}, "pos": {"sha": "44a2c57436f4d427307bcc5bbf48458b5f51d563", "title": "EEG evidence for mirror neuron dysfunction in autism spectrum disorders.", "abstract": "Autism spectrum disorders (ASD) are largely characterized by deficits in imitation, pragmatic language, theory of mind, and empathy. Previous research has suggested that a dysfunctional mirror neuron system may explain the pathology observed in ASD. Because EEG oscillations in the mu frequency (8-13 Hz) over sensorimotor cortex are thought to reflect mirror neuron activity, one method for testing the integrity of this system is to measure mu responsiveness to actual and observed movement. It has been established that mu power is reduced (mu suppression) in typically developing individuals both when they perform actions and when they observe others performing actions, reflecting an observation/execution system which may play a critical role in the ability to understand and imitate others' behaviors. This study investigated whether individuals with ASD show a dysfunction in this system, given their behavioral impairments in understanding and responding appropriately to others' behaviors. Mu wave suppression was measured in ten high-functioning individuals with ASD and ten age- and gender-matched control subjects while watching videos of (1) a moving hand, (2) a bouncing ball, and (3) visual noise, or (4) moving their own hand. Control subjects showed significant mu suppression to both self and observed hand movement. The ASD group showed significant mu suppression to self-performed hand movements but not to observed hand movements. These results support the hypothesis of a dysfunctional mirror neuron system in high-functioning individuals with ASD.", "corpus_id": 2215535}, "neg": {"sha": "382bf4617c7732fbd9aa2b8cee442216f204d4c6", "title": "Predicting Salient Updates for Disaster Summarization", "abstract": "During crises such as natural disasters or other human tragedies, information needs of both civilians and responders often require urgent, specialized treatment. Monitoring and summarizing a text stream during such an event remains a difficult problem. We present a system for update summarization which predicts the salience of sentences with respect to an event and then uses these predictions to directly bias a clustering algorithm for sentence selection, increasing the quality of the updates. We use novel, disaster-specific features for salience prediction, including geo-locations and language models representing the language of disaster. Our evaluation on a standard set of retrospective events using ROUGE shows that salience prediction provides a significant improvement over other approaches.", "corpus_id": 17490732}}, {"query": {"sha": "3d2b79b82ea9a09e3d744e9637ea41238d728fc9", "title": "Hierarchical CNN for traffic sign recognition", "abstract": "The Convolutional Neural Network (CNN) is a breakthrough technique in object classification and pattern recognition. It has enabled computers to achieve performance superior to humans in specialized image recognition tasks. Prior art CNNs learn object features by stacking multiple convolutional/non-linear layers in sequence on top of a classifier. 
In this work, we propose a Hierarchical CNN (HCNN) which is inspired by a coarse-to-fine human learning methodology. For a given dataset, we introduce a CNN-oriented clustering algorithm to separate classes into K subsets, which are referred to as families. Then, the HCNN algorithm trains K+1 classification CNNs: one CNN for family classification and K dedicated CNNs corresponding to each family for member classification. We evaluate this HCNN approach on the German Traffic Sign Recognition Benchmark (GTSRB), and achieve 99.67% correct detection rate (CDR), which is superior to the best reported results (99.46%) achieved by a single network.", "corpus_id": 18413036}, "pos": {"sha": "96fda2ce5803979ba0295413b2750e9733619dd5", "title": "Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition", "abstract": "We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al.", "corpus_id": 7377135}, "neg": {"sha": "5a26ec6568152731ce1667a426307ebccff5a50e", "title": "On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines", "abstract": "In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin to multiclass problems. Using this notion we cast multiclass categorization problems as a constrained optimization problem with a quadratic objective function. Unlike most of previous approaches which typically decompose a multiclass problem into multiple independent binary classification tasks, our notion of margin yields a direct method for training multiclass predictors. By using the dual of the optimization problem we are able to incorporate kernels with a compact set of constraints and decompose the dual problem into multiple optimization problems of reduced size. We describe an efficient fixed-point algorithm for solving the reduced optimization problems and prove its convergence. We then discuss technical details that yield significant running time improvements for large datasets. Finally, we describe various experiments with our approach comparing it to previously studied kernel-based methods. Our experiments indicate that for multiclass problems we attain state-of-the-art accuracy.", "corpus_id": 10151608}}, {"query": {"sha": "703e8e792d63ad9b94b76f279a36b5c845ba7c40", "title": "Real-time 3D human objects rendering based on multiple camera details", "abstract": "3D model construction techniques using RGB-D information have been gaining a great attention of the researchers around the world in recent decades. The RGB-D sensor, Microsoft Kinect is widely used in many research fields, such as in computer vision, computer graphics, and human computer interaction, due to its capacity of providing color and depth information. 
This paper presents our research findings on calibrating information from several Kinects in order to construct a 3D model of a human object and to render the texture captured from the RGB camera. We used multiple Kinect sensors, which are interconnected in a network. High bit rate streams captured at each Kinect are first sent to a centralized PC for processing. This can even be extended to a remote PC on the Internet. Main contributions of this work include calibration of the multiple Kinects, properly aligning point clouds generated from multiple Kinects, and generation of the 3D shape of the human objects. Experimental results demonstrate that the proposed method provides a better 3D model of the human object being captured.", "corpus_id": 11576803}, "pos": {"sha": "619911778e0c6e8af2e56bab893e4a3613509317", "title": "Scanning 3D Full Human Bodies Using Kinects", "abstract": "Depth cameras, such as the Microsoft Kinect, are much cheaper than conventional 3D scanning devices, and thus can be acquired by everyday users easily. However, the depth data captured by Kinect over a certain distance is of extremely low quality. In this paper, we present a novel scanning system for capturing 3D full human body models by using multiple Kinects. To avoid the interference phenomena, we use two Kinects to capture the upper part and lower part of a human body respectively without overlapping region. A third Kinect is used to capture the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts of different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which can solve the loop closure problem efficiently. Misalignment caused by complex occlusion can also be handled reasonably by our global alignment algorithm. The experimental results have shown the efficiency and applicability of our system. Our system obtains impressive results in a few minutes with low-price devices, thus is practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.", "corpus_id": 5961102}, "neg": {"sha": "5fa66e8c4047fc55695f1321ed57d2c23a8bd861", "title": "Joint bilateral upsampling", "abstract": "Image analysis and enhancement tasks such as tone mapping, colorization, stereo depth, and photomontage, often require computing a solution (e.g., for exposure, chromaticity, disparity, labels) over the pixel grid. Computational and memory costs often require that a smaller solution be run over a downsampled image. Although general purpose upsampling methods can be used to interpolate the low resolution solution to the full resolution, these methods generally assume a smoothness prior for the interpolation.\n We demonstrate that in cases, such as those above, the available high resolution input image may be leveraged as a prior in the context of a joint bilateral upsampling procedure to produce a better high resolution solution. 
We show results for each of the applications above and compare them to traditional upsampling methods.", "corpus_id": 7241297}}, {"query": {"sha": "bd580bfcf6558a1450d3804e06d009e3e6f6b0d0", "title": "The application of internet of things in healthcare: a systematic literature review and classification", "abstract": "The Internet of Things (IoT) is an ecosystem that integrates physical objects, software and hardware to interact with each other. Aging of the population, shortage of healthcare resources, and rising medical costs make it necessary to tailor IoT-based technologies to address these challenges in healthcare. This systematic literature review has been conducted to determine the main application area of IoT in healthcare, components of IoT architecture in healthcare, most important technologies in IoT, characteristics of cloud-based architecture, security and interoperability issues in IoT architecture and effects, and challenges of IoT in healthcare. Sixty relevant papers, published between 2000 and 2016, were reviewed and analyzed. This analysis revealed that home healthcare service was one of the main application areas of IoT in healthcare. Cloud-based architecture, by providing great flexibility and scalability, has been deployed in most of the reviewed studies. Communication technologies including wireless fidelity (Wi-Fi), Bluetooth, radio-frequency identification (RFID), ZigBee, and Low-Power Wireless Personal Area Networks (LoWPAN) were frequently used in different IoT models. The studies regarding the security and interoperability issues in IoT architecture in health are still few in number. With respect to the most important effects of IoT in healthcare, these included the ability to exchange information, shorter hospital stays, and lower healthcare costs. The main challenges of IoT in healthcare were security and privacy issues.", "corpus_id": 25753765}, "pos": {"sha": "8985000860dbb88a80736cac8efe30516e69ee3f", "title": "Human Activity Recognition Using Recurrent Neural Networks", "abstract": "Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large number of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short Term Memory (LSTM) Recurrent Neural Network was applied to three real world smart home datasets. The results of these experiments show that the proposed approach outperforms the existing ones in terms of accuracy and performance.", "corpus_id": 4933123}, "neg": {"sha": "c93ad164dc13caf7d90adaa373f6ce0798994899", "title": "Optimum Design of an IE4 Line-Start Synchronous Reluctance Motor Considering Manufacturing Process Loss Effect", "abstract": "As a kind of direct-on-line motor, super premium efficiency (IE4) line-start synchronous reluctance motors (LS-SynRMs) were developed recently and are now used in many applications, including fans, pumps, and compressors. This paper presents an optimum design and comparative study of LS-SynRMs with additional losses and impact during the manufacturing process (electrical steel cutting/punching damage as well as squirrel-cage die-casting with bubble effects). 
The results indicate that the \u201cmanufacturing process loss\u201d effect should be considered and compensated for in the LS-SynRM design in order to achieve IE4-class efficiency and ensure synchronization. Furthermore, the LS-SynRM rotor with multilayer flux barriers and rotor slots is investigated in detail. The influences of optimum design geometrical parameters (flux barrier thickness, segment thickness, length of rotor slots, etc.) on the performances of the basic model and optimum design model are evaluated with finite-element analysis (FEA) results. For more accurate results, the effects of saturation, saliency ratio, inductance difference, and the change in the B-H/B-P curve in damaged motor core edges are considered. Meanwhile, in the squirrel cage, the porosity rate distributions are considered. The copper loss, iron loss, starting torque, power factor, efficiency, and synchronization ability are investigated. The experimental results verify the accuracy of the process presented in this paper.", "corpus_id": 21468278}}, {"query": {"sha": "9d9d33843d018a77bad7f40da8f27671d29cd776", "title": "HIN2Vec: Explore Meta-paths in Heterogeneous Information Networks for Representation Learning", "abstract": "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in the form of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of micro-$f_1$ in multi-label node classification and 5% to 70.8% of MAP in link prediction.", "corpus_id": 3958144}, "pos": {"sha": "c18c30b9b1090e752031d23d219c1007b9954229", "title": "Large-Scale Embedding Learning in Heterogeneous Event Data", "abstract": "Heterogeneous events, which are defined as events connecting strongly-typed objects, are ubiquitous in the real world. We propose a HyperEdge-Based Embedding (Hebe) framework for heterogeneous event data, where a hyperedge represents the interaction among a set of involved objects in an event. The Hebe framework models the proximity among objects in an event by predicting a target object given the other participating objects in the event (hyperedge). Since each hyperedge encapsulates more information on a given event, Hebe is robust to data sparseness. In addition, Hebe is scalable when the data size spirals. 
Extensive experiments on large-scale real-world datasets demonstrate the efficacy and robustness of Hebe.", "corpus_id": 8833697}, "neg": {"sha": "14ef5a2cab928865ca02c6366ea7cb75c35fb698", "title": "Spectral clustering for multi-type relational data", "abstract": "Clustering on multi-type relational data has attracted more and more attention in recent years due to its high impact on various important applications, such as Web mining, e-commerce and bioinformatics. However, the research on general multi-type relational data clustering is still limited and preliminary. The contribution of the paper is three-fold. First, we propose a general model, the collective factorization on related matrices, for multi-type relational data clustering. The model is applicable to relational data with various structures. Second, under this model, we derive a novel algorithm, the spectral relational clustering, to cluster multi-type interrelated data objects simultaneously. The algorithm iteratively embeds each type of data objects into low dimensional spaces and benefits from the interactions among the hidden structures of different types of data objects. Extensive experiments demonstrate the promise and effectiveness of the proposed algorithm. Third, we show that the existing spectral clustering algorithms can be considered as special cases of the proposed model and algorithm. This demonstrates the good theoretical generality of the proposed model and algorithm.", "corpus_id": 809891}}, {"query": {"sha": "5ca41f72ba08f32e66493915dd0bbc1765272c53", "title": "Continuous word representation using neural networks for proper name retrieval from diachronic documents", "abstract": "Developing high-quality transcription systems for very large vocabulary corpora is a challenging task. Proper names are usually key to understanding the information contained in a document. One approach for increasing the vocabulary coverage of a speech transcription system is to automatically retrieve new proper names from contemporary diachronic text documents. In recent years, neural networks have been successfully applied to a variety of speech recognition tasks. In this paper, we investigate whether neural networks can enhance word representation in vector space for the vocabulary extension of a speech recognition system. This is achieved by using the high-quality word vector representation of words from large amounts of unstructured text data proposed by Mikolov. This model makes it possible to take into account lexical and semantic word relationships. The proposed methodology is evaluated in the context of broadcast news transcription. The obtained recall and ASR proper name error rate are compared to those obtained using a cosine-based vector space methodology. Experimental results show a good ability of the proposed model to capture semantic and lexical information.", "corpus_id": 12770895}, "pos": {"sha": "6772164c3dd4ff6e71ba58c5c4c22fa092b9fe55", "title": "Recent advances in deep learning for speech research at Microsoft", "abstract": "Deep learning is becoming a mainstream technology for speech recognition at industrial scale. In this paper, we provide an overview of the work by Microsoft speech researchers since 2009 in this area, focusing on more recent advances which shed light to the basic capabilities and limitations of the current deep learning technology. We organize this overview along the feature-domain and model-domain dimensions according to the conventional approach to analyzing speech systems. 
Selected experimental results, including speech recognition and related applications such as spoken dialogue and language modeling, are presented to demonstrate and analyze the strengths and weaknesses of the techniques described in the paper. Potential improvement of these techniques and future research directions are discussed.", "corpus_id": 13412186}, "neg": {"sha": "14e6d132dcbfc2e2bdc9f146becb11b1b394245c", "title": "Evaluation of the Efficacy of Aspirin and Low Molecular Weight Heparin in Patients with Unexplained Recurrent Spontaneous Abortions", "abstract": "Background: The roles of inflammatory cytokines and local placental thrombosis in patients with unexplained recurrent spontaneous abortion (URSA) have been shown. Since low molecular weight heparin (LMWH) and acetyl salicylic acid (ASA) have both anti-inflammatory and anti-coagulant effects, we evaluated their efficacy in patients with URSA. Methods: One hundred patients with a history of URSA referred to the Obstetrics Clinic affiliated to Shiraz University of Medical Sciences between 2004 and 2009 were randomly divided into two groups. Fifty patients in the thromboprophylaxis group were treated with LMWH (5000 unit; twice a day), ASA (80 mg daily) and calcium supplement (500 mg daily) after detection of fetal heart beat. Another 50 patients received no thromboprophylaxis. Live birth rate, obstetrical complications, prenatal and neonatal complications and hemorrhagic side effects were recorded. Results: Both groups were matched for mean age and mean number of previous abortions. The thromboprophylaxis group had a higher rate of live birth (83.7%) in comparison to the control group (54%). No maternal or neonatal side effects were seen. There were no differences in obstetrical complications, prenatal and neonatal complications between the two groups. Conclusion: Thromboprophylaxis with ASA and LMWH seems to be safe and effective in patients with URSA.", "corpus_id": 8899123}}, {"query": {"sha": "6a97adbfaeecd5c1eeb3ae9c76a3842d4858cc06", "title": "Online Learning and Stochastic Approximations", "abstract": "The convergence of online learning algorithms is analyzed using the tools of the stochastic approximation theory, and proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use today, as illustrated by several examples. The stochastic approximation theory then provides general results describing the convergence of all these learning algorithms at once.", "corpus_id": 2101184}, "pos": {"sha": "a718b85520bea702533ca9a5954c33576fd162b0", "title": "SOME METHODS FOR CLASSIFICATION AND ANALYSIS OF MULTIVARIATE OBSERVATIONS", "abstract": "The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if p is the probability mass function for the population, S = {S_1, S_2, \u2026, S_k} is a partition of E^N, and u_i, i = 1, 2, \u2026, k, is the conditional mean of p over the set S_i, then W^2(S) = \u2211_{i=1}^{k} \u222b_{S_i} |z \u2212 u_i|^2 dp(z) tends to be low for the partitions S generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. 
Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special", "corpus_id": 6278891}, "neg": {"sha": "a30bf138b3af8bc3bb5ef601586a333c1c73aeb0", "title": "Color Reduction Using K-Means Clustering", "abstract": "It may not be obvious, but effective color reduction is needed in many graphical applications. One well known example may be the GIF format used widely on the Internet; this graphical format reduces the color space by defining a palette with a size of 256 colors [5]. Other examples may be video codecs, computer games and various handheld devices like mobile phones. Since the problem of finding an optimal palette is computationally intensive (it is not possible to evaluate all possible combinations), many different approaches were taken to solve it, such as using neural nets, genetic algorithms, fuzzy logic, etc. On the other hand, for many applications it is much more appropriate to use simpler algorithms, like classic K-Means clustering. The goal of this paper is to propose an easy-to-implement algorithm for color reduction with sufficient visual quality. The algorithm itself is described in chapters 3 and 4 and the results and their comparison with output from standard programs (ACDSee 4.0 [7], Adobe Photoshop 6.0.1 [8]) are summarized in chapter 5.", "corpus_id": 14817778}}, {"query": {"sha": "41013fdee4ecf6e7ca5d407e0afc4c2195889c80", "title": "Beyond data: from user information to business value through personalized recommendations and consumer science", "abstract": "Since the Netflix $1 million Prize, announced in 2006, Netflix has been known for having personalization at the core of our product. Our current product offering is nowadays focused around instant video streaming, and our data is now many orders of magnitude larger. Not only do we have many more users in many more countries, but we also receive many more streams of data. Besides the ratings, we now also use information such as what our members play, browse, or search.\n In this paper I will discuss the different approaches we follow to deal with these large streams of user data in order to extract information for personalizing our service. 
I will describe some of the machine learning models used, and their application in the service. I will also describe our data-driven approach to innovation that combines rapid offline explorations as well as online A/B testing. This approach enables us to convert user information into real and measurable business value.", "corpus_id": 569633}, "pos": {"sha": "6aa1c88b810825ee80b8ed4c27d6577429b5d3b2", "title": "Evaluating collaborative filtering recommender systems", "abstract": "Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.", "corpus_id": 207731647}, "neg": {"sha": "b2e68ca577636aaa6f6241c3af7478a3ae1389a7", "title": "Transformational leadership in nursing: a concept analysis.", "abstract": "AIM\nTo analyse the concept of transformational leadership in the nursing context.\n\n\nBACKGROUND\nTasked with improving patient outcomes while decreasing the cost of care provision, nurses need strategies for implementing reform in health care and one promising strategy is transformational leadership. Exploration and greater understanding of transformational leadership and the potential it holds is integral to performance improvement and patient safety.\n\n\nDESIGN\nConcept analysis using Walker and Avant's (2005) concept analysis method.\n\n\nDATA SOURCES\nPubMed, CINAHL and PsychINFO.\n\n\nMETHODS\nThis report draws on extant literature on transformational leadership, management, and nursing to effectively analyze the concept of transformational leadership in the nursing context.\n\n\nIMPLICATIONS FOR NURSING\nThis report proposes a new operational definition for transformational leadership and identifies model cases and defining attributes that are specific to the nursing context. The influence of transformational leadership on organizational culture and patient outcomes is evident. Of particular interest is the finding that transformational leadership can be defined as a set of teachable competencies. However, the mechanism by which transformational leadership influences patient outcomes remains unclear.\n\n\nCONCLUSION\nTransformational leadership in nursing has been associated with high-performing teams and improved patient care, but rarely has it been considered as a set of competencies that can be taught. 
Also, further research is warranted to strengthen empirical referents; this can be done by improving the operational definition, reducing ambiguity in key constructs and exploring the specific mechanisms by which transformational leadership influences healthcare outcomes to validate subscale measures.", "corpus_id": 4645201}}, {"query": {"sha": "2dec6a802cbac1f640980b5106d88ae72c45ece4", "title": "Generating Natural Language Inference Chains", "abstract": "The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested on whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that, given an input sentence, invents additional information to generate a new sentence.", "corpus_id": 6594581}, "pos": {"sha": "a2c2999b134ba376c5ba3b610900a8d07722ccb3", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "abstract": null, "corpus_id": 11080756}, "neg": {"sha": "e87e162fe085ac7b88a360a55dd6a28feaf898fa", "title": "PERFORMANCE MANAGEMENT SYSTEMS: A CONCEPTUAL MODEL AND AN ANALYSIS OF THE DEVELOPMENT AND INTENSIFICATION OF \u2018NEW PUBLIC MANAGEMENT\u2019 IN THE UK", "abstract": "This paper builds on the view that too much attention in the management, management control and management accounting literatures has been given to ex post performance measurement as distinct from ex ante performance management. The paper builds on the conceptual models of Performance Management Systems (PMS) developed by Otley (1999) and Ferreira and Otley (2005). Three key developments are introduced in this conceptualisation in relation to focus, context and culture leading to a \u2018middle range\u2019 (Laughlin, 1995, 2004; Broadbent and Laughlin, 1997) conceptual model of alternative PMS lying on a continuum from \u2018transactional\u2019 at one end to \u2018relational\u2019 at the other, built respectively on instrumental and communicative rationalities. The conceptual model is then used to provide new insights into the development of the new public management (NPM) in the UK. 
This analysis demonstrates how the moves from the 1982 Financial Management Initiative, through the 1988 Next Steps, to the recent developments in Public Service Agreements and targets from 1997 constitute a progressive shift from relational PMS to increasingly transactional forms, intensifying the nature, significance and power of NPM in controlling public services.", "corpus_id": 352654}}, {"query": {"sha": "888743eb13cd1abff002a11ebe4a7bc4b373dca4", "title": "A Primer on Autonomous Aerial Vehicle Design", "abstract": "There is a large amount of research currently being done on autonomous micro-aerial vehicles (MAV), such as quadrotor helicopters or quadcopters. The ability to create a working autonomous MAV depends mainly on integrating a simultaneous localization and mapping (SLAM) solution with the rest of the system. This paper provides an introduction to creating an autonomous MAV for enclosed environments, aimed at students and professionals alike. The standard autonomous system and MAV automation are discussed, while we focus on the core concepts of SLAM systems and trajectory planning algorithms. The advantages and disadvantages of using remote processing are evaluated, and recommendations are made regarding the viability of on-board processing. Recommendations are made regarding best practices to serve as a guideline for aspirant MAV designers.", "corpus_id": 1749906}, "pos": {"sha": "8882dc6692fd7437971116ecee3f67e166ab7c6f", "title": "Vision-Based SLAM: Stereo and Monocular Approaches", "abstract": "Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires concurrently solving the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp.", "corpus_id": 2535086}, "neg": {"sha": "b2dac341df54e5f744d5b6562d725d254aae8e80", "title": "OpenHAR: A Matlab Toolbox for Easy Access to Publicly Open Human Activity Data Sets", "abstract": "This study introduces OpenHAR, a free Matlab toolbox to combine and unify publicly open data sets. It provides easy access to the accelerometer signals of ten publicly open human activity data sets. Data sets are easy to access as OpenHAR provides all the data sets in the same format. In addition, units, measurement ranges and labels are unified, as well as body position IDs. Moreover, data sets with different sampling rates are unified using downsampling. What is more, data sets have been visually inspected to find visible errors, such as a sensor in the wrong orientation. OpenHAR improves re-usability of data sets by fixing these errors. Altogether OpenHAR contains over 65 million labeled data samples. This is equivalent to over 280 hours of data from 3D accelerometers. 
This includes data from 211 study subjects performing 17 daily human activities and wearing sensors in 14 different body positions.", "corpus_id": 53219468}}, {"query": {"sha": "172d28172a1cc9379cb8a1b07ab94156def00dc3", "title": "Simultaneous Optical Flow and Intensity Estimation from an Event Camera", "abstract": "Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms may not at all be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur.", "corpus_id": 10280488}, "pos": {"sha": "2c008d50edc3cc80bcec6789b58af82fec5cfc9c", "title": "Event-based, 6-DOF pose tracking for high-speed maneuvers", "abstract": "In the last few years, we have witnessed impressive demonstrations of aggressive flights and acrobatics using quadrotors. However, those robots are actually blind. They do not see by themselves, but through the \u201ceyes\u201d of an external motion capture system. Flight maneuvers using onboard sensors are still slow compared to those attainable with motion capture systems. At the current state, the agility of a robot is limited by the latency of its perception pipeline. To obtain more agile robots, we need to use faster sensors. In this paper, we present the first onboard perception system for 6-DOF localization during high-speed maneuvers using a Dynamic Vision Sensor (DVS). Unlike a standard CMOS camera, a DVS does not wastefully send full image frames at a fixed frame rate. Conversely, similar to the human eye, it only transmits pixel-level brightness changes at the time they occur with microsecond resolution, thus, offering the possibility to create a perception pipeline whose latency is negligible compared to the dynamics of the robot. We exploit these characteristics to estimate the pose of a quadrotor with respect to a known pattern during high-speed maneuvers, such as flips, with rotational speeds up to 1,200 \u00b0/s. Additionally, we provide a versatile method to capture ground-truth data using a DVS.", "corpus_id": 11454240}, "neg": {"sha": "6f0ac0750c12863a8ac168889e9b692d73def168", "title": "The prognosis of common mental disorders in adolescents: a 14-year prospective cohort study", "abstract": "BACKGROUND\nMost adults with common mental disorders report their first symptoms before 24 years of age. 
Although adolescent anxiety and depression are frequent, little clarity exists about which syndromes persist into adulthood or resolve before then. In this report, we aim to describe the patterns and predictors of persistence into adulthood.\n\n\nMETHODS\nWe recruited a stratified, random sample of 1943 adolescents from 44 secondary schools across the state of Victoria, Australia. Between August, 1992, and January, 2008, we assessed common mental disorder at five points in adolescence and three in young adulthood, commencing at a mean age of 15.5 years and ending at a mean age of 29.1 years. Adolescent disorders were defined on the Revised Clinical Interview Schedule (CIS-R) at five adolescent measurement points, with a primary cutoff score of 12 or higher representing a level at which a family doctor would be concerned. Secondary analyses addressed more severe disorders at a cutoff of 18 or higher.\n\n\nFINDINGS\n236 of 821 (29%; 95% CI 25-32) male participants and 498 of 929 (54%; 51-57) female participants reported high symptoms on the CIS-R (\u226512) at least once during adolescence. Almost 60% (434/734) went on to report a further episode as a young adult. However, for adolescents with one episode of less than 6 months duration, just over half had no further common mental health disorder as a young adult. Longer duration of mental health disorders in adolescence was the strongest predictor of clear-cut young adult disorder (odds ratio [OR] for persistent young adult disorder vs none 3.16, 95% CI 1.86-5.37). Girls (2.12, 1.29-3.48) and adolescents with a background of parental separation or divorce (1.62, 1.03-2.53) also had a greater likelihood of having ongoing disorder into young adulthood than did those without such a background. Rates of adolescent onset disorder dropped sharply by the late 20s (0.57, 0.45-0.73), suggesting a further resolution for many patients whose symptoms had persisted into the early 20s.\n\n\nINTERPRETATION\nEpisodes of adolescent mental disorder often precede mental disorders in young adults. However, many such disorders, especially when brief in duration, are limited to the teenage years, with further symptom remission common in the late 20s. The resolution of many adolescent disorders gives reason for optimism that interventions that shorten the duration of episodes could prevent much morbidity later in life.\n\n\nFUNDING\nAustralia's National Health and Medical Research Council.", "corpus_id": 11156121}}, {"query": {"sha": "50e8fdb3b4c0c1c556d98a9edf43033a7a351c01", "title": "Calibration of a network of Kinect sensors for robotic inspection over a large workspace", "abstract": "This paper presents an approach for calibrating a network of Kinect devices used to guide robotic arms with rapidly acquired 3D models. The method takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy within the range of the depth measurements accuracy provided by this technology. The internal calibration of the sensor in between the color and depth measurement is also presented. The resulting system is developed to inspect large objects, such as vehicles, positioned within an enlarged field of view created by the network of RGB-D sensors.", "corpus_id": 16711576}, "pos": {"sha": "7fc62b438ca48203c7f48e216dae8633db74d2e8", "title": "A Flexible New Technique for Camera Calibration", "abstract": "We propose a flexible new technique to easily calibrate a camera. 
It is well suited for use without specialized knowledge of 3D geometry or computer vision. The technique only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real world use.", "corpus_id": 1150626}, "neg": {"sha": "8e03721ce7f138ba56747a467d65f50507561922", "title": "Use of second generation H1 antihistamines in special situations.", "abstract": "Antihistamine drugs are one of the therapeutic classes most used at world level, at all ages and in multiple situations. Although in general they have a good safety profile, only the more recent drugs (second generation antihistamines) have been studied specifically with regard to the more important safety aspects. Given the variety of antihistamine drugs, they cannot all be considered equivalent in application to various special clinical situations, so that the documented clinical experience must be assessed in each case or, in the absence of such, the particular pharmacological characteristics of each molecule for the purpose of recommendation in these special situations. In general, there are few clinical studies published for groups of patients with kidney or liver failure, with concomitant multiple pathologies (such as cardiac pathology), in extremes of age (paediatrics or geriatrics) and in natural stages such as pregnancy or lactation, but these are normal situations and it is more and more frequent (among the elderly) for antihistamine drugs to be recommended. This review sets out the more relevant details compiled on the use of antihistamines in these special situations.", "corpus_id": 6952725}}, {"query": {"sha": "0e2490a38cffa9eaf90f122155e94ddfad6d0d93", "title": "Dynamic Obstacle Avoidance with PEARL : PrEference Appraisal Reinforcement Learning", "abstract": "Manual derivation of optimal robot motions for task completion is difficult, especially when a robot is required to balance its actions between opposing preferences. One solution has been to automatically learn near optimal motions with Reinforcement Learning (RL). This has been successful for several tasks including swing-free UAV flight, table tennis, and autonomous driving. However, high-dimensional problems remain a challenge. We address this dimensionality constraint with PrEference Appraisal Reinforcement Learning (PEARL), which solves tasks with opposing preferences for acceleration controlled robots. PEARL projects the high dimensional continuous robot state space to a low dimensional preference feature space, resulting in efficient and adaptable planning. We demonstrate that on a dynamic obstacle avoidance robotic task, a single learning on a much simpler problem performs real-time decision-making for significantly larger, high dimensional problems working in unbounded continuous states and actions. 
We trained the agent with 4 static obstacles, while the trained agent avoids up to 900 dynamic obstacles in a highly constrained space. We compare these tasks to traditional, often manually tuned solutions for these high-dimensional problems.", "corpus_id": 34654930}, "pos": {"sha": "b0f16acfa4efce9c24100ec330b82fb8a28feeec", "title": "Reinforcement Learning in Continuous State and Action Spaces", "abstract": "Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learning in such discrete problems can be difficult, due to noise and delayed reinforcements. However, many real-world problems have continuous state or action spaces, which can make learning a good decision policy even more involved. In this chapter we discuss how to automatically find good decision policies in continuous domains. Because analytically computing a good policy from a continuous model can be infeasible, in this chapter we mainly focus on methods that explicitly update a representation of a value function, a policy or both. We discuss considerations in choosing an appropriate representation for these functions and discuss gradient-based and gradient-free ways to update the parameters. We show how to apply these methods to reinforcement-learning problems and discuss many specific algorithms. Amongst others, we cover gradient-based temporal-difference learning, evolutionary strategies, policy-gradient algorithms and (natural) actor-critic methods. We discuss the advantages of different approaches and compare the performance of a state-of-the-art actor-critic method and a state-of-the-art evolutionary strategy empirically.", "corpus_id": 21557823}, "neg": {"sha": "8a7a9672b4981e72d6e9206024c758cc047db8cd", "title": "Evolution strategies \u2013 A comprehensive introduction", "abstract": "This article gives a comprehensive introduction into one of the main branches of evolutionary computation \u2013 the evolution strategies (ES), the history of which dates back to the 1960s in Germany. Starting from a survey of its history, the philosophical background is explained in order to make it understandable why ES are realized in the way they are. Basic ES algorithms and design principles for variation and selection operators as well as theoretical issues are presented, and future branches of ES research are discussed.", "corpus_id": 271331}}, {"query": {"sha": "88d8b8180d91351626982496b61019a76084e103", "title": "Entropy-guided Retinex anisotropic diffusion algorithm based on partial differential equations (PDE) for illumination correction", "abstract": "This report describes the experimental results obtained using a proposed variational Retinex algorithm for controlled illumination correction. Two colour restoration and enhancement schemes of the algorithm are presented for drastically improved results. The algorithm modifies the reflectance image using global and local contrast enhancement approaches and gradually removes the residual illumination to yield highly pleasing results. The proposed algorithms are optimized by way of simultaneous perceptual quality metric (PQM) stabilization and entropy maximization for fully automated processing, solving the problem of determination of stopping time. The usage of the HSI or HSV colour space ensures a unique solution to the optimization problem, unlike in the RGB space where there is none (forcing manual selection of the number of iterations). 
The proposed approach preserves and enhances details in both bright and dark regions of underexposed images in addition to eliminating the colour distortion, over-exposure in bright image regions, halo effect and grey-world violations observed in Retinex-based approaches. Extensive experiments indicate consistent performance as the proposed approach exploits and augments the advantages of PDE-based formulation, performing illumination correction, colour enhancement correction and restoration, contrast enhancement and noise suppression. Comparisons show that the proposed approach surpasses most of the other conventional algorithms found in the literature.", "corpus_id": 24674021}, "pos": {"sha": "54205667c1f65a320f667d73c354ed8e86f1b9d9", "title": "Nonlinear total variation based noise removal algorithms", "abstract": "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t \u2192 \u221e the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.", "corpus_id": 13133466}, "neg": {"sha": "51e65677f839eb72b4e75aa4eb962bd8813f3f62", "title": "Hybrid Connectionist-Symbolic Modules: A Report from the IJCAI-95 Workshop on Connectionist-Symbolic Integration", "abstract": "The need for such models has been growing slowly but steadily over the past five years. Some new, important approaches have been proposed and developed, some of which were presented at the workshop. In sum, the participants felt that it was definitely worthwhile to further pursue research in this area because it might generate important new ideas and significant new applications in the near future. The basic motivations for research in hybrid connectionist-symbolic models need to be articulated and made clear. The Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches was held on 19 to 20 August 1995 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence. The focus of the workshop was on learning and architectures that feature hybrid representations and support hybrid learning. 
The general consensus was that hybrid connectionist-symbolic models constitute a promising avenue to the development of more robust, more powerful, and more versatile architectures for both cognitive modeling and intelligent systems.", "corpus_id": 10477909}}, {"query": {"sha": "a522543b54cdd34c55e9ce222553df7676d1be5a", "title": "Automatically Processing Tweets from Gang-Involved Youth: Towards Detecting Loss and Aggression", "abstract": "Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.", "corpus_id": 2193818}, "pos": {"sha": "aa414188b777b6f42dd9c56114b43a5dfb7420ca", "title": "Using the Revised Dictionary of Affect in Language to quantify the emotional undertones of samples of natural language.", "abstract": "Whissell's Dictionary of Affect in Language, originally designed to quantify the Pleasantness and Activation of specifically emotional words, was revised to increase its applicability to samples of natural language. Word selection for the revision privileged natural language, and the matching rate of the Dictionary, which includes 8,742 words, was increased to 90%. Dictionary scores were available for 9 of every 10 words in most language samples. A third rated dimension (Imagery) was added, and normative scores were obtained for natural English. Evidence supports the reliability and validity of ratings. Two sample applications to very disparate instances of natural language are described. The revised Dictionary, which contains ratings for words characteristic of natural language, is a portable tool that can be applied in almost any situation involving language.", "corpus_id": 25588323}, "neg": {"sha": "06e23bfd4d69fd285c6d39a3d7e36eb40e129316", "title": "Differentiation in MALDI-TOF MS and FTIR spectra between two closely related species Acidovorax oryzae and Acidovorax citrulli", "abstract": "Two important plant pathogenic bacteria Acidovorax oryzae and Acidovorax citrulli are closely related and often not easy to differentiate from each other, which often resulted in a false identification between them based on traditional methods such as carbon source utilization profile, fatty acid methyl esters, and ELISA detection tests. MALDI-TOF MS and Fourier transform infrared (FTIR) spectra have recently been successfully applied in bacterial identification and classification, which provide an alternate method for differentiating the two species. Characterization and comparison of the 10 A. oryzae strains and 10 A. citrulli strains were performed based on traditional bacteriological methods, MALDI-TOF MS, and FTIR spectroscopy. Our results showed that the identity of the two closely related plant pathogenic bacteria A. oryzae and A. citrulli was able to be confirmed by both pathogenicity tests and species-specific PCR, but the two species were difficult to differentiate based on Biolog and FAME profile as well as 16\u2009S rRNA sequence analysis. 
However, there were significant differences in MALDI-TOF MS and FTIR spectra between the two species of Acidovorax. MALDI-TOF MS revealed that 22 and 18 peaks were specific to A. oryzae and A. citrulli, respectively, while FTIR spectra of the two species of Acidovorax have the specific peaks at 1738, 1311, 1128, 1078, 989\u2009cm-1 and at 1337, 968, 933, 916, 786\u2009cm-1, respectively. This study indicated that MALDI-TOF MS and FTIR spectra may give a new strategy for rapid bacterial identification and differentiation of the two closely related species of Acidovorax.", "corpus_id": 840997}}, {"query": {"sha": "3ad5362ce81bfa1a46624b4c6642dcb7e58bd47b", "title": "A Classification Approach for Prediction of Target Events in Temporal Sequences", "abstract": "Learning to predict significant events from sequences of data with categorical features is an important problem in many application areas. We focus on events for system management, and formulate the problem of prediction as a classification problem. We perform co-occurrence analysis of events by means of Singular Value Decomposition (SVD) of the examples constructed from the data. This process is combined with Support Vector Machine (SVM) classification, to obtain efficient and accurate predictions. We conduct an analysis of statistical properties of event data, which explains why SVM classification is suitable for such data, and perform an empirical study using real data.", "corpus_id": 16429490}, "pos": {"sha": "dd37ec08ea8706b9f65607881ae4244071a84990", "title": "Incremental and Decremental Support Vector Machine Learning", "abstract": "An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the Kuhn-Tucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental \"unlearning\" offers an efficient method to exactly evaluate leave-one-out generalization performance. Interpretation of decremental unlearning in feature space sheds light on the relationship between generalization and geometry of the data.", "corpus_id": 2235233}, "neg": {"sha": "6736041d99a017aad25d4551b3f261c9634efede", "title": "A Survey of Frameworks and Game Engines for Serious Game Development", "abstract": "Given the sparsity of standard game engines and frameworks for serious game development, developers of serious games typically rely on entertainment-based game development tools. However, given the large number of game engines and frameworks dedicated to entertainment game development, deciding on which tool to employ may be difficult. A literature review that examined the frameworks and game engines used to develop serious games was recently conducted. Here, a list of the most commonly identified frameworks and game engines and a summary of their features is provided. The results presented provide insight to those seeking tools to develop serious games.", "corpus_id": 23836157}}, {"query": {"sha": "70a50bb6ec1988f0f00700263dcee069cabe9c20", "title": "Automated Anomaly Detection in Distribution Grids Using \u03bcPMU Measurements", "abstract": "The impact of Phasor Measurement Units (PMUs) for providing situational awareness to transmission system operators has been widely documented. 
Micro-PMUs (\u03bcPMUs) are an emerging sensing technology that can provide similar benefits to Distribution System Operators (DSOs), enabling a level of visibility into the distribution grid that was previously unattainable. In order to support the deployment of these high resolution sensors, the automation of data analysis and prioritizing communication to the DSO becomes crucial. In this paper, we explore the use of \u03bcPMUs to detect anomalies on the distribution grid. Our methodology is motivated by growing concern about failures and attacks on distribution automation equipment. The effectiveness of our approach is demonstrated through both real and simulated data.", "corpus_id": 9124827}, "pos": {"sha": "372a9edb48c6894e13bd9946ba50442b9f2f6f2c", "title": "Micro-synchrophasors for distribution systems", "abstract": "This paper describes a research project to develop a network of high-precision phasor measurement units, termed micro-synchrophasors or \u03bcPMUs, and explore the applications of \u03bcPMU data for electric power distribution systems.", "corpus_id": 337990}, "neg": {"sha": "350d1a03dec7314415cdc4a9f4af45cfb3346fc8", "title": "Document clustering by concept factorization", "abstract": "In this paper, we propose a new data clustering method called concept factorization that models each concept as a linear combination of the data points, and each data point as a linear combination of the concepts. With this model, the data clustering task is accomplished by computing the two sets of linear coefficients, and this linear coefficients computation is carried out by finding the non-negative solution that minimizes the reconstruction error of the data points. The cluster label of each data point can be easily derived from the obtained linear coefficients. This method differs from the method of clustering based on non-negative matrix factorization (NMF) in that it can be applied to data containing negative values and the method can be implemented in the kernel space. Our experimental results show that the proposed data clustering method and its variations perform best among 11 algorithms and their variations that we have evaluated on both the TDT2 and Reuters-21578 corpora. In addition to its good performance, the new method also has the merit of easy and reliable derivation of the clustering results.", "corpus_id": 14482286}}, {"query": {"sha": "14865c5f6702eb4ef35b9219f62017b7f00808e0", "title": "Digital Marketing in the Business Environment", "abstract": "Promotion of products has become an increasingly important component in the new digital age, mostly thanks to digital marketing. The traditional form of marketing is lagging behind digital marketing, which offers users new opportunities like personalized messages or answers to a search query. There are several ways to advertise on the internet; in this paper, the ways and tools that enable digital advertising will be presented, along with their advantages and disadvantages. Specifically, search engine optimization, search engine marketing, display advertising, social networking marketing and e-mail marketing will be discussed. 
Also, the goal of the paper is to enable more efficient creation and implementation of similar contents in new business environments through an insight into internet advertising, social and business networks.", "corpus_id": 56094012}, "pos": {"sha": "1de2b30dbe3196efd9665d562de72473af776bb3", "title": "Effects of Internet Display Advertising in the Purchase Funnel : Model-Based Insights from a Randomized Field Experiment", "abstract": "Paul R. Hoban and Randolph E. Bucklin, Vol. LII (June 2015), 375\u2013393, \u00a9 2015, American Marketing Association.", "corpus_id": 11398413}, "neg": {"sha": "189cc86044cd4fe61c97a2b2de3cc60d5c7d1b8d", "title": "Evaluation of Android Dalvik virtual machine", "abstract": "More than half of the smart phones world-wide are currently employing the Android platform, which employs Java for programming its applications. The Android Java is to be executed by the Dalvik virtual machine (VM), which is quite different from the traditional Java VM such as Oracle's HotSpot VM. That is, Dalvik employs register-based bytecode while HotSpot employs stack-based bytecode, requiring a different way of interpretation. Also, Dalvik uses trace-based just-in-time compilation (JITC), while HotSpot uses method-based JITC. Therefore, it is questioned how the Dalvik VM performs compared with the HotSpot VM. Unfortunately, there has been little comparative evaluation of both VMs, so the performance of the Dalvik VM is not well understood. More importantly, it is also not well understood how the performance of the Dalvik VM affects the overall performance of the Android applications (apps). In this paper, we make an attempt to evaluate the Dalvik VM. We install both VMs on the same board and compare the performance using the EEMBC benchmark. Our results show that Dalvik slightly outperforms HotSpot in the interpreter mode due to its register-based bytecode. In the JITC mode, however, Dalvik is slower than HotSpot by more than 2.9 times and its generated code size is not smaller than HotSpot's due to its worse code quality and trace-chaining code. We also investigated how real Android apps are different from Java benchmarks, to understand why the slow Dalvik VM does not affect the performance of the Android apps seriously.", "corpus_id": 36316611}}, {"query": {"sha": "ebbdb8edbff4fc6f9e699bc17d672d78c12a93c0", "title": "User-Generated Content and Perceived Control : A Pilot Study of Empowering Consumer Decision Making", "abstract": "There is growing interest in understanding how User-Generated Content (UGC) empowers online consumer behavior. In this paper, we explore the relationships between Consumer Empowerment and Perceived Control (mediated by Self-Efficacy) over the decision making process using UGC. The results of this study reveal that Perceived Control has an influence on intention to use UGC. 
The findings also suggest that Consumer Empowerment has the capacity to influence Perceived Control, both directly (primarily via Content Empowerment), and indirectly (via Social Empowerment and Process Empowerment, mediated by Self-Efficacy, which in turn influences Perceived Control).", "corpus_id": 39756364}, "pos": {"sha": "1beddcd2cee2a18e6875d0a624541f1c378cdde8", "title": "A study of normative and informational social influences upon individual judgement.", "abstract": "By now, many experimental studies (e.g., 1, 3, 6) have demonstrated that individual psychological processes are subject to social influences. Most investigators, however, have not distinguished among different kinds of social influences; rather, they have carelessly used the term \"group\" influence to characterize the impact of many different kinds of social factors. In fact, a review of the major experiments in this area\u2014e.g., those by Sherif (6), Asch (1), Bovard (3)\u2014would indicate that the subjects (Ss) in these experiments as they made their judgments were not functioning as members of a group in any simple or obvious manner. The S, in the usual experiment in this area, made perceptual judgments in the physical presence of others after hearing their judgments. Typically, the S was not given experimental instructions which made him feel that he was a member of a group faced with a common task requiring cooperative effort for its most effective solution. If \"group\" influences were at work in the foregoing experiments, they were subtly and indirectly created rather than purposefully created by the experimenter.", "corpus_id": 35785090}, "neg": {"sha": "ffba858f03403c2981a073fe2b9ff805ef0ded6f", "title": "Nightingale's environmental theory.", "abstract": "This author extracts the environmental theory from Florence Nightingale's writings and recorded experiences. As Nightingale's experiences broadened to other cultures and circumstances, she generated an ever-widening commitment to redress unjust social policies imperiling human health. She mobilized collaborators, shaped public awareness, and championed the cause of those suffering as a result of unjust policies. Nightingale challenged nurses to create environments where population health is a realistic expectation.", "corpus_id": 28429540}}, {"query": {"sha": "5a068b2ca94d344163c4522efcc22d6f7016d032", "title": "DEVELOPMENT OF CABLE CLIMBING ROBOTIC SYSTEM FOR INSPECTION OF SUSPENSION BRIDGE", "abstract": "In this paper, we propose a wheel-based cable climbing robotic system which can climb up and down the vertical cylindrical cables in suspension bridges. Firstly, we develop a climbing mechanism which includes wheels driven by motors and an adhesion system. In addition, we propose a special design of the adhesion mechanism which can maintain adhesion force even when power is lost. Finally, an additional mechanism is developed for guaranteeing the safety of the robot during operations on cables.", "corpus_id": 12309218}, "pos": {"sha": "38adde4a4f02dfe4b089e9394e34acc092a7af10", "title": "Design and experiments on a new wheel-based cable climbing robot", "abstract": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. 
Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its descent along the cable. For safe landing in case of electrical breakdown, a gas damper with a slider-crank mechanism is introduced to dissipate the energy generated by gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.", "corpus_id": 18481012}, "neg": {"sha": "40b6bad4f9eae6d90fe1c38a25841c45ae82080e", "title": "POS-Tagger for English-Vietnamese Bilingual Corpus", "abstract": "Corpus-based Natural Language Processing (NLP) tasks for such popular languages as English, French, etc. have been well studied with satisfactory achievements. In contrast, corpus-based NLP tasks for unpopular languages (e.g. Vietnamese) are at a deadlock due to the absence of annotated training data for these languages. Furthermore, hand-annotation of even reasonably well-determined features such as part-of-speech (POS) tags has proved to be labor intensive and costly. In this paper, we suggest a solution to partially overcome the annotated resource shortage in Vietnamese by building a POS-tagger for an automatically word-aligned English-Vietnamese parallel Corpus (named EVC). This POS-tagger made use of the Transformation-Based Learning (or TBL) method to bootstrap the POS-annotation results of the English POS-tagger by exploiting the POS-information of the corresponding Vietnamese words via their word alignments in EVC. Then, we directly project POS-annotations from the English side to Vietnamese via available word alignments. This POS-annotated Vietnamese corpus will be manually corrected to become annotated training data for Vietnamese NLP tasks such as POS-tagger, Phrase-Chunker, Parser, Word-Sense Disambiguator, etc.", "corpus_id": 8131267}}, {"query": {"sha": "6938761724b6aacee02d084e623e73dc42a35546", "title": "Data Storage in DNA", "abstract": "Encoding information into synthetic DNA is a novel approach for data storage. Due to its natural robustness and size in molecular dimensions, it can be used for long-term and very high-density archiving of data. Since the DNA molecules can be corrupted by thermal processes and the writing/reading process of DNA molecules can be faulty, it is necessary to encode the data using error-correcting codes. In this thesis, the student analyzes errors that occur in such a storage system and designs coding schemes that can be used for error correction.", "corpus_id": 44372162}, "pos": {"sha": "f19a2323f85a56b0f4c27c79234df291895c42cf", "title": "Towards practical, high-capacity, low-maintenance information storage in synthesized DNA", "abstract": "Digital production, transmission and storage have revolutionized how we access and use information but have also made archiving an increasingly complex task that requires active, continuing maintenance of digital media. This challenge has focused some interest on DNA as an attractive target for information storage because of its capacity for high-density information encoding, longevity under easily achieved conditions and proven track record as an information bearer. 
Previous DNA-based information storage approaches have encoded only trivial amounts of information or were not amenable to scaling-up, and used no robust error-correction and lacked examination of their cost-efficiency for large-scale information archival. Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totalling 739 kilobytes of hard-disk storage and with an estimated Shannon information of 5.2\u2009\u00d7\u200910^6 bits into a DNA code, synthesized this DNA, sequenced it and reconstructed the original files with 100% accuracy. Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade.", "corpus_id": 205232588}, "neg": {"sha": "209863d7248d273e673ff640b50c7ad06aa5fe74", "title": "Beyond Covariance: Feature Representation with Nonlinear Kernel Matrices", "abstract": "Covariance matrix has recently received increasing attention in computer vision by leveraging Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has now been used as a generic representation in various recognition tasks. However, covariance matrix has shortcomings such as being prone to be singular, limited capability in modeling complicated feature relationship, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations shall be explored to achieve better recognition. It proposes an open framework to use the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework significantly elevates covariance representation to the unlimited opportunities provided by this new representation. Experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating the state-of-the-art performance over both the covariance and the existing non-covariance representations.", "corpus_id": 5912354}}, {"query": {"sha": "64a2f8a626b3106cb39ad1b67bce77f6fa87f436", "title": "Computer Vision-Based Quality Inspection System of Transparent Gelatin Capsules in Pharmaceutical Applications", "abstract": "Real-time quality inspection of gelatin capsules in pharmaceutical applications is an important issue from the point of view of industry productivity and competitiveness. Computer vision-based automatic quality inspection is one of the solutions to this problem. Machine vision systems provide quality control and real-time feedback for industrial processes, overcoming physical limitations and subjective judgment of humans. In a computer vision-based system, a digital image obtained by a digital camera would usually be a 24-bit color image. The analysis of an image with that many levels might require complicated image processing techniques. 
But in real-time applications, where a part has to be inspected within a few milliseconds, we have to reduce the image to a more manageable number of gray levels, usually two levels (a binary image), while at the same time retaining all necessary features of the original image. A binary image can be obtained by thresholding the original image into two levels. In this paper, we have developed an image processing system using edge-based image segmentation techniques for quality inspection that satisfies the industrial requirements in pharmaceutical applications to pass the accepted and rejected capsules.", "corpus_id": 15017684}, "pos": {"sha": "5aea896df0724208ea9631099207ec7a437f55e5", "title": "Image processing techniques for quality inspection of gelatin capsules in pharmaceutical applications", "abstract": "Machine vision systems provide quality control and real-time feedback for industrial processes, overcoming physical limitations and subjective judgment of humans. In this paper, image processing techniques for developing a low-cost machine vision system for pharmaceutical capsule inspection are explored. By developing image processing techniques, and using PCs, custom USB 2.0 cameras with minimal hardware, a low-cost flexible system is developed. This paper discusses the two-part gelatin capsule inspection system, comprising a USB 2.0 camera and associated hardware, the PCs to acquire the image data of the capsule, image processing techniques using border tracing and approximating the capsule to a circle to perform inspection, and a custom system controller to pass the accepted and rejected capsules to the appropriate bin.", "corpus_id": 12956584}, "neg": {"sha": "0a08a6f2017717baca895cd5cd78383df97c93d6", "title": "Anonymous credentials light", "abstract": "We define and propose an efficient and provably secure construction of blind signatures with attributes. Prior notions of blind signatures did not yield themselves to the construction of anonymous credential systems, not even if we drop the unlinkability requirement of anonymous credentials. Our new notion in contrast is a convenient building block for anonymous credential systems. The construction we propose is efficient: it requires just a few exponentiations in a prime-order group in which the decisional Diffie-Hellman problem is hard. Thus, for the first time, we give a provably secure construction of anonymous credentials that can work in the elliptic group setting without bilinear pairings and is based on the DDH assumption. In contrast, prior provably secure constructions were based on the RSA group or on groups with pairings, which made them prohibitively inefficient for mobile devices, RFIDs and smartcards. The only prior efficient construction that could work in such elliptic curve groups, due to Brands, does not have a proof of security.", "corpus_id": 15757044}}, {"query": {"sha": "70bd5a3e87147f404cd744c938a6fec121ec5ff3", "title": "Synthesizing Continuous Deployment Practices Used in Software Development", "abstract": "Continuous deployment speeds up the process of existing agile methods, such as Scrum and Extreme Programming (XP), through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. 
A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "corpus_id": 17874236}, "pos": {"sha": "25f287e9014cac14994e7563a0b51cd162aa0f3a", "title": "Extreme programming explained - embrace change", "abstract": "I almost didn\u2019t write this review. Extreme Programming (XP) and the whole agile software development movement are somewhat controversial, especially around Rational where the RUP is the party line. I certainly didn\u2019t want to make a career-limiting move by advocating a software development methodology contrary to the one embraced by Rational! \u263a The perception (at least in some circles) seems to be that the RUP and XP are opposing forces in the ongoing debate over the best way to build software. This isn\u2019t completely true. After reading John Smith\u2019s excellent white paper in the RUP titled \u201cA Comparison of RUP and XP,\u201d I realized the RUP and XP have a lot in common. Still, I wanted to know about XP apart from the context of the RUP, so I thought the best place to start would be reading a book by one of the key contributors to the XP philosophy, Kent Beck.", "corpus_id": 46768313}, "neg": {"sha": "fc6d659b864496c7ea16d7e6a0aa0671ba7a9abb", "title": "Cloud Computing Reference Architecture from Different Vendor\u2019s Perspective", "abstract": "The provision of on-demand access to Cloud computing services and infrastructure is attracting numerous consumers; as a result, migrating from a traditional server-centric network to Cloud computing becomes inevitable to benefit from the technology through overall expense diminution. This growth of Cloud computing service consumers may influence the future data centers and operational models. The issue of inter-cloud operability due to different Cloud computing vendors' Reference Architectures (RA) needs to be addressed to allow consumers to use services from any vendor. In this paper, we present the Cloud computing RA of major vendors available in scientific literature and the RA of the National Institute of Standards and Technology (NIST) by comparing the nature of their RA (role based/layer based) and mapping activities and capabilities to the layer(s) or role(s). 
", "corpus_id": 16955501}}, {"query": {"sha": "8112972b8a6e0c7f9443dbcdfb4ed65c7484f8c2", "title": "Privacy-preserving Machine Learning through Data Obfuscation", "abstract": "As machine learning becomes a practice and commodity, numerous cloud-based services and frameworks are provided to help customers develop and deploy machine learning applications. While it is prevalent to outsource model training and serving tasks in the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties. Past work has shown that a malicious machine learning service provider or end user can easily extract critical information about the training samples, from the model parameters or even just model outputs. In this paper, we propose a novel and generic methodology to preserve the privacy of training data in machine learning applications. Specifically, we introduce an obfuscation function and apply it to the training data before feeding them to the model training task. This function adds random noise to existing samples, or augments the dataset with new samples. By doing so, sensitive information about the properties of individual samples, or statistical properties of a group of samples, is hidden. Meanwhile, the model trained from the obfuscated dataset can still achieve high accuracy. With this approach, the customers can safely disclose the data or models to third-party providers or end users without the need to worry about data privacy. Our experiments show that this approach can effectively defeat four existing types of machine learning privacy attacks at negligible accuracy cost.", "corpus_id": 49574455}, "pos": {"sha": "cbcd9f32b526397f88d18163875d04255e72137f", "title": "Gradient-based learning applied to document recognition", "abstract": null, "corpus_id": 14542261}, "neg": {"sha": "f7c6b1c80c5492f08de6d5d9ac18801bc9be829f", "title": "Clinical characterisation of 29 neurofibromatosis type-1 patients with molecularly ascertained 1.4 Mb type-1 NF1 deletions.", "abstract": "BACKGROUND\nLarge deletions of the NF1 gene region occur in approximately 5% of patients with neurofibromatosis type-1 (NF1) and are associated with particularly severe manifestations of the disease. However, until now, the genotype-phenotype relationship has not been comprehensively studied in patients harbouring large NF1 gene deletions of comparable extent (giving rise to haploinsufficiency of the same genes).\n\n\nMETHOD\nWe have performed the most comprehensive clinical/neuropsychological characterisation so far undertaken in NF1 deletion patients, involving 29 patients with precisely determined type-1 NF1 (1.4 Mb) deletions.\n\n\nRESULTS\nNovel clinical features found to be associated with type-1 NF1 deletions included pes cavus (17% of patients), bone cysts (50%), attention deficit (73%), muscular hypotonia (45%) and speech difficulties (48%). Type-1 NF1 deletions were found to be disproportionately associated with facial dysmorphic features (90% of patients), tall stature (46%), large hands and feet (46%), scoliosis (43%), joint hyperflexibility (72%), delayed cognitive development and/or learning disabilities (93%) and mental retardation (IQ<70; 38%), as compared with the general NF1 patient population. 
Significantly increased frequencies (relative to the general NF1 population) of plexiform neurofibromas (76%), subcutaneous neurofibromas (76%), spinal neurofibromas (64%) and MPNSTs (21%) were also noted in the type-1 deletion patients. Further, 50% of the adult patients exhibited a very high burden of cutaneous neurofibromas (N \u2265 1000).\n\n\nCONCLUSION\nThese findings emphasise the importance of deletion analysis in NF1 since frequent monitoring of tumour presence and growth could potentiate early surgical intervention thereby improving patient survival.", "corpus_id": 5781561}}, {"query": {"sha": "685fc17e76d457db829d55db897e504e8d16a7de", "title": "A comprehensive survey: artificial bee colony (ABC) algorithm and applications", "abstract": "Swarm intelligence (SI) is briefly defined as the collective behaviour of decentralized and self-organized swarms. Well-known examples of these swarms are bird flocks, fish schools and the colonies of social insects such as termites, ants and bees. In the 1990s, two approaches in particular, one based on ant colonies and one on fish schooling/bird flocking, strongly attracted the interest of researchers. Although the self-organization features required by SI are strongly and clearly seen in honey bee colonies, researchers have only recently started to draw on the behaviour of these swarm systems to describe new intelligent approaches, especially from the beginning of the 2000s. Over a decade, several algorithms have been developed depending on different intelligent behaviours of honey bee swarms. Among those, artificial bee colony (ABC) is the one which has been most widely studied and applied to solve real-world problems so far. Day by day the number of researchers interested in the ABC algorithm increases rapidly. This work presents a comprehensive survey of the advances with ABC and its applications. It is hoped that this survey would be very beneficial for researchers studying SI, particularly the ABC algorithm.", "corpus_id": 3330504}, "pos": {"sha": "3c4b0dfbba816bbf5bcfc5e6b625a799dfa97aba", "title": "Cluster based wireless sensor network routings using Artificial Bee Colony Algorithm", "abstract": "In this paper, we propose a novel hierarchical clustering approach for wireless sensor networks to keep the energy depletion of the network to a minimum using the Artificial Bee Colony Algorithm, a new swarm-based heuristic algorithm. We present a protocol using the Artificial Bee Colony Algorithm, which tries to provide optimum cluster organization in order to minimize energy consumption. In cluster based networks, the selection of cluster heads and their members is an essential process which affects energy consumption. Simulation results demonstrate that the proposed approach provides promising solutions for the wireless sensor networks.", "corpus_id": 13944625}, "neg": {"sha": "51b332b1e42beb1e2201fce9da2866d978758e43", "title": "Novel pathogenetic mechanisms and structural adaptations in ischemic mitral regurgitation.", "abstract": "Ischemic mitral regurgitation (MR) is a common complication of myocardial infarction thought to result from leaflet tethering caused by displacement of the papillary muscles that occurs as the left ventricle remodels. The author explores the possibility that left atrial remodeling may also play a role in the pathogenesis of ischemic MR, through a novel mechanism: atriogenic leaflet tethering. 
When ischemic MR is hemodynamically significant, the left ventricle compensates by dilating to preserve forward output using the Starling mechanism. Left ventricular dilatation, however, worsens MR by increasing the mitral valve regurgitant orifice, leading to a vicious cycle in which MR begets more MR. The author proposes that several structural adaptations play a role in reducing ischemic MR. In contrast to the compensatory effects of left ventricular enlargement, these may reduce, rather than increase, its severity. The suggested adaptations involve the mitral valve leaflets, the papillary muscles, the mitral annulus, and the left ventricular false tendons. This review describes the potential role each may play in reducing ischemic MR. Therapies that exploit these adaptations are also discussed.", "corpus_id": 26912468}}, {"query": {"sha": "45216a5f9e61f772f874b7b0caf773451c8ed9f6", "title": "Hierarchical spike coding of sound", "abstract": "Natural sounds exhibit complex statistical regularities at multiple scales. Acoustic events underlying speech, for example, are characterized by precise temporal and frequency relationships, but they can also vary substantially according to the pitch, duration, and other high-level properties of speech production. Learning this structure from data while capturing the inherent variability is an important first step in building auditory processing systems, as well as understanding the mechanisms of auditory perception. Here we develop Hierarchical Spike Coding, a two-layer probabilistic generative model for complex acoustic structure. The first layer consists of a sparse spiking representation that encodes the sound using kernels positioned precisely in time and frequency. Patterns in the positions of first layer spikes are learned from the data: on a coarse scale, statistical regularities are encoded by a second-layer spiking representation, while fine-scale structure is captured by recurrent interactions within the first layer. When fit to speech data, the second layer acoustic features include harmonic stacks, sweeps, frequency modulations, and precise temporal onsets, which can be composed to represent complex acoustic events. Unlike spectrogram-based methods, the model gives a probability distribution over sound pressure waveforms. This allows us to use the second-layer representation to synthesize sounds directly, and to perform model-based denoising, on which we demonstrate a significant improvement over standard methods.", "corpus_id": 10947073}, "pos": {"sha": "0b6e98a6a8cf8283fd76fe1100b23f11f4cfa711", "title": "Matching pursuits with time-frequency dictionaries", "abstract": "We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. 
We compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser.", "corpus_id": 14427335}, "neg": {"sha": "589b8659007e1124f765a5d1bd940b2bf4d79054", "title": "Projection Pursuit Regression", "abstract": null, "corpus_id": 14183758}}, {"query": {"sha": "d965869b951722ab1ab056238e83328fd073503e", "title": "An Attention-Gated Convolutional Neural Network for Sentence Classification", "abstract": "The classification task of sentences is very challenging because of the limited contextual information that sentences contain. In this paper, we propose an Attention Gated Convolutional Neural Network (AGCNN) for sentence classification, which generates attention weights from the feature\u2019s context windows of different sizes by using specialized convolution encoders, to enhance the influence of critical features in predicting the sentence\u2019s category. Experimental results demonstrate that our model could achieve an accuracy improvement of up to 3.1% (compared with standard CNN models), and gain competitive results over the strong baseline methods on four out of the six tasks. Besides, we propose an activation function named Natural Logarithm rescaled Rectified Linear Unit (NLReLU). Experimental results show that NLReLU could outperform ReLU and perform comparably to other well-known activation functions on AGCNN.", "corpus_id": 52114638}, "pos": {"sha": "0a7fb47217e6d0e3b80159bc4f9e02a50ea1f391", "title": "Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales", "abstract": "We address the rating-inference problem, wherein rather than simply decide whether a review is \u201cthumbs up\u201d or \u201cthumbs down\u201d, as in previous sentiment analysis work, one must determine an author\u2019s evaluation with respect to a multi-point scale (e.g., one to five \u201cstars\u201d). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, \u201cthree stars\u201d is intuitively closer to \u201cfour stars\u201d than to \u201cone star\u201d. We first evaluate human performance at the task. Then, we apply a meta-algorithm, based on a metric labeling formulation of the problem, that alters a given n-ary classifier\u2019s output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem. 
", "corpus_id": 3264224}, "neg": {"sha": "1e3c0c0320c3f2f9ebbe3c7341071ececbb8821d", "title": "Conneconomics: The Economics of Dense, Large-Scale, High-Resolution Neural Connectomics", "abstract": "We analyze the scaling and cost-performance characteristics of current and projected connectomics approaches, with reference to the potential implications of recent advances in diverse contributing fields. Three generalized strategies for dense connectivity mapping at the scale of whole mammalian brains are considered: electron microscopic axon tracing, optical imaging of combinatorial molecular markers at synapses, and bulk DNA sequencing of trans-synaptically exchanged nucleic acid barcode pairs. Due to advances in parallel-beam instrumentation, whole mouse brain electron microscopic image acquisition could cost less than $100 million, with total costs presently limited by image analysis to trace axons through large image stacks. It is difficult to estimate the overall cost-performance of electron microscopic approaches because image analysis costs could fall dramatically with algorithmic improvements or large-scale crowd-sourcing. Optical microscopy at 50\u2013100 nm isotropic resolution could potentially read combinatorially multiplexed molecular information from individual synapses, which could indicate the identities of the pre-synaptic and post-synaptic cells without relying on axon tracing. An optical approach to whole mouse brain connectomics may therefore be achievable for less than $10 million and could be enabled by emerging technologies to sequence nucleic acids in-situ in fixed tissue via fluorescent microscopy. Strategies relying on bulk DNA sequencing, which would extract the connectome without direct imaging of the tissue, could produce a whole mouse brain connectome for $100k \u2013 $1 million or a mouse cortical connectome for $10k \u2013 $100k. Anticipated further reductions in the cost of DNA sequencing could lead to a $1000 mouse cortical connectome.", "corpus_id": 2568690}}, {"query": {"sha": "1f7111ec128ad8af204a4c667d69b9dac0268180", "title": "Voyager 2: Augmenting Visual Analysis with Partial View Specifications", "abstract": "Visual data analysis involves both open-ended and focused exploration. Manual chart specification tools support question answering, but are often tedious for early-stage exploration where systematic data coverage is needed. Visualization recommenders can encourage broad coverage, but irrelevant suggestions may distract users once they commit to specific questions. We present Voyager 2, a mixed-initiative system that blends manual and automated chart specification to help analysts engage in both open-ended exploration and targeted question answering. We contribute two partial specification interfaces: wildcards let users specify multiple charts in parallel, while related views suggest visualizations relevant to the currently specified chart. We present our interface design and applications of the CompassQL visualization query language to enable these interfaces.
In a controlled study we find that Voyager 2 leads to increased data field coverage compared to a traditional specification tool, while still allowing analysts to flexibly drill down and answer specific questions.", "corpus_id": 14999239}, "pos": {"sha": "2054d00fa178e8031e37ae3fdc8a60f20eca7cfd", "title": "VizDeck: Streamlining exploratory visual analytics of scientific data", "abstract": "As research becomes increasingly data-intensive, scientists are relying on visualization very early in the data analysis cycle. We find that existing tools assume a \u201cone-at-a-time\u201d workflow for creating visualizations and impose a steep learning curve that makes it difficult to rapidly create and review visualizations. At the same time, scientists are becoming more cognitively overloaded, spending an increasing proportion of time on data \u201chandling\u201d tasks rather than scientific analysis. In response, we present VizDeck, a web-based visual analytics tool for relational data that automatically recommends a set of appropriate visualizations based on the statistical properties of the data and adopts a card game metaphor to present the results to the user. We describe the design of VizDeck and discuss the results of a usability evaluation comparing VizDeck with three other popular visualization tools. We then discuss design considerations for visualization tools focused on rapid analysis based on observed sensemaking processes.", "corpus_id": 7086308}, "neg": {"sha": "29bf68560629c1a815209cb72c0c790ec6edad15", "title": "The cost structure of sensemaking", "abstract": "Making sense of a body of data is a common activity in any kind of analysis. Sensemaking is the process of searching for a representation and encoding data in that representation to answer task-specific questions. Different operations during sensemaking require different cognitive and external resources. Representations are chosen and changed to reduce the cost of operations in an information processing task. The power of these representational shifts is generally under-appreciated as is the relation between sensemaking and information retrieval.\nWe analyze sensemaking tasks and develop a model of the cost structure of sensemaking. We discuss implications for the integrated design of user interfaces, representational tools, and information retrieval systems.", "corpus_id": 207177544}}, {"query": {"sha": "48b42c1af79bdba418fa6b50e674d186eea8af01", "title": "Network security analysis SCADA system automation on industrial process", "abstract": "Supervisory Control and Data Acquisition (SCADA) is a control system that has been used in various industries around the world for process automation. This system mirrors the real infrastructure and provides ease of operation and monitoring, but it is vulnerable in the security aspects of the data communications between SCADA support devices. This can have a major impact on industry and the economy. This research was conducted by designing and building a SCADA infrastructure and analyzing vulnerability threats to SCADA network security. The SCADA network is penetrated using Kali Linux and its data traffic is analyzed using Wireshark. The Wireshark analysis revealed an attacker with the user Anonymous. The analysis was performed under both normal and abnormal data traffic conditions. The result of this research is a penetration test of the SCADA network using Kali Linux, which is used to attack and congest the data traffic between the Programmable Logic Controller (PLC) and the Human Machine Interface (HMI). Under this penetration testing, the SCADA system went down due to the dense data traffic on the network, indicating that SCADA networks are vulnerable to malware threats and attacks. The study concludes with recommendations and a network security strategy for SCADA systems.", "corpus_id": 29829641}, "pos": {"sha": "c8a6706546ec0a010659147668c078dfc50be038", "title": "PLC Forensics Based on Control Program Logic Change Detection", "abstract": "Supervisory Control and Data Acquisition (SCADA) system is an industrial control automated system. It is built with multiple Programmable Logic Controllers (PLCs). PLC is a special form of microprocessor-based controller with a proprietary operating system. Due to the unique architecture of PLCs, traditional digital forensic tools are difficult to apply. In this paper, we propose a program called Control Program Logic Change Detector (CPLCD), which works with a set of Detection Rules (DRs) to detect and record undesired incidents interfering with the normal operations of a PLC. In order to prove the feasibility of our solution, we set up two experiments for detecting two common PLC attacks. Moreover, we illustrate how CPLCD and the network analyzer Wireshark could work together for performing digital forensic investigation on a PLC.", "corpus_id": 4112762}, "neg": {"sha": "bee609ea6e71aba9b449731242efdb136d556222", "title": "Multi-Target Tracking in Multiple Non-Overlapping Cameras using Constrained Dominant Sets", "abstract": "In this paper, a unified three-layer hierarchical approach for solving tracking problems in multiple non-overlapping cameras is proposed. Given a video and a set of detections (obtained by any person detector), we first solve within-camera tracking employing the first two layers of our framework and, then, in the third layer, we solve across-camera tracking by merging tracks of the same person in all cameras in a simultaneous fashion. To best serve our purpose, a constrained dominant sets clustering (CDSC) technique, a parametrized version of standard quadratic optimization, is employed to solve both tracking tasks. The tracking problem is cast as finding constrained dominant sets from a graph. That is, given a constraint set and a graph, CDSC generates a cluster (or clique), which forms a compact and coherent set that contains a subset of the constraint set. The approach is based on a parametrized family of quadratic programs that generalizes the standard quadratic optimization problem. In addition to having a unified framework that simultaneously solves within- and across-camera tracking, the third layer helps link broken tracks of the same person occurring during within-camera tracking. A standard algorithm to extract a constrained dominant set from a graph is given by the so-called replicator dynamics, whose computational complexity is quadratic per step, which makes it handicapped for large-scale applications. In this work, we propose a fast algorithm, based on dynamics from evolutionary game theory, which is efficient and scalable to large-scale real-world applications. We have tested this approach on a very large and challenging dataset (namely, MOTchallenge DukeMTMC) and show that the proposed framework outperforms the current state of the art.
Even though the main focus of this paper is on multi-target tracking in non-overlapping cameras, the proposed approach can also be applied to solve the re-identification problem. Towards that end, we have also performed experiments on MARS, one of the largest and most challenging video-based person re-identification datasets, and have obtained excellent results. These experiments demonstrate the general applicability of the proposed framework for non-overlapping across-camera tracking and person re-identification tasks.", "corpus_id": 1415165}}, {"query": {"sha": "1d021bae2e694f33d514f6aa7db82443e52cdc85", "title": "Diminishable visual markers on fabricated projection object for dynamic spatial augmented reality", "abstract": "Spatial augmented reality (SAR) is a projection technology that adds optical illusions onto static objects. Generally, in SAR, images are projected on complex everyday surfaces other than a flat projection screen. Thus, geometric correction of images is essential. Many studies have examined geometric correction of projection images on nonplanar surfaces. If a projection surface shape is known, it is possible to correct images geometrically by calibrating intrinsic parameters of the projector and extrinsic parameters (position and pose relationships) between the projector and surfaces. However, it is difficult to directly apply previous geometric correction methods to dynamically moving surfaces, because these methods generally assumed only static surfaces.", "corpus_id": 14528777}, "pos": {"sha": "cb0ce8399d14fd3317294e9a7944809f8790ff33", "title": "Projected augmentation - augmented reality using rotatable video projectors", "abstract": "In this paper, we propose a new way of augmenting our environment with information without making the user carry any devices. We propose the use of video projection to display the augmentation on the objects directly. We use a projector that can be rotated and in other ways controlled remotely by a computer, to follow objects carrying a marker. The main contribution of this paper is a system that keeps the augmentation displayed in the correct place while the object or the projector moves. We describe the hardware and software design of our system, the way certain functions such as following the marker or keeping it in focus are implemented, and how to calibrate the multitude of parameters of all the subsystems.", "corpus_id": 27981421}, "neg": {"sha": "369fed886a50abb9d173dad807bb3002af23d0c0", "title": "FAST Approaches to Scalable Similarity-Based Test Case Prioritization", "abstract": "Many test case prioritization criteria have been proposed for speeding up fault detection. Among them, similarity-based approaches give priority to the test cases that are the most dissimilar from those already selected. However, the proposed criteria do not scale up to the test suites of many thousands or even millions of test cases found in modern industrial systems, and simple heuristics are used instead. We introduce the FAST family of test case prioritization techniques that radically changes this landscape by borrowing algorithms commonly exploited in the big data domain to find similar items. FAST techniques provide scalable similarity-based test case prioritization in both white-box and black-box fashion.
The results from experimentation on real-world C and Java subjects show that the fastest members of the family outperform other black-box approaches in efficiency with no significant impact on effectiveness, and also outperform white-box approaches, including greedy ones, if preparation time is not counted. A simulation study of scalability shows that one FAST technique can prioritize a million test cases in less than 20 minutes.", "corpus_id": 49671633}}, {"query": {"sha": "0f4eec7673e6e030ee91c92e91e751d1870d79b4", "title": "Sample Evaluation for Action Selection in Monte Carlo Tree Search", "abstract": "Building sophisticated computer players for games has been of interest since the advent of artificial intelligence research. Monte Carlo tree search (MCTS) techniques have led to recent advances in the performance of computer players in a variety of games. Without any refinements, the commonly-used upper confidence bounds applied to trees (UCT) selection policy for MCTS performs poorly on games with high branching factors, because an inordinate amount of time is spent performing simulations from each sibling of a node before that node can be further investigated. Move-ordering heuristics are usually proposed to address this issue, but when the branching factor is large, it can be costly to order candidate actions. We propose a technique combining sampling from the action space with a na\u00efve evaluation function for identifying nodes to add to the tree when using MCTS in cases where the branching factor is large. The approach is evaluated on a restricted version of the board game Risk with promising results.", "corpus_id": 14142312}, "pos": {"sha": "0eafee24cb65ce4f95a1392ba1398547111a2188", "title": "PROGRESSIVE STRATEGIES FOR MONTE-CARLO TREE SEARCH", "abstract": "Two-person zero-sum games with perfect information have been addressed by many AI researchers with great success for fifty years [van den Herik et al. (2002)]. The classical approach is to use the alpha-beta framework combined with a dedicated static evaluation function. This evaluation function is applied to the leaf nodes of a tree. If the node represents a terminal position (or a databased position) it produces an exact value. Otherwise heuristic knowledge is used to estimate the value of the leaf node. This technique led to excellent results in many games (e.g., Chess and Checkers). However, building an evaluation function based on heuristic knowledge for a non-terminal position is a difficult and time-consuming issue in several games; the most notorious example is the game of Go [Bouzy and Cazenave", "corpus_id": 1719063}, "neg": {"sha": "5acbb3f169bc13a0e6b3848adabf856c20edf9c2", "title": "World-championship-caliber Scrabble", "abstract": "Computer Scrabble programs have achieved a level of performance that exceeds that of the strongest human players. MAVEN was the first program to demonstrate this against human opposition. Scrabble is a game of imperfect information with a large branching factor. The techniques successfully applied in two-player games such as chess do not work here. MAVEN combines a selective move generator, simulations of likely game scenarios, and the B\u2217 algorithm to produce a world-championship-caliber Scrabble-playing program.
", "corpus_id": 2850073}}, {"query": {"sha": "67c3812cafa2ebf1910c2cf7e0518c71be743514", "title": "Multi-view Domain Generalization for Visual Recognition", "abstract": "In this paper, we propose a new multi-view domain generalization (MVDG) approach for visual recognition, in which we aim to use the source domain samples with multiple types of features (i.e., multi-view features) to learn robust classifiers that can generalize well to any unseen target domain. Considering the recent works show the domain generalization capability can be enhanced by fusing multiple SVM classifiers, we build upon exemplar SVMs to learn a set of SVM classifiers by using one positive sample and all negative samples in the source domain each time. When the source domain samples come from multiple latent domains, we expect the weight vectors of exemplar SVM classifiers can be organized into multiple hidden clusters. To exploit such cluster structure, we organize the weight vectors learnt on each view as a weight matrix and seek the low-rank representation by reconstructing this weight matrix using itself as the dictionary. To enforce the consistency of inherent cluster structures discovered from the weight matrices learnt on different views, we introduce a new regularizer to minimize the mismatch between any two representation matrices on different views. We also develop an efficient alternating optimization algorithm and further extend our MVDG approach for domain adaptation by exploiting the manifold structure of unlabeled target domain samples. Comprehensive experiments for visual recognition clearly demonstrate the effectiveness of our approaches for domain generalization and domain adaptation.", "corpus_id": 4613549}, "pos": {"sha": "df95629f0ec384f445a3c7e3272defe0d4be735b", "title": "Exploiting Low-Rank Structure from Latent Domains for Domain Generalization", "abstract": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.", "corpus_id": 13777916}, "neg": {"sha": "3732e4ca8c7419c7c45df28b9a272ad9899c0db6", "title": "Automated skin lesion segmentation via image-wise supervised learning and multi-scale superpixel based cellular automata", "abstract": "Segmentation of skin lesions is considered an important step in computer aided diagnosis (CAD) for automated melanoma diagnosis. Existing methods, however, have problems with over- or under-segmentation and do not perform well when a lesion is partially connected to the background or when the image contrast is low.
To overcome these limitations, we propose a new automated skin lesion segmentation method via image-wise supervised learning (ISL) and multi-scale superpixel based cellular automata (MSCA). We propose using ISL to derive a probabilistic map for automated seed selection, which removes the reliance on user-defined seeds as in conventional methods. The probabilistic map is then further used with the MSCA model for skin lesion segmentation. This map enables the inclusion of additional structural information and, when compared to a single-scale pixel-based CA model, it has a higher capacity to segment skin lesions of various sizes and contrasts. We evaluated our method on two public skin lesion datasets and showed that it was more accurate and robust when compared to the state-of-the-art skin lesion segmentation methods.", "corpus_id": 206951584}}, {"query": {"sha": "9096ac4c40191263e888d745f92a6530a17b7134", "title": "Automatic Extraction of Cause-Effect Relations in Natural Language Text", "abstract": "The discovery of causal relations from text has been studied with various approaches based on rules or Machine Learning (ML) techniques. The proposed approach joins rules and ML methods to combine the advantages of each. In particular, our approach first identifies a set of plausible cause-effect pairs through a set of logical rules based on dependencies between words, and then uses Bayesian inference to reduce the number of pairs produced by ambiguous patterns. The SemEval-2010 task 8 dataset challenge has been used to evaluate our model. The results demonstrate the ability of the rules to extract relations and the improvements made by the filtering process.", "corpus_id": 12735697}, "pos": {"sha": "1a5fce3d8746885251bd412ad137273ec771a314", "title": "Learning to Predict from Textual Data", "abstract": "Given a current news event, we tackle the problem of generating plausible predictions of future events it might cause. We present a new methodology for modeling and predicting such future news events using machine learning and data mining techniques. Our Pundit algorithm generalizes examples of causality pairs to infer a causality predictor. To obtain precisely labeled causality examples, we mine 150 years of news articles and apply semantic natural language modeling techniques to headlines containing certain predefined causality patterns. For generalization, the model uses a vast number of world knowledge ontologies. Empirical evaluation on real news articles shows that our Pundit algorithm performs as well as non-expert humans.", "corpus_id": 17220503}, "neg": {"sha": "9865636a6f9ae844dafadc8611c06be10c32abc9", "title": "Autonomous active recognition and unfolding of clothes using random decision forests and probabilistic planning", "abstract": "We present a novel approach to the problem of autonomously recognizing and unfolding articles of clothing using a dual manipulator. The problem consists of grasping an article from a random point, recognizing it and then bringing it into an unfolded state. We propose a data-driven method for clothes recognition from depth images using Random Decision Forests. We also propose a method for unfolding an article of clothing after estimating and grasping two key-points, using Hough forests. Both methods are implemented into a POMDP framework allowing the robot to interact optimally with the garments, taking into account uncertainty in the recognition and point estimation process.
This active recognition and unfolding makes our system very robust to noisy observations. Our methods were tested on regular-sized clothes using a dual-arm manipulator and an Xtion depth sensor. We achieved 100% accuracy in active recognition and a 93.3% unfolding success rate, while our system operates faster compared to the state of the art.", "corpus_id": 4643910}}, {"query": {"sha": "20286dfd510e6cb5c374384f415aab12b4b87132", "title": "Customer churn prediction for an insurance company", "abstract": "Dutch health insurance company CZ operates in a highly competitive and dynamic environment, dealing with over three million customers and a large, multi-aspect data structure. Because customer acquisition is considerably more expensive than customer retention, timely prediction of churning customers is highly beneficial. In this work, prediction of customer churn from objective variables at CZ is systematically investigated using data mining techniques. To identify important churning variables and characteristics, experts within the company were interviewed, while the literature was screened and analysed. Additionally, four promising data mining techniques for prediction modeling were identified, i.e. logistic regression, decision tree, neural networks and support vector machine. Data sets from 2013 were cleaned, corrected for imbalanced data and subjected to prediction models using the data mining software KNIME. It was found that age, the number of times a customer is insured at CZ and the total health consumption are the most important characteristics for identifying churners. After performance evaluation, logistic regression with a 50:50 (non-churn:churn) training set and neural networks with a 70:30 (non-churn:churn) distribution performed best. In the ideal case, 50% of the churners can be reached when only 20% of the population is contacted, while cost-benefit analysis indicated a balance between the costs of contacting these customers and the benefits of the resulting customer retention. The models were robust and could be applied on data sets from other years with similar results. Finally, homogeneous profiles were created using K-means clustering to reduce noise and increase the prediction power of the models. Promising results were obtained using four profiles, but a more thorough investigation on model performance still needs to be conducted. Using this data mining approach, we show that the predicted results can have direct implications for the marketing department of CZ, while the models are expected to be readily applicable in other environments.", "corpus_id": 51807872}, "pos": {"sha": "916ceefae4b11dadc3ee754ce590381c568c90de", "title": "A direct adaptive method for faster backpropagation learning: the RPROP algorithm", "abstract": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the error function. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process.
The promising capabilities of RPROP are shown in comparison to other well-known adaptive techniques.", "corpus_id": 16848428}, "neg": {"sha": "b0c512fcfb7bd6c500429cbda963e28850f2e948", "title": "A Fast and Accurate Unconstrained Face Detector", "abstract": "We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.", "corpus_id": 10867640}}, {"query": {"sha": "c76737f624697b379f78759c1c778b7007422a14", "title": "Software-Defined Mobile Networks Security", "abstract": "The future 5G wireless is driven by the higher demand for wireless capacity. With Software Defined Network (SDN), the data layer can be separated from the control layer. The development of relevant studies about Network Function Virtualization (NFV) and cloud computing has the potential of offering quicker and more reliable network access for growing data traffic. Under such circumstances, Software Defined Mobile Network (SDMN) is presented as a promising solution for meeting the wireless data demands. This paper provides a survey of SDMN and its related security problems. As SDMN integrates cloud computing, SDN, and NFV, and works on improving network functions, performance, flexibility, energy efficiency, and scalability, it is an important component of the next generation telecommunication networks. However, the SDMN concept also raises new security concerns. We explore relevant security threats and their corresponding countermeasures with respect to the data layer, control layer, application layer, and communication protocols. We also adopt the STRIDE method to classify various security threats to better reveal them in the context of SDMN.
This survey is concluded with a list of open security challenges in SDMN.", "corpus_id": 8141341}, "pos": {"sha": "31edc7932b5b6f5495a87c7b439bd3d4f31f6080", "title": "SDN and NFV integration in generalized mobile network architecture", "abstract": "The main driver for the mobile core network evolution is to serve future challenges and pave the way to 5G networks, with their need for high capacity and low latency. Different technologies such as Network Functions Virtualization (NFV) and Software Defined Networking (SDN) are being considered to address the future needs of 5G networks. However, future applications such as Internet of Things (IoT), video services and others yet to be unveiled will have different requirements, which emphasize the need for the dynamic scalability of the network functionality. The means for efficient network resource operability seem to be even more important than the future network element costs. This paper provides the analysis of different technologies such as SDN and NFV that offer different architectural options to address the needs of 5G networks. The options under consideration in this paper differ mainly in the extent to which SDN principles are applied to mobile-specific functions or to transport network functions only.", "corpus_id": 2453962}, "neg": {"sha": "d90193d2be26a9bf4b187763ee620dd4100d406a", "title": "Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations", "abstract": "Using a dictionary to map independently trained word embeddings to a shared space has been shown to be an effective approach to learn bilingual word embeddings. In this work, we propose a multi-step framework of linear transformations that generalizes a substantial body of previous work. The core step of the framework is an orthogonal transformation, and existing methods can be explained in terms of the additional normalization, whitening, re-weighting, de-whitening and dimensionality reduction steps. This allows us to gain new insights into the behavior of existing methods, including the effectiveness of inverse regression, and design a novel variant that obtains the best published results in zero-shot bilingual lexicon extraction. The corresponding software is released as an open source project.", "corpus_id": 4334731}}, {"query": {"sha": "b3676e959149e194e45f9f41615f085aed722a66", "title": "2D and 3D Face Recognition using Close-Range RGB-D Camera", "abstract": "With depth information of the object (face) surface available along with the RGB image, face recognition algorithms can be made more efficient and robust compared to conventional algorithms that employ only the RGB image. Non-uniform illumination and pose deviation are major obstacles to an ideal face recognition system. In this paper, we used a close-range depth camera to develop a face recognition system that can provide a high recognition rate even on low-resolution images. To the best of our knowledge, there is no experimental study of a face recognition system with a close-range RGB-D camera (Creative Labs 3D RealSense) in the literature so far. We analysed different feature representations such as PCA, LDA, SIFT, Gabor, LBP and HOG for RGB and depth images. Experiments were performed on an in-house RGB-D dataset captured by the Creative Labs 3D camera with 20 individuals. The experimental results show that the depth image can also be used for recognition. It is also found that the fusion of RGB and depth images improves the recognition rate. HOG and LBP feature descriptors are superior to the other feature subspaces. With HOG as a feature descriptor, the face recognition rate for the frontal image dataset under proper illumination reached 96.21%, while for pose-deviated samples it is 75.12%.", "corpus_id": 221208300}, "pos": {"sha": "36c2cafdf3a0d39931e3ee46d7665270fca42350", "title": "Robust motion detection using histogram of oriented gradients for illumination variations", "abstract": "This paper proposes a robust motion detection method for illumination variations which uses histogram of oriented gradients. The detection process is divided into two phases: coarse detection and refinement. In the coarse detection phase, first, a texture-based background model is built which implements a group of adaptive histograms of oriented gradients; then, by comparing the histogram of oriented gradients of each pixel between current frame and background model, a foreground is segmented based on texture feature which is not susceptible to illumination variations, whereas some missing foreground regions exist; finally, the result based on texture is optimized by combining the pixel-wise detection result produced by Gaussian Mixture Model (GMM) algorithm, which greatly improves the detection performance by incorporating efficient morphological operations. In the refinement phase, the above detection result is refined based on the distinction in color feature to eliminate errors like shadows, noises, redundant contour, etc. Experimental results show the effectiveness and robustness of our approach in detecting moving objects in varying illumination conditions.", "corpus_id": 17721802}, "neg": {"sha": "08639cd6b89ac8f375cdc1076b9485ac9d657083", "title": "Multi-Core, Main-Memory Joins: Sort vs. Hash Revisited", "abstract": "In this paper we experimentally study the performance of main-memory, parallel, multi-core join algorithms, focusing on sort-merge and (radix-)hash join. The relative performance of these two join approaches has been a topic of discussion for a long time. With the advent of modern multicore architectures, it has been argued that sort-merge join is now a better choice than radix-hash join. This claim is justified based on the width of SIMD instructions (sort-merge outperforms radix-hash join once SIMD is sufficiently wide), and NUMA awareness (sort-merge is superior to hash join in NUMA architectures). We conduct extensive experiments on the original and optimized versions of these algorithms. The experiments show that, contrary to these claims, radix-hash join is still clearly superior, and sort-merge approaches the performance of radix-hash only when very large amounts of data are involved. The paper also provides the fastest implementations of these algorithms, and covers many aspects of modern hardware architectures relevant not only for joins but for any parallel data processing operator.", "corpus_id": 5398477}}, {"query": {"sha": "b172d56803adc0ada9cbb2cb6a32ee22fc6cdc9d", "title": "A Comparison of Task Parallel Frameworks based on Implicit Dependencies in Multi-core Environments", "abstract": "The larger flexibility that task parallelism offers with respect to data parallelism comes at the cost of a higher complexity due to the variety of tasks and the arbitrary patterns of dependences that they can exhibit. These dependencies should be expressed not only correctly, but optimally, i.e.
avoiding over-constraints, in order to obtain the maximum performance from the underlying hardware. There have been many proposals to facilitate this non-trivial task, particularly within the scope of today's ubiquitous multi-core architectures. A very interesting family of solutions, because of their large scope of application, ease of use and potential performance, is the one in which the user declares the dependences of each task and lets the parallel programming framework figure out which concrete dependences appear at runtime, scheduling the parallel tasks accordingly. Nevertheless, as far as we know, there are no comparative studies of them that help users identify their relative advantages. In this paper we describe and evaluate four tools of this class, discussing the strengths and weaknesses we have found in their use.", "corpus_id": 19626354}, "pos": {"sha": "3340775f557ecad8b9f33dfae41f10dec96a3315", "title": "Jade: a high-level, machine-independent language for parallel programming", "abstract": "Jade, a high-level parallel programming language for managing coarse-grained parallelism, is discussed. Jade simplifies programming by providing sequential-execution and shared-address-space abstractions. It is also platform-independent; the same Jade program runs on uniprocessors, multiprocessors, and heterogeneous networks of machines. An example that illustrates how Jade programmers express irregular, dynamically determined concurrency and how the implementation exploits this source of concurrency is presented. A digital video imaging program that runs on a high-resolution video system and several other examples of Jade applications are described.", "corpus_id": 8604956}, "neg": {"sha": "43241fb8d231c9b44eacd2a3d33e5b4fea99332e", "title": "Tunable SIW bandpass filters with PIN diodes", "abstract": "This paper introduces a novel tunable SIW filter implemented using PIN diode switching elements. The two-pole filter provides six states ranging from 1.55 GHz to 2.0 GHz (25% tuning). Fractional bandwidth ranges from 2.3% \u2013 3.0% with insertion loss less than 5.4 dB and return loss greater than 14 dB over the entire tuning range. Each SIW cavity is tuned by perturbing via posts connecting or disconnecting to/from the cavity's top metal layer. In order to separate the biasing network from the SIW filter, a three-layer PCB is fabricated using Rogers RT/duroid substrates.", "corpus_id": 12317232}}, {"query": {"sha": "490a024bc918713dacb4bd3f036897745278f973", "title": "RFID-Cloud smart cart system", "abstract": "The main purpose of this work is to reduce the queuing delays in major supermarkets and other shopping centers by means of an Electronic Smart Cart System, which introduces an intelligent approach to the billing process through RFID technology. The Smart Cart System is the cooperative operation of three separate systems: a website developed for the shopping market, the electronic smart cart device and anti-theft RFID gates. This project focuses on developing the electronic smart cart device itself. It involves embedded electronic hardware that consists of an OLED display, an Arduino Mega 2560 board, a specifically designed PCB, a Wi-Fi module, a 13.56 MHz HF RFID reader, a power supply and a shopping cart.", "corpus_id": 2122898}, "pos": {"sha": "3bd900ff258023e3fddbcb8d4c4c931087ee1db8", "title": "The working principle of an Arduino", "abstract": "In this paper, we analyze the working principle of an Arduino. These days many people try to use the Arduino because it makes things easier, thanks to the simplified version of C++ and the ready-made Arduino microcontroller (ATmega328 microcontroller [1]) that you can program, erase and reprogram at any given time. We discuss the hardware components used in the Arduino board, the software used to program it, a guide on how to write and construct your own projects, and a couple of examples of Arduino projects. This gives an overall view of the Arduino Uno, so that after reading this paper you will have a basic grasp of the concept and use of an Arduino Uno.", "corpus_id": 9338390}, "neg": {"sha": "08d354d27463922ac5ae02f6f93f7eec98c40dd8", "title": "Complexity of and Algorithms for Borda Manipulation", "abstract": "We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.", "corpus_id": 12093099}}, {"query": {"sha": "0efc78ad3e33a13fb332eb4175d0c581d3ba5448", "title": "Planar building facade segmentation and mapping using appearance and geometric constraints", "abstract": "Segmentation and mapping of planar building facades (PBFs) can increase a robot's ability of scene understanding and localization in urban environments which are often quasi-rectilinear and GPS-challenged. PBFs are basic components of the quasi-rectilinear environment. We propose a passive vision-based PBF segmentation and mapping algorithm by combining both appearance and geometric constraints. We propose a rectilinear index which allows us to segment out planar regions using appearance data. Then we combine geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints in an optimization process to improve the mapping of PBFs. We have implemented the algorithm and tested it in comparison with state-of-the-art. The results show that our method can reduce the angular error of scene structure by an average of 82.82%.", "corpus_id": 14735357}, "pos": {"sha": "22f7fe1ea5e983aee091e75ae13be1e832222c51", "title": "A two-view based multilayer feature graph for robot navigation", "abstract": "To facilitate scene understanding and robot navigation in a modern urban area, we design a multilayer feature graph (MFG) based on two views from an on-board camera.
The nodes of an MFG are features such as scale-invariant feature transform (SIFT) feature points, line segments, lines, and planes, while edges of the MFG represent different geometric relationships such as adjacency, parallelism, collinearity, and coplanarity. MFG also connects the features in two views and the corresponding 3D coordinate system. Building on SIFT feature points and line segments, MFG is constructed using feature fusion which incrementally, iteratively, and extensively verifies the aforementioned geometric relationships using the random sample consensus (RANSAC) framework. Physical experiments show that MFG can be successfully constructed in urban areas and the construction method is demonstrated to be very robust in identifying feature correspondence.", "corpus_id": 16449381}, "neg": {"sha": "a6c691c2ca0f9d5760753e432f86b0ed862e2bab", "title": "Feature learning with deep scattering for urban sound analysis", "abstract": "In this paper we evaluate the scattering transform as an alternative signal representation to the mel-spectrogram in the context of unsupervised feature learning for urban sound classification. We show that we can obtain comparable (or better) performance using the scattering transform whilst reducing both the amount of training data required for feature learning and the size of the learned codebook by an order of magnitude. In both cases the improvement is attributed to the local phase invariance of the representation. We also observe improved classification of sources in the background of the auditory scene, a result that provides further support for the importance of temporal modulation in sound segregation.", "corpus_id": 17717707}}, {"query": {"sha": "90aaef1f5fb98262d6f2174fa75903b63b0fbe9c", "title": "Pretraining Deep Actor-Critic Reinforcement Learning Algorithms With Expert Demonstrations", "abstract": "Pretraining with expert demonstrations has been found useful in speeding up the training process of deep reinforcement learning algorithms, since less online simulation data is required. Some people use supervised learning to speed up the process of feature learning, while others pretrain the policies by imitating expert demonstrations. However, these methods are unstable and not suitable for actor-critic reinforcement learning algorithms. Also, some existing methods rely on the global optimum assumption, which is not true in most scenarios. In this paper, we employ expert demonstrations in an actor-critic reinforcement learning framework, and meanwhile ensure that the performance is not affected by the fact that expert demonstrations are not globally optimal. We theoretically derive a method for computing policy gradients and value estimators with only expert demonstrations. Our method is theoretically plausible for actor-critic reinforcement learning algorithms that pretrain both policy and value functions. We apply our method to two typical actor-critic reinforcement learning algorithms, DDPG and ACER, and demonstrate with experiments that our method not only outperforms the RL algorithms without a pretraining process, but also is more simulation-efficient.", "corpus_id": 31802360}, "pos": {"sha": "340f48901f72278f6bf78a04ee5b01df208cc508", "title": "Human-level control through deep reinforcement learning", "abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment.
To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.", "corpus_id": 205242740}, "neg": {"sha": "9ec60920ee588ea064aeadc765b1205af015384b", "title": "Learning Semantic Representations for Novel Words: Leveraging Both Form and Context", "abstract": "Word embeddings are a key component of high-performing natural language processing (NLP) systems, but it remains a challenge to learn good representations for novel words on the fly, i.e., for words that did not occur in the training data. The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space. Currently, two approaches for learning embeddings of novel words exist: (i) learning an embedding from the novel word\u2019s surface-form (e.g., subword n-grams) and (ii) learning an embedding from the context in which it occurs. In this paper, we propose an architecture that leverages both sources of information \u2013 surface-form and context \u2013 and show that it results in large increases in embedding quality. Our architecture obtains state-of-the-art results on the Definitional Nonce and Contextual Rare Words datasets. As input, we only require an embedding set and an unlabeled corpus for training our architecture to produce embeddings appropriate for the induced embedding space. Thus, our model can easily be integrated into any existing NLP system and enhance its capability to handle novel words.", "corpus_id": 53249780}}, {"query": {"sha": "4eaaa587189782ced1c19a7f6f4b45619aa91d13", "title": "Testdroid: automated remote UI testing on Android", "abstract": "Open mobile platforms such as Android currently suffer from the existence of multiple versions, each with its own peculiarities. 
This makes the comprehensive testing of interactive applications challenging. In this paper we present Testdroid, an online platform for conducting scripted user interface tests on a variety of physical Android handsets. Testdroid allows developers and researchers to record test scripts, which along with their application are automatically executed on a variety of handsets in parallel. The platform reports the outcome of these tests, enabling developers and researchers to quickly identify platforms where their systems may crash or fail. At the same time the platform allows us to identify more broadly the various problems associated with each handset, as well as frequent programming mistakes.", "corpus_id": 5924654}, "pos": {"sha": "b0d7a25e7d7c805606fe8b414e43fbb8084504f4", "title": "Mobile-D: an agile approach for mobile application development", "abstract": "Mobile phones have been closed environments until recent years. The change brought by open platform technologies such as the Symbian operating system and Java technologies has opened up a significant business opportunity for anyone to develop application software such as games for mobile terminals. However, developing mobile applications is currently a challenging task due to the specific demands and technical constraints of mobile development. Furthermore, at the moment very little is known about the suitability of the different development processes for mobile application development. Due to these issues, we have developed an agile development approach called Mobile-D. The Mobile-D approach is briefly outlined here and the experiences gained from four case studies are discussed.", "corpus_id": 1330851}, "neg": {"sha": "99aca56263051ba58197944d10f390c24ce66608", "title": "ApDeepSense: Deep Learning Uncertainty Estimation without the Pain for IoT Applications", "abstract": "Recent advances in deep-learning-based applications have attracted growing attention from the IoT community. These highly capable learning models have shown significant improvements in expected accuracy of various sensory inference tasks. One important and yet overlooked direction remains to provide uncertainty estimates in deep learning outputs. Since robustness and reliability of sensory inference results are critical to IoT systems, uncertainty estimates are indispensable for IoT applications. To address this challenge, we develop ApDeepSense, an effective and efficient deep learning uncertainty estimation method for resource-constrained IoT devices. ApDeepSense leverages an implicit Bayesian approximation that links neural networks to deep Gaussian processes, allowing output uncertainty to be quantified. Our approach is shown to significantly reduce the execution time and energy consumption of uncertainty estimation thanks to a novel layer-wise approximation that replaces the traditional computationally intensive sampling-based uncertainty estimation methods. ApDeepSense is designed for neural networks trained using dropout; one of the most widely used regularization methods in deep learning. No additional training is needed for uncertainty estimation purposes. We evaluate ApDeepSense using four IoT applications on Intel Edison devices.
Results show that ApDeepSense can reduce around 88.9% of the execution time and 90.0% of the energy consumption, while producing more accurate uncertainty estimates compared with state-of-the-art methods.", "corpus_id": 50780556}}, {"query": {"sha": "4f20b28c1fd68ba18dd4985291fb12d13d3da53d", "title": "Safe and Efficient Intersection Control of Connected and Autonomous Intersection Traffic", "abstract": "In this dissertation, we address the problem of safe and efficient intersection-crossing traffic management for autonomous and connected ground traffic. Toward this objective, an algorithm called the Discrete-time Occupancies Trajectory (DTOT) based Intersection traffic Coordination Algorithm (DICA) is proposed. All vehicles in the system are Connected and Autonomous Vehicles (CAVs) and capable of wireless Vehicle-to-Intersection communication. The main advantage of the proposed DTOT-based intersection management is that it enables us to utilize the space within an intersection more efficiently, resulting in less delay for vehicles to cross the intersection. In the proposed framework, an intersection coordinates the motions of CAVs based on their proposed DTOTs to let them cross the intersection efficiently while avoiding collisions. In case of a collision between vehicles\u2019 DTOTs, the intersection modifies the conflicting DTOTs to avoid the collision and requests CAVs to approach and cross the intersection according to the modified DTOTs. We then prove that the basic DICA is deadlock free and also starvation free. We also show that the basic DICA has a computational complexity of O(nLm) where n is the number of vehicles granted to cross an intersection and Lm is the maximum length", "corpus_id": 9405935}, "pos": {"sha": "aa4571229a76a28cffcc8653af48ded9efd71b78", "title": "Cooperative driving: an ant colony system for autonomous intersection management", "abstract": "Autonomous intersection management (AIM) is an innovative concept for directing vehicles through intersections. AIM assumes that the vehicles negotiate the right-of-way. This assumption makes the problem of intersection management significantly different from the usually studied ones such as the optimization of the cycle time, splits, and offsets. The main difficulty is to define a strategy that improves traffic efficiency. Indeed, due to the fact that each vehicle is considered individually, AIM faces a combinatorial optimization problem that needs quick and efficient solutions for a real-time application. This paper proposes a strategy that evacuates vehicles as soon as possible for each sequence of vehicle arrivals. The dynamic programming (DP) that gives the optimal solution is shown to be greedy. A combinatorial explosion is observed if the number of lanes rises. After evaluating the time complexity of the DP, the paper proposes an ant colony system (ACS) to solve the control problem for large numbers of vehicles and lanes. The complete investigation shows that the proposed ACS algorithm is robust and efficient.
Experimental results obtained by the simulation of different traffic scenarios show that the AIM based on ACS outperforms the traditional traffic lights and other recent traffic control strategies.", "corpus_id": 18796303}, "neg": {"sha": "241da26da6530c2cf2ecc59060000e2a902201b6", "title": "Comparative evaluation of microscopic car-following behavior", "abstract": "Microscopic traffic-simulation tools are increasingly being applied to evaluate the impacts of a wide variety of intelligent transport systems (ITS) applications and other dynamic problems that are difficult to solve using traditional analytical models. The accuracy of a traffic-simulation system depends highly on the quality of the traffic-flow model at its core, with the two main critical components being the car-following and lane-changing models. This paper presents findings from a comparative evaluation of car-following behavior in a number of traffic simulators [advanced interactive microscopic simulator for urban and nonurban networks (AIMSUN), parallel microscopic simulation (PARAMICS), and Verkehr in Stadten-simulation (VISSIM)]. The car-following algorithms used in these simulators have been developed from a variety of theoretical backgrounds and are reported to have been calibrated on a number of different data sets. Very few independent studies have attempted to evaluate the performance of the underlying algorithms based on the same data set. The results reported in this study are based on a car-following experiment that used instrumented vehicles to record the speed and relative distance between follower and leader vehicles on a one-lane road. The experiment was replicated in each tool and the simulated car-following behavior was compared to the field data using a number of error tests. The results showed lower error values for the Gipps-based models implemented in AIMSUN and similar error values for the psychophysical spacing models used in VISSIM and PARAMICS. A qualitative \"drift and goal-seeking behavior\" test, which essentially shows how the distance headway between leader and follower vehicles should oscillate around a stable distance, also confirmed the findings.", "corpus_id": 13793829}}, {"query": {"sha": "a98a36979f56220bf8539012e3add572fc2c55cc", "title": "Analysis and Design of Millimeter-Wave Circularly Polarized Substrate Integrated Travelling-Wave Antennas", "abstract": "Circularly polarized millimeter-wave travelling-wave antennas, using substrate integrated circuits (SICs) technology, are designed, fabricated and tested. By using the SICs technology, compact antennas with low losses in the feeding structure and with good design accuracy are obtained. The elementary antenna which is composed of two inclined slots is characterized by full-wave simulations. This characterization is used for the design and development of linear antenna arrays with above 16 dB gain and low side lobe level (< \u221225 dB), using different power aperture distributions, namely uniform, Tchebychev and Taylor. Experimental results are presented at 77 GHz showing that the proposed antennas present good performances in terms of impedance matching, gain and axial ratio. 
These antennas have potential applications in integrated transceivers for communication and radar systems at millimeter-wave frequencies.", "corpus_id": 18807579}, "pos": {"sha": "584066f943acd881018387bf9ba751a76bdbcc73", "title": "Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna With Transverse Slots", "abstract": "A novel slotted substrate integrated waveguide (SIW) leaky-wave antenna is proposed. This antenna works in the TE10 mode of the SIW. Leakage is obtained by introducing a periodic set of transverse slots on the top of the SIW, which interrupt the current flow on the top wall. It is seen that three modes (a leaky mode, a proper waveguide mode, and a surface-wave-like mode) can all propagate on this structure. The wavenumbers of the modes are calculated theoretically and are numerically evaluated by HFSS simulation. The leakage loss, dielectric loss, and conductor loss are also analyzed. A uniform slotted SIW leaky-wave antenna is designed that has good beam scanning from near broadside (though not exactly at broadside) to forward endfire. This type of SIW leaky-wave antenna has a wide impedance bandwidth and a narrow beam that scans with frequency. Measured results are consistent with the simulation and the theoretical analysis.", "corpus_id": 45010045}, "neg": {"sha": "39d75b96f3454bea4e8860270b99a637d96019e8", "title": "A Framework for Modeling the Appearance of 3D Articulated Figures", "abstract": "This paper describes a framework for constructing a linear subspace model of image appearance for complex articulated 3D figures such as humans and other animals. A commercial motion capture system provides 3D data that is aligned with images of subjects performing various activities. Portions of a limb\u2019s image appearance are seen from multiple views and for multiple subjects. From these partial views, weighted principal component analysis is used to construct a linear subspace representation of the \u201cunwrapped\u201d image appearance of each limb. The linear subspaces provide a generative model of the object appearance that is exploited in a Bayesian particle filtering tracking system. Results of tracking single limbs and walking humans are presented.", "corpus_id": 5476898}}, {"query": {"sha": "a0a7ca8cd3448c5996df79f9badd202c1295cb20", "title": "A Short Introduction to Learning to Rank", "abstract": "Learning to rank refers to machine learning techniques for training the model in a ranking task. Learning to rank is useful for many applications in Information Retrieval, Natural Language Processing, and Data Mining. Intensive studies have been conducted on the problem and significant progress has been made [1], [2]. This short paper gives an introduction to learning to rank, and it specifically explains the fundamental problems, existing approaches, and future work of learning to rank. Several learning to rank methods using SVM techniques are described in detail. Key words: learning to rank, information retrieval, natural language processing, SVM", "corpus_id": 9997448}, "pos": {"sha": "3d663af94807663c5df519da8792720321efa11f", "title": "SoftRank: optimizing non-smooth rank metrics", "abstract": "We address the problem of learning large complex ranking functions. Most IR applications use evaluation metrics that depend only upon the ranks of documents. However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. 
Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters, and because IR metrics are non-smooth, we need to find a smooth proxy objective that can be used for training. We present a new family of training objectives that are derived from the rank distributions of documents, induced by smoothed scores. We call this approach SoftRank. We focus on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG, and we compare it with three other training objectives in the recent literature. We present two main results. First, SoftRank yields a very good way of optimizing NDCG. Second, we show that it is possible to achieve state of the art test set NDCG results by optimizing a soft NDCG objective on the training set with a different discount function.", "corpus_id": 5496423}, "neg": {"sha": "713f2d35868c1fe82c903ca8878baf1fe26f96ae", "title": "Exoshoe: A sensory system to measure foot pressure in industrial exoskeleton", "abstract": "This paper presents a novel sensor fusion methodology to dynamically detect weight variations and the position of an exoskeleton system. The proposed methodology is intended for tasks of lifting and lowering heavy weights with an industrial exoskeleton to substantially reduce spinal loads during these activities.", "corpus_id": 1454901}}, {"query": {"sha": "74948e2c2c5727525803257910ed980ac53de69e", "title": "Local part chamfer matching for shape-based object detection", "abstract": "Chamfer matching is one of the elegant and powerful tools for shape-based detection in cluttered images. However, the chamfer matching methods, including oriented chamfer matching (OCM) and directional chamfer matching (DCM), tend to produce bad detections due to deformation of object shapes and cluttering in the scene. To improve detection accuracy of these chamfer matching methods, we propose local part oriented chamfer matching (LPOCM) and local part directional chamfer matching (LPDCM). First, shape templates and discriminative contour fragments are learned, and then a shape representation is built using a Markov random field (MRF). Finally, the template detection in an input image is formulated as an inference in the MRF. Experimental results for benchmark datasets including ETHZ Shape Classes, INRIA Horses and Weizmann Horses clearly demonstrate that the proposed LPOCM and LPDCM significantly improve the detection accuracy of OCM and DCM without sacrificing much time efficiency.", "corpus_id": 12773785}, "pos": {"sha": "2c8593180a92708f1392cf434e85c798cd929390", "title": "Object Detection by Contour Segment Networks", "abstract": "We propose a method for object detection in cluttered real images, given a single hand-drawn example as model. The image edges are partitioned into contour segments and organized in an image representation which encodes their interconnections: the Contour Segment Network. The object detection problem is formulated as finding paths through the network resembling the model outlines, and a computationally efficient detection technique is presented. An extensive experimental evaluation on detecting five diverse object classes over hundreds of images demonstrates that our method works in very cluttered images, allows for scale changes and considerable intra-class shape variation, is robust to interrupted contours, and is computationally efficient.", "corpus_id": 7149126}, "neg": {"sha": "33a7a59f785ef46091c30c4c85ef88c6bdabab51", "title": "Learning to detect natural image boundaries using local brightness, color, and texture cues", "abstract": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "corpus_id": 8165754}}, {"query": {"sha": "cecf980058d139031e03b943dd153833afb43e2a", "title": "Automated Generation of Road Marking Maps from Street-level Panoramic Images", "abstract": "Accurate maps of road markings are useful for many applications, such as road maintenance, improving navigation, and prediction of upcoming road situations within autonomously driving vehicles. This paper introduces a generic and learning-based system for the recognition of road markings from street-level panoramic images. This system starts with an Inverse Perspective Mapping, followed by segmentation to retrieve road marking candidates. The contours of all found segments are classified, after which a Markov Random Field is applied to adjust the resulting probabilities based on the surrounding context. Finally, the spatial placement of the found individual markings (e.g. shark teeth) is analyzed to retrieve the traffic situations (e.g. priority situations). This system is evaluated for priority, block, striped lines and pedestrian crossing markings, and is able to recognize 80-95% of the individual markings, and about 90% of the occurring situations (e.g. pedestrian crossings).", "corpus_id": 16029195}, "pos": {"sha": "7783fd2984ac139194d21c10bd83b4c9764826a3", "title": "Probabilistic reasoning in intelligent systems - networks of plausible inference", "abstract": "Judea Pearl's seminal book on probabilistic reasoning under uncertainty. It develops Bayesian networks as a graphical representation of probabilistic knowledge and presents belief-updating algorithms for networks of plausible inference, laying the groundwork for much of the later work on graphical models and evidential reasoning in artificial intelligence.", "corpus_id": 32583695}, "neg": {"sha": "3c8f4bfeb0665af3e19764e587af1bbb14646395", "title": "An Analogy Ontology for Integrating Analogical Processing and First-Principles Reasoning", "abstract": "This paper describes an analogy ontology, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's structure-mapping theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.", "corpus_id": 5284550}}, {"query": {"sha": "bf0121f325cff44af03edb2c45c8ea4206693803", "title": "Multi-label learning with millions of labels: recommending advertiser bid phrases for web pages", "abstract": "Recommending phrases from web pages for advertisers to bid on against search engine queries is an important research problem with direct commercial impact. Most approaches have found it infeasible to determine the relevance of all possible queries to a given ad landing page and have focussed on making recommendations from a small set of phrases extracted (and expanded) from the page using NLP and ranking based techniques. In this paper, we eschew this paradigm, and demonstrate that it is possible to efficiently predict the relevant subset of queries from a large set of monetizable ones by posing the problem as a multi-label learning task with each query being represented by a separate label.\n We develop Multi-label Random Forests to tackle problems with millions of labels. Our proposed classifier has prediction costs that are logarithmic in the number of labels and can make predictions in a few milliseconds using 10 Gb of RAM. We demonstrate that it is possible to generate training data for our classifier automatically from click logs without any human annotation or intervention. We train our classifier on tens of millions of labels, features and training points in less than two days on a thousand node cluster. We develop a sparse semi-supervised multi-label learning formulation to deal with training set biases and noisy labels harvested automatically from the click logs. This formulation is used to infer a belief in the state of each label for each training ad and the random forest classifier is extended to train on these beliefs rather than the given labels. 
Experiments reveal significant gains over ranking and NLP based techniques on a large test set of 5 million ads using multiple metrics.", "corpus_id": 9628024}, "pos": {"sha": "234f11713077aa09179533a1f37c075662e25b0f", "title": "Incremental Algorithms for Hierarchical Classification", "abstract": "We study the problem of hierarchical classification when labels corresponding to partial and/or multiple paths in the underlying taxonomy are allowed. We introduce a new hierarchical loss function, the H-loss, implementing the simple intuition that additional mistakes in the subtree of a mistaken class should not be charged for. Based on a probabilistic data model introduced in earlier work, we derive the Bayes-optimal classifier for the H-loss. We then empirically compare two incremental approximations of the Bayes-optimal classifier with a flat SVM classifier and with classifiers obtained by using hierarchical versions of the Perceptron and SVM algorithms. The experiments show that our simplest incremental approximation of the Bayes-optimal classifier performs, after just one training epoch, nearly as well as the hierarchical SVM classifier (which performs best). For the same incremental algorithm we also derive an H-loss bound showing, when data are generated by our probabilistic data model, exponentially fast convergence to the H-loss of the hierarchical classifier based on the true model parameters.", "corpus_id": 6642735}, "neg": {"sha": "a98a528a50ba8075c1f7b64df24afeb2071ebe9c", "title": "A systematic derivation of the STG machine verified in Coq", "abstract": "Shared Term Graph (STG) is a lazy functional language used as an intermediate language in the Glasgow Haskell Compiler (GHC). In this article, we present a natural operational semantics for STG and we mechanically derive a lazy abstract machine from this semantics, which turns out to coincide with Peyton-Jones and Salkild's Spineless Tagless G-machine (STG machine) used in GHC. Unlike other constructions of STG-like machines present in the literature, ours is based on a systematic and scalable derivation method (inspired by Danvy et al.'s functional correspondence between evaluators and abstract machines) and it leads to an abstract machine that differs from the original STG machine only in inessential details. In particular, it handles non-trivial update scenarios and partial applications identically as the STG machine.\n The entire derivation has been formalized in the Coq proof assistant. Thus, in effect, we provide a machine checkable proof of the correctness of the STG machine with respect to the natural semantics.", "corpus_id": 15894992}}, {"query": {"sha": "0ef33966bf72bee871e1ba70a31863f551b42a0f", "title": "Preserving Multi-party Machine Learning with Homomorphic Encryption", "abstract": "Privacy preserving multi-party machine learning approaches enable multiple parties to train a machine learning model from aggregate data while ensuring the privacy of their individual datasets is preserved. In this paper, we propose a privacy preserving multi-party machine learning approach based on homomorphic encryption where the machine learning algorithm of choice is deep neural networks. 
We develop a theoretical foundation for implementing deep neural networks over encrypted data and utilize it in developing efficient and practical algorithms in the encrypted domain.", "corpus_id": 1749639}, "pos": {"sha": "1276f304b52faae10438bde5da3ae88fcc33dd62", "title": "Crypto-Nets: Neural Networks over Encrypted Data", "abstract": "The problem we address is the following: how can a user employ a predictive model that is held by a third party, without compromising private information. For example, a hospital may wish to use a cloud service to predict the readmission risk of a patient. However, due to regulations, the patient\u2019s medical files cannot be revealed. The goal is to make an inference using the model, without jeopardizing the accuracy of the prediction or the privacy of the data. To achieve high accuracy, we use neural networks, which have been shown to outperform other learning models for many tasks. To achieve the privacy requirements, we use homomorphic encryption in the following protocol: the data owner encrypts the data and sends the ciphertexts to the third party to obtain a prediction from a trained model. The model operates on these ciphertexts and sends back the encrypted prediction. In this protocol, not only the data remains private, even the values predicted are available only to the data owner. Using homomorphic encryption and modifications to the activation functions and training algorithms of neural networks, we present crypto-nets and prove that they can be constructed and may be feasible. This method paves the way to building secure cloud-based neural network prediction services without invading users\u2019 privacy.", "corpus_id": 5787871}, "neg": {"sha": "12a1cff6164e08f08828942639a9ea766ff768c1", "title": "on Chinese Orientation Analysis", "abstract": "Zhang Meng, Peng Yifan, Fan Yang, Li Dan, Lin Xiaojun, Wu Xihong. Speech and Hearing Research Center, Peking University, Beijing, 100871. E-mail: {zhangm, pengyf, fanyang, lidan, linxj, wxh}@cis.pku.edu.cn. Abstract: Text orientation analysis is a hot topic in natural language processing. This paper presents a method for Chinese text orientation analysis that consists of two steps: lexical analysis and orientation classification. In the lexical analysis step, a conditional random field model performs word segmentation and named entity recognition jointly on the input text, which effectively improves analysis performance. Orientation is then judged at three levels: word, sentence, and document. At the word level, a maximum entropy model identifies sentiment words and determines their polarity from contextual information. At the sentence level, evaluation targets are extracted according to a constructed attribute list, and their orientation is judged from their modifiers. At the document level, building on the word-level results, a support vector machine model fuses multiple sources of information to classify the subjectivity and polarity of the text. Finally, the orientation analysis function is integrated into a search engine, so that the sentiment polarity of retrieved documents is returned together with the relevant documents. Key words: integrated lexical analysis, sentiment words, orientation analysis", "corpus_id": 50820529}}, {"query": {"sha": "4a4a2ccc19d41e2642fe797b303b2f398e93a912", "title": "Adopting a management innovation in a professional organization: The case of improvement knowledge in healthcare", "abstract": "Article information: Andreas Hellstr\u00f6m, Svante Lifvergren, Susanne Gustavsson, Ida Gremyr (2015), \"Adopting a management innovation in a professional organization: the case of improvement knowledge in healthcare\", Business Process Management Journal, Vol. 21 Iss 5. Permanent link to this document: http://dx.doi.org/10.1108/BPMJ-05-2014-0041", "corpus_id": 34551974}, "pos": {"sha": "b3d51cac8ffdecdf851febda356a2382ab8c083d", "title": "Management innovation", "abstract": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "corpus_id": 212718}, "neg": {"sha": "8569fc88a3d1ac8b873872becb2ee8bc01dc73bc", "title": "Deep-Person: Learning Discriminative Deep Features for Person Re-Identification", "abstract": "Person re-identification (Re-ID) requires discriminative features focusing on the full person to cope with inaccurate person bounding box detection, background clutter, and occlusion. Many recent person Re-ID methods attempt to learn such features describing full person details via part-based feature representation. However, the spatial context between these parts is ignored for the independent extractor on each separate part. In this paper, we propose to apply Long Short-Term Memory (LSTM) in an end-to-end way to model the pedestrian, seen as a sequence of body parts from head to foot. Integrating the contextual information strengthens the discriminative ability of local feature aligning better to full person. We also leverage the complementary information between local and global feature. Furthermore, we integrate both identification task and ranking task in one network, where a discriminative embedding and a similarity measurement are learned concurrently. This results in a novel three-branch framework named Deep-Person, which learns highly discriminative features for person Re-ID. Experimental results demonstrate that Deep-Person outperforms the state-of-the-art methods by a large margin on three challenging datasets including Market-1501, CUHK03, and DukeMTMC-reID.", "corpus_id": 7953821}}, {"query": {"sha": "42f80d9186370cb9d21d7b244051e0b08dd51372", "title": "Saliency estimation using a non-parametric low-level vision model", "abstract": "Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.", "corpus_id": 11010048}, "pos": {"sha": "4f847b4ddc105d73bc78f3e7220e6c1f71a7dfb6", "title": "Saliency Based on Information Maximization", "abstract": "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.", "corpus_id": 18236666}, "neg": {"sha": "be465218317c82e88ce3280de04642b017f9ac86", "title": "Changed nursing scheduling for improved safety culture and working conditions - patients' and nurses' perspectives.", "abstract": "AIM\nTo evaluate fixed scheduling compared with self-scheduling for nursing staff in oncological inpatient care with regard to patient and staff outcomes.\n\n\nBACKGROUND\nVarious scheduling models have been tested to attract and retain nursing staff. Little is known about how these schedules affect staff and patients. Fixed scheduling and self-scheduling have been studied to a small extent, solely from a staff perspective.\n\n\nMETHOD\nWe implemented fixed scheduling on two of four oncological inpatient wards. Two wards kept self-scheduling. Through a quasi-experimental design, baseline and follow-up measurements were collected among staff and patients. The Safety Attitudes Questionnaire was used among staff, as well as study-specific questions for patients and staff.\n\n\nRESULTS\nFixed scheduling was associated with less overtime and fewer possibilities to change shifts. Self-scheduling was associated with more requests from management for short notice shift changes. 
The type of scheduling did not affect patient-reported outcomes.\n\n\nCONCLUSIONS\nFixed scheduling should be considered in order to lower overtime. Further research is necessary and should explore patient outcomes to a greater extent.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nScheduling is a core task for nurse managers. Our study suggests fixed scheduling as a strategy for managers to improve the effective use of resources and safety.", "corpus_id": 25637884}}, {"query": {"sha": "bfb784cd2e487a6966ade05c6739ead5412f4257", "title": "Adaptive Security for Multi-layer Ad-hoc Networks", "abstract": "Secure communication is critical in military environments where the network infrastructure is vulnerable to various attacks and compromises. A conventional centralized solution breaks down when the security servers are destroyed by the enemies. In this paper we design and evaluate a security framework for multi-layer ad-hoc wireless networks with unmanned aerial vehicles (UAVs). In battlefields, the framework adapts to the contingent damages on the network infrastructure. Depending on the availability of the network infrastructure, our design is composed of two modes. In infrastructure mode, security services, specifically the authentication services, are implemented on UAVs that feature low overhead and flexible management. When the UAVs fail or are destroyed, our system seamlessly switches to infrastructureless mode, a backup mechanism that maintains comparable security services among the surviving units. In the infrastructureless mode, the security services are localized to each node\u2019s vicinity to comply with the ad-hoc communication mechanism in the scenario. We study the instantiation of these two modes and the transitions between them. Our implementation and simulation measurements confirm the effectiveness of our design.", "corpus_id": 14759007}, "pos": {"sha": "185aa7675f17b3aef06358c591a3cfe5f8266209", "title": "UAV aided intelligent routing for ad-hoc wireless network in single-area theater", "abstract": "Large homogeneous ad hoc wireless networks have a problem: the bandwidth available to a mobile user decreases as the number of nodes in the network increases. Using the embedded ad-hoc networking mechanism, nodes are able to transport packets across the network in a multihop fashion. An embedded mobile backbone is dynamically constructed to form a 2-level physical heterogeneous multihop wireless network. These backbone nodes provide two critical functions: (1) direct communication between neighboring cluster heads; (2) efficient route discovery in HSR. With the broadcast feature of the UAV, link state can be broadcast to backbone nodes instead of \u201cflooding\u201d on level 2. Thus, routing overhead can be tremendously reduced and throughput will be improved. We modified Hierarchical State Routing to have an intelligent selection algorithm to reduce the system latency caused by the long propagation delay of the UAV channel. The performance of the system is evaluated through simulation experiments.", "corpus_id": 14738913}, "neg": {"sha": "20ffcde31cb03e92f85d3509d2b979706685055f", "title": "C-ICAMA, a centralized intelligent channel assigned multiple access for multi-layer ad-hoc wireless networks with UAVs", "abstract": "Multi-layer ad hoc wireless networks with UAVs are an ideal infrastructure to establish a rapidly deployable wireless communication system anytime, anywhere in the world for military applications. In this tactical environment, information traffic is quite asymmetric. Ground fighting units are information consumers and receive far more data than they transmit. The up-link is used for sending requests for information and some networking configuration overhead of a few kilobits, while the down-link is used to return the requested data of megabit size (e.g. multimedia files of images and charts). Centralized Intelligent Channel Assigned Multiple Access (C-ICAMA) is a MAC layer protocol proposed for ground backbone nodes to access the UAV (Unmanned Aerial Vehicle) to solve the highly asymmetric data traffic in this tactical environment. With its intelligent scheduling algorithm, it can dynamically allocate bandwidth for up-link and down-link to fit the instantaneous status of the asymmetric traffic. The results of C-ICAMA are very promising: due to the dynamic bandwidth allocation of the asymmetric up-link and down-link, the access delay is tremendously reduced,", "corpus_id": 14442961}}, {"query": {"sha": "92dd54f88976a3b37f335b18218c6d53ceeb09f1", "title": "Feature extraction of epilepsy EEG using discrete wavelet transform", "abstract": "Epilepsy is one of the most common chronic neurological disorders of the brain, affecting millions of people worldwide. It is characterized by recurrent seizures, which are physical reactions to sudden, usually brief, excessive electrical discharges in a group of brain cells. Hence, seizure identification has great importance in clinical therapy of epileptic patients. Electroencephalogram (EEG) is most commonly used in epilepsy detection since it includes precious physiological information of the brain. However, it could be a challenge to detect the subtle but critical changes included in EEG signals. Feature extraction of EEG signals is a core problem in EEG-based brain mapping analysis. This paper extracts ten features from EEG signals based on the discrete wavelet transform (DWT) for epilepsy detection. These numerous features will help the classifiers to achieve a good accuracy when utilized to classify EEG signals to detect epilepsy. Subsequently, the results have illustrated that DWT has been adopted to extract various features, i.e., Entropy, Min, Max, Mean, Median, Standard deviation, Variance, Skewness, Energy and Relative Wave Energy (RWE).", "corpus_id": 5802425}, "pos": {"sha": "696fdd1ba1b8520731b00cc3a45dfbb504a3d93f", "title": "Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection", "abstract": "In this study, a hierarchical electroencephalogram (EEG) classification system for epileptic seizure detection is proposed. The system includes the following three stages: (i) original EEG signal representation by wavelet packet coefficients and feature extraction using the best basis-based wavelet packet entropy method, (ii) a cross-validation (CV) method together with a k-Nearest Neighbor (k-NN) classifier used in the training stage for hierarchical knowledge base (HKB) construction, and (iii) in the testing stage, computing classification accuracy and rejection rate using the top-ranked discriminative rules from the HKB. The data set is taken from a publicly available EEG database which aims to differentiate healthy subjects and subjects suffering from epilepsy diseases. Experimental results show the efficiency of our proposed system. 
The best classification accuracy is about 100% via 2-, 5-, and 10-fold cross-validation, which indicates the proposed method has potential in designing a new intelligent EEG-based assistance diagnosis system for early detection of electroencephalographic changes.", "corpus_id": 14462439}, "neg": {"sha": "d2579be067acdccf3cf452b5ed824fd0a39257f5", "title": "Enabling Quality Control for Entity Resolution: A Human and Machine Cooperation Framework", "abstract": "Even though many machine algorithms have been proposed for entity resolution, it remains very challenging to find a solution with quality guarantees. In this paper, we propose a novel HUman and Machine cOoperation (HUMO) framework for entity resolution (ER), which divides an ER workload between the machine and the human. HUMO enables a mechanism for quality control that can flexibly enforce both precision and recall levels. We introduce the optimization problem of HUMO, minimizing human cost given a quality requirement, and then present three optimization approaches: a conservative baseline one purely based on the monotonicity assumption of precision, a more aggressive one based on sampling and a hybrid one that can take advantage of the strengths of both previous approaches. Finally, we demonstrate by extensive experiments on real and synthetic datasets that HUMO can achieve high-quality results with reasonable return on investment (ROI) in terms of human cost, and it performs considerably better than the state-of-the-art alternatives in quality control.", "corpus_id": 1230508}}, {"query": {"sha": "e1167e0f5dae02d254af60825be6f493814ee074", "title": "Privacy Preserving Payments in Credit Networks: Enabling trust with privacy in online marketplaces", "abstract": "A credit network models trust between agents in a distributed environment and enables payments between arbitrary pairs of agents. With their flexible design and robustness against intrusion, credit networks form the basis of several Sybil-tolerant social networks, spam-resistant communication protocols, and payment systems. Existing systems, however, expose agents\u2019 trust links as well as the existence and volumes of payment transactions, which is considered sensitive information in social environments or in the financial world. This raises a challenging privacy concern, which has largely been ignored by the research on credit networks so far. This paper presents PrivPay, the first provably secure privacy-preserving payment protocol for credit networks. The distinguishing feature of PrivPay is the obliviousness of transactions, which entails strong privacy guarantees for payments. PrivPay does not require any trusted third party, maintains a high accuracy of the transactions, and provides an economical solution to network service providers. It is also a general-purpose trusted hardware-based solution applicable to all credit network-based systems. We implemented PrivPay and demonstrated its practicality by privately emulating transactions performed in the Ripple payment system over a period of four months.", "corpus_id": 6062549}, "pos": {"sha": "19c83d150727f832362103ff4b7551356abaa69f", "title": "Sharing graphs using differentially private graph models", "abstract": "Continuing success of research on social and computer networks requires open access to realistic measurement datasets. 
While these datasets can be shared, generally in the form of social or Internet graphs, doing so often risks exposing sensitive user data to the public. Unfortunately, current techniques to improve privacy on graphs only target specific attacks, and have been proven to be vulnerable against powerful de-anonymization attacks.\n Our work seeks a solution to share meaningful graph datasets while preserving privacy. We observe a clear tension between strength of privacy protection and maintaining structural similarity to the original graph. To navigate the tradeoff, we develop a differentially-private graph model we call Pygmalion. Given a graph G and a desired level of \u03b5-differential privacy guarantee, Pygmalion extracts a graph's detailed structure into degree correlation statistics, introduces noise into the resulting dataset, and generates a synthetic graph G'. G' maintains as much structural similarity to G as possible, while introducing enough differences to provide the desired privacy guarantee. We show that simply applying differential privacy to graphs results in the addition of significant noise that may disrupt graph structure, making it unsuitable for experimental study. Instead, we introduce a partitioning approach that provides identical privacy guarantees using much less noise. Applied to real graphs, this technique requires an order of magnitude less noise for the same privacy guarantees. Finally, we apply our graph model to Internet, web, and Facebook social graphs, and show that it produces synthetic graphs that closely match the originals in both graph structure metrics and behavior in application-level tests.", "corpus_id": 1905609}, "neg": {"sha": "56898ef9db374843fbd69f8209ca9515cddf7e3d", "title": "Physics of high-current interruption of vacuum circuit breakers", "abstract": "The present state of knowledge concerning the physical phenomena of high-current interruption with vacuum interrupters (VI) is reviewed. Two arc control methods, application of externally applied axial magnetic field (AMF) or transverse magnetic field (TMF), are available to distribute the heat flux from arc to contacts homogeneously over contact surface, to avoid local overheating. AMF spreads the arc at fixed location. TMF moves the constricted arc over contact surface. Change from diffuse to constricted arcing mode results from superposition of two effects: \"instability of anode sheath\" and \"influence of magneto-gas-dynamic\", when no AMF component exists. Conditions of arc memory at current zero determine the process of current extinction and of recovery of breakdown strength to its ultimate value. Evaporation of metal vapor continues. Charge exchange between fast ions and slow vapor atoms increases the residual charge, left in the switching gap at current zero. Post arc current prolongs and increases consequently. Breakdown during recovery of dielectric strength occurs instantaneously or sporadically delayed. Behavior of breakdown is essentially determined by vapor density. Breakdown mechanism of delayed breakdown is still unresolved. Vapor density is too low to initiate breakdown alone. 
Lack of fundamental knowledge, in combination with this complexity, presently hampers the numerical treatment of arc behavior, of the heat flux to the contacts during arcing, and of the interruption process, as needed for interpretation of experimental results and for prediction purposes.", "corpus_id": 30212903}}, {"query": {"sha": "90900a4ba47e1ec9b1c4325f312dd6725f3cc258", "title": "Nature-Inspired Computation and Machine Learning", "abstract": "Modelling the behaviour of algorithms is the realm of Evolutionary Algorithm theory. From a practitioner\u2019s point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. Recently, there have been works that addressed this problem by proposing models of performance of different Genetic Programming Systems. In this work, we complement previous approaches by proposing a scheme capable of classifying the hardness of optimization problems based on different difficulty measures such as Negative Slope Coefficient, Fitness Distance Correlation, Neutrality, Ruggedness, Basins of Attraction, and Epistasis. The results indicate that this procedure is able to accurately classify the performance of the GA over a set of benchmark problems.", "corpus_id": 1924795}, "pos": {"sha": "ae3ebe6c69fdb19e12d3218a5127788fae269c10", "title": "A Literature Survey of Benchmark Functions For Global Optimization Problems", "abstract": "Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that they can be truly useful to test new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape. This is by far the most complete set of functions so far in the literature, and it can be expected that this complete set of functions can be used for validation of new optimization algorithms in the future.", "corpus_id": 19502816}, "neg": {"sha": "1cf87af22b3b4dd0ff1144d861e0573121d8de2e", "title": "Private Information Retrieval", "abstract": "Publicly accessible databases are an indispensable resource for retrieving up-to-date information. But they also pose a significant risk to the privacy of the user, since a curious database operator can follow the user's queries and infer what the user is after. Indeed, in cases where the users' intentions are to be kept secret, users are often cautious about accessing the database. It can be shown that when accessing a single database, to completely guarantee the privacy of the user, the whole database should be downloaded; namely n bits should be communicated (where n is the number of bits in the database).\nIn this work, we investigate whether by replicating the database, more efficient solutions to the private retrieval problem can be obtained. We describe schemes that enable a user to access k replicated copies of a database (k\u22652) and privately retrieve information stored in the database. This means that each individual server (holding a replicated copy of the database) gets no information on the identity of the item retrieved by the user. 
Our schemes use the replication to gain substantial savings. In particular, we present a two-server scheme with communication complexity O(n^(1/3)).", "corpus_id": 544823}}, {"query": {"sha": "e5249740022e9ce756f415399986e2deb663ece0", "title": "The Rise of Social Robots: A Review of the Recent Literature", "abstract": "In this article I explore the most recent literature on social robotics and argue that the field of robotics is evolving in a direction that will soon require a systematic collaboration between engineers and sociologists. After discussing several problems relating to social robotics, I emphasize that two key concepts in this research area are scenario and persona. These are already popular as design tools in Human-Computer Interaction (HCI), and an approach based on them is now being adopted in Human-Robot Interaction (HRI). As robots become more and more sophisticated, engineers will need the help of trained sociologists and psychologists in order to create personas and scenarios and to \u201cteach\u201d humanoids how to behave in various circumstances. 1. Social robots and social work The social consequences of robotics depend to a significant degree on how robots are employed by humans, and to another compelling degree on how robotics evolves from a technical point of view. That is why it could be instructive for engineers interested in cooperating with sociologists to get acquainted with the problems of social work and other social services, and for sociologists interested in the social dimensions of robotics to have a closer look at technical aspects of new generation robots. Regrettably, engineers do not typically read sociological literature, and sociologists and social workers do not regularly read engineers\u2019 books and articles. In what follows, I break this unwritten rule by venturing into an analysis of both types of literature. This type of interdisciplinary approach is particularly necessary after the emergence of so-called \u201csocial robots.\u201d A general definition of social robot is provided by social scientist Kate Darling: A social robot is a physically embodied, autonomous agent that communicates and interacts with humans on an emotional level. For the purposes of this Article, it is important to distinguish social robots from inanimate computers, as well as from industrial or service robots that are not designed to elicit human feelings and mimic social cues. Social robots also follow social behavior patterns, have various \u201cstates of mind,\u201d and adapt to what they learn through their interactions.", "corpus_id": 41569204}, "pos": {"sha": "3bde350d084990554b343c49e7734997a1a7f916", "title": "Pneumatic Artificial Muscles: actuators for robotics and automation", "abstract": "This article is intended as an introduction to and an overview of Pneumatic Artificial Muscles (PAMs). These are pneumatic actuators made mainly of a flexible and inflatable membrane. First, their concept and way of operation are explained. Next, the properties of these actuators are given, the most important of which are the compliant behavior and extremely low weight. A classification and review follow this section. 
Typical applications are dealt with in the last but one section and, finally, some concluding remarks are made.", "corpus_id": 14944063}, "neg": {"sha": "449bf3d0cdb94ed77d6ddedfcd69619617777d2a", "title": "Enhanced flexible LoRaWAN node for industrial IoT", "abstract": "The Industrial Internet of Things (IIoT) is introducing the IoT approach in the industrial automation world, paving the way to innovative services for improving efficiency, reliability and availability of industrial processes and products. The IIoT takes advantage of the collection of large amounts of data by means of (wireless) links connecting smart sensors attached to the system of interest. Low Power Wide Area Networks emerged as a viable solution for implementing private cellular-like communications. In this paper, the LoRaWAN technology is addressed, thanks to the wide acceptance it received in both industrial and academic worlds. In particular, an enhanced node is proposed as a building block of IIoT-enabled industrial wireless networks. It offers new features: it behaves as a regular node; it can act as a gateway toward legacy/different (wired) networks; and it can extend LoRaWAN coverage acting as a range extender (i.e. a single hop forwarder). After a brief overview of LoRa and LoRaWAN, the paper deals with the features of the realized node, exploiting commercially available hardware. The experimental results show the feasibility of the proposed approach. In particular, the range extender capability of transmitting replicas of incoming messages is tested for different transmission delays.", "corpus_id": 49570616}}, {"query": {"sha": "810ae28e7de5208d7ac77de9cb7c02176f68c05c", "title": "Improved novel view synthesis from depth image with large baseline", "abstract": "In this paper, a new algorithm is developed for recovering the large disocclusion regions in depth image based rendering (DIBR) systems on 3DTV. For the DIBR systems, undesirable artifacts occur in the disocclusion regions by using the conventional view synthesis techniques especially with large baseline. Three techniques are proposed to improve the view synthesis results. The first is the preprocessing of the depth image by using the bilateral filter, which helps to sharpen the discontinuous depth changes as well as to smooth the neighboring depth of similar color, thus restraining noises from appearing on the warped images. Secondly, on the warped image of a new viewpoint, we fill the disocclusion regions on the depth image with the background depth levels to preserve the depth structure. For the color image, we propose the depth-guided exemplar-based image inpainting that combines the structural strengths of the color gradient to preserve the image structure in the restored regions. Finally, a trilateral filter, which simultaneously combines the spatial location, the color intensity, and the depth information to determine the weighting, is applied to enhance the image synthesis results. Experimental results are shown to demonstrate the superior performance of the proposed novel view synthesis algorithm compared to the traditional methods.", "corpus_id": 13957714}, "pos": {"sha": "3711625f7f22a59a9ac5251a99bb8e3298048ae4", "title": "Image inpainting", "abstract": "Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. 
In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills-in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill-in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects.", "corpus_id": 308278}, "neg": {"sha": "5a4d306052867e035b1751833e108657dbffb106", "title": "Servant Leadership , Employee Satisfaction , and Organizational Performance in Rural Community Hospitals", "abstract": "Servant leadership in today\u2019s healthcare settings provides a unique avenue through which to assess leadership behaviors and the relationship to employee satisfaction and healthcare patient satisfaction measures. This study sought to determine the degree that leaders in community hospitals were perceived as servant leaders and the level of employee satisfaction at these rural community hospitals. Two hundred nineteen surveys were completed from 10 community hospitals. This research revealed that servant leadership and employee satisfaction are strongly correlated. In addition, servant leadership has a significant correlation between intrinsic satisfaction and HCAHPS scores. Further research can be extended to additional categories and geographic areas of the United States to determine how servant leadership, employee satisfaction, and HCAHPS are related. Hospital administrators should examine the findings of this study for possible implications to their leadership style and practice in determining how it may impact the organization they lead.", "corpus_id": 45728514}}, {"query": {"sha": "80552167584a1e883a853bd570f3b8fb586b8094", "title": "QU-RPL: Queue utilization based RPL for load balancing in large scale industrial applications", "abstract": "RPL is an IPv6 routing protocol for low-power and lossy networks (LLNs) designed to meet the requirements of a wide range of LLN applications including smart grid AMIs, industrial and environmental monitoring, and wireless sensor networks. RPL allows bi-directional end-to-end IPv6 communication on resource constrained LLN devices, leading to the concept of the Internet of Things (IoT) with thousands and millions of devices interconnected through multihop mesh networks. In this paper, we investigate the load balancing and congestion problem of RPL. Specifically, we show that most of packet losses under heavy traffic are due to congestion, and a serious load balancing problem exists in RPL in terms of routing parent selection. To overcome this problem, this paper proposes a simple yet effective queue utilization based RPL (QU-RPL) that significantly improves end-to-end packet delivery performance compared to the standard RPL. 
QU-RPL is designed for each node to select its parent node considering the queue utilization of its neighbor nodes as well as their hop distances to an LLN border router (LBR). Owing to its load balancing capability, QU-RPL is very effective in lowering the queue losses and increasing the packet delivery ratio. We verify all our findings through experimental measurements on a real testbed of a multihop LLN over IEEE 802.15.4.", "corpus_id": 4649946}, "pos": {"sha": "2577f910134a07940c4c3505b19a56e153afe26f", "title": "DualMOP-RPL: Supporting Multiple Modes of Downward Routing in a Single RPL Network", "abstract": "RPL is an IPv6 routing protocol for low-power and lossy networks (LLNs) designed to meet the requirements of a wide range of LLN applications including smart grid AMIs, home and building automation, industrial and environmental monitoring, health care, wireless sensor networks, and the Internet of Things (IoT) in general with thousands and millions of nodes interconnected through multihop mesh networks. RPL constructs tree-like routing topology rooted at an LLN border router (LBR) and supports bidirectional IPv6 communication to and from the mesh devices by providing both upward and downward routing over the routing tree. In this article, we focus on the interoperability of downward routing and supporting its two modes of operations (MOPs) defined in the RPL standard (RFC 6550). Specifically, we show that there exists a serious connectivity problem in RPL protocol when two MOPs are mixed within a single network, even for standard-compliant implementations, which may result in network partitions. To address this problem, this article proposes DualMOP-RPL, an enhanced version of RPL, which supports nodes with different MOPs for downward routing to communicate gracefully in a single RPL network while preserving the high bidirectional data delivery performance. DualMOP-RPL allows multiple overlapping RPL networks in the same geographical regions to cooperate as a single densely connected network even if those networks are using different MOPs. This will not only improve the link qualities and routing performances of the networks but also allow for network migrations and alternate routing in the case of LBR failures. We evaluate DualMOP-RPL through extensive simulations and testbed experiments and show that our proposal eliminates all the problems we have identified.", "corpus_id": 10613656}, "neg": {"sha": "55afadb62e3a7e29078d03ad7d8d1cf09d24da16", "title": "Sensing as a Service (S2aaS): Buying and Selling IoT Data", "abstract": "Over the past few years, a large number of IoT solutions have come to the IoT marketplace [2]. Typically, each solution, consisting of one or more Internet Connected Objects (ICO), is designed to perform a single or minimal number of tasks (primary usage). For example, a smart sprinkler may only be activated if the soil moisture falls below a certain level in a garden. Further, smart plugs allow users to control electronic appliances (including legacy appliances) remotely or create automated schedules. Such automation not only brings convenience to users but also reduces resource wastage (e.g. through efficient planning and predictions).", "corpus_id": 2051243}}, {"query": {"sha": "23d2a2d2c37dee239726326466ef7ce4520065cd", "title": "Representing Web Graphs", "abstract": "A Web repository is a large special-purpose collection of Web pages and associated indexes. 
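The parent-selection rule sketched in the QU-RPL abstract, prefer neighbors that are both close to the LBR and lightly loaded, can be illustrated in a few lines. The combination rule and the weight alpha below are assumptions for illustration, not the published QU-RPL metric:

```python
# Hedged sketch of queue-utilization-aware parent selection in the spirit of
# QU-RPL. The weighting scheme is an illustrative assumption; the actual
# metric is defined in the paper.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    hop_distance: int         # hops to the LLN border router (LBR)
    queue_utilization: float  # fraction of the packet queue in use, 0.0-1.0

def select_parent(neighbors, alpha=0.5):
    """Prefer neighbors that are close to the LBR *and* lightly loaded."""
    max_hops = max(n.hop_distance for n in neighbors)
    def cost(n):
        # Normalize hop distance so both terms lie in [0, 1].
        return alpha * n.hop_distance / max_hops + (1 - alpha) * n.queue_utilization
    return min(neighbors, key=cost)

candidates = [Neighbor(1, 2, 0.9), Neighbor(2, 3, 0.1), Neighbor(3, 2, 0.4)]
print(select_parent(candidates).node_id)  # -> 3: near the LBR, moderate load
```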
Many useful queries and computations over such repositories involve traversal and navigation of the Web graph. However, efficient traversal of huge Web graphs containing several hundred million vertices and a few billion edges is a challenging problem. An additional complication is the lack of a schema to describe the structure of Web graphs. As a result, naive graph representation schemes can significantly increase query execution time and limit the usefulness of Web repositories. In this paper, we propose a novel representation for Web graphs, called an S-Node representation. We demonstrate that S-Node representations are highly space-efficient, enabling in-memory processing of very large Web graphs. In addition, we present detailed experiments that show that S-Node representations can significantly reduce query execution times when compared with other schemes for representing Web graphs.", "corpus_id": 291219}, "pos": {"sha": "44632ddf66c516b07b17c4fa195bc7731a091cb4", "title": "Trawling the Web for Emerging Cyber-Communities", "abstract": "The web harbors a large number of communities - groups of content-creators sharing a common interest - each of which manifests itself as a set of interlinked web pages. Newsgroups and commercial web directories together contain of the order of 20,000 such communities; our particular interest here is on emerging communities - those that have little or no representation in such fora. The subject of this paper is the systematic enumeration of over 100,000 such emerging communities from a web crawl: we call our process trawling. We motivate a graph-theoretic approach to locating such communities, and describe the algorithms, and the algorithmic engineering necessary to find structures that subscribe to this notion, the challenges in handling such a huge data set, and the results of our experiment.", "corpus_id": 7069190}, "neg": {"sha": "69d18d2a845c99155609298b582e19037807a567", "title": "The visual perception of 3D shape", "abstract": "A fundamental problem for the visual perception of 3D shape is that patterns of optical stimulation are inherently ambiguous. Recent mathematical analyses have shown, however, that these ambiguities can be highly constrained, so that many aspects of 3D structure are uniquely specified even though others might be underdetermined. Empirical results with human observers reveal a similar pattern of performance. Judgments about 3D shape are often systematically distorted relative to the actual structure of an observed scene, but these distortions are typically constrained to a limited class of transformations. These findings suggest that the perceptual representation of 3D shape involves a relatively abstract data structure that is based primarily on qualitative properties that can be reliably determined from visual information.", "corpus_id": 395877}}, {"query": {"sha": "16b1adad7f7126b5c5fe5360df134a6586086621", "title": "Health Monitoring of Civil Infrastructures Using Wireless Sensor Networks", "abstract": "A Wireless Sensor Network (WSN) for Structural Health Monitoring (SHM) is designed, implemented, deployed and tested on the 4200ft long main span and the south tower of the Golden Gate Bridge (GGB). Ambient structural vibrations are reliably measured at a low cost and without interfering with the operation of the bridge. Requirements that SHM imposes on WSN are identified and new solutions to meet these requirements are proposed and implemented.
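The S-Node paper's motivation, fitting a very large link graph in memory so traversal queries stay fast, is commonly addressed with compressed adjacency layouts. A hedged sketch of a generic compressed sparse row (CSR) layout; this is NOT the S-Node scheme itself, just the baseline idea of a compact in-memory representation:

```python
# Hedged sketch: a compressed sparse row (CSR) adjacency layout, a generic
# space-efficient graph representation. Illustrates compact in-memory graph
# storage in general; it is not the S-Node scheme of the paper.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 0)]  # toy web graph: page -> page links
num_nodes = 3

counts = np.zeros(num_nodes, dtype=np.int64)
for src, _ in edges:
    counts[src] += 1
# offsets[v] .. offsets[v+1] delimit v's out-neighbors inside targets.
offsets = np.concatenate(([0], np.cumsum(counts)))
targets = np.empty(len(edges), dtype=np.int64)
cursor = offsets[:-1].copy()
for src, dst in edges:
    targets[cursor[src]] = dst
    cursor[src] += 1

def out_links(v):
    return targets[offsets[v]:offsets[v + 1]]

print(out_links(0))  # -> [1 2]
```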
In the GGB deployment, 64 nodes are distributed over the main span and the tower, collecting ambient vibrations synchronously at a 1 kHz rate, with less than 10\u03bcs jitter, and with an accuracy of 30\u03bcG. The sampled data is collected reliably over a 46-hop network, with a bandwidth of 441B/s at the 46th hop. The collected data agrees with theoretical models and previous studies of the bridge. The deployment is the largest WSN for SHM.", "corpus_id": 2355810}, "pos": {"sha": "2530079d98f216a88dd5d91be12a48c6e39d143e", "title": "A macroscope in the redwoods", "abstract": "The wireless sensor network \"macroscope\" offers the potential to advance science by enabling dense temporal and spatial monitoring of large physical volumes. This paper presents a case study of a wireless sensor network that recorded 44 days in the life of a 70-meter tall redwood tree, at a density of every 5 minutes in time and every 2 meters in space. Each node measured air temperature, relative humidity, and photosynthetically active solar radiation. The network captured a detailed picture of the complex spatial variation and temporal dynamics of the microclimate surrounding a coastal redwood tree. This paper describes the deployed network and then employs a multi-dimensional analysis methodology to reveal trends and gradients in this large and previously-unobtainable dataset. An analysis of system performance data is then performed, suggesting lessons for future deployments.", "corpus_id": 1233150}, "neg": {"sha": "9561babe4c4934bf00484f8b243717c582b23665", "title": "Detection and Modeling of Buildings from Multiple Aerial Images", "abstract": "Automatic detection and description of cultural features, such as buildings, from aerial images is becoming increasingly important for a number of applications. This task also offers an excellent domain for studying the general problems of scene segmentation, 3-D inference and shape description under highly challenging conditions. We describe a system that detects and constructs 3-D models for rectilinear buildings with either flat or symmetric gable roofs from multiple aerial images; the multiple images, however, need not be stereo pairs (i.e. they may be acquired at different times). Hypotheses for rectangular roof components are generated by grouping lines in the images hierarchically; the hypotheses are verified by searching for presence of predicted walls and shadows. The hypothesis generation process combines the tasks of hierarchical grouping with matching at successive stages. Overlap and containment relations between 3-D structures are analyzed to resolve conflicts. This system has been tested on a large number of real examples with good results, some of which, and their evaluation, are included in the paper.", "corpus_id": 9290548}}, {"query": {"sha": "b134a7e710704cc328b7f55853d9821dcab6ea17", "title": "Enriched LDA (ELDA): Combination of latent Dirichlet allocation with word co-occurrence analysis for aspect extraction", "abstract": "Aspect extraction is one of the fundamental steps in analyzing the characteristics of opinions, feelings and emotions expressed in textual data provided for a certain topic. Current aspect extraction techniques are mostly based on topic models; however, employing only topic models causes incoherent aspects to be generated.
Therefore, this paper aims to discover more precise aspects by incorporating co-occurrence relations as prior domain knowledge into the Latent Dirichlet Allocation (LDA) topic model. In the proposed method, first, the preliminary aspects are generated based on LDA. Then, in an iterative manner, the prior knowledge is extracted automatically from co-occurrence relations and similar aspects of relevant topics. Finally, the extracted knowledge is incorporated into the LDA model. The iterations improve the quality of the extracted aspects. The competence of the proposed ELDA for the aspect extraction task is evaluated through experiments on two datasets in the English and Persian languages. The experimental results indicate that ELDA not only outperforms the state-of-the-art alternatives in terms of topic coherence and precision, but also has no particular dependency on the written language and can be applied to all languages with reasonable accuracy. Thus, ELDA can impact natural language processing applications, particularly in languages with limited linguistic resources.", "corpus_id": 2444212}, "pos": {"sha": "2d5e004b36eaf5e019c334f589b2ca423e9d2d2e", "title": "Exploiting Domain Knowledge in Aspect Extraction", "abstract": "Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle the issue, some knowledge-based topic models have been proposed, which allow the user to input some prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized P\u00f3lya urn (E-GPU) model (which is also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MC-LDA outperforms the existing state-of-the-art.", "corpus_id": 961871}, "neg": {"sha": "dd596f9da673fd7b8af9a8bfaac7a1f617086fe6", "title": "Bigrams of Syntactic Labels for Authorship Discrimination of Short Texts", "abstract": "We present a method for authorship discrimination that is based on the frequency of bigrams of syntactic labels that arise from partial parsing of the text. We show that this method, alone or combined with other classification features, achieves a high accuracy on discrimination of the work of Anne and Charlotte Bront\u00eb, which is very difficult to do by traditional methods. Moreover, high accuracies are achieved even on fragments of text little more than 200 words long.", "corpus_id": 13111019}}, {"query": {"sha": "26a5d97eb1cbc33f044390b4300a44bca0c84052", "title": "Human Factors in Agile Software Development", "abstract": "Through our four years of experiments on students' Scrum-based agile software development (ASD) process, we have gained a deep understanding of the human factors of agile methodology.
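The mechanism ELDA-style models rely on, biasing LDA with prior knowledge about which words belong together, can be sketched with gensim's asymmetric topic-word prior. A hedged sketch: the toy corpus, prior values, and the specific way of raising `eta` are illustrative assumptions, not the published ELDA procedure:

```python
# Hedged sketch: plain LDA aspect extraction with gensim, plus the hook that
# knowledge-enriched variants use: the topic-word prior `eta`. Raising eta for
# word-topic pairs suggested by co-occurrence statistics biases those words
# into the same topic. Data and values are illustrative.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["battery", "life", "screen"],
         ["battery", "charge", "slow"],
         ["screen", "bright", "display"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

num_topics = 2
eta = np.full((num_topics, len(dictionary)), 0.01)
eta[0, dictionary.token2id["battery"]] = 1.0  # "encourage" battery into topic 0

lda = LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
               eta=eta, passes=50, random_state=0)
for topic_id, words in lda.show_topics(num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```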
We designed an agile project management tool - the HASE collaboration development platform - to support more than 400 students self-organized into 80 teams to practice ASD. In this thesis, based on our experiments, simulations and analysis, we contribute a series of solutions and insights in this research, including 1) a Goal Net based method to enhance goal and requirement management for the ASD process, 2) a novel Simple Multi-Agent Real-Time (SMART) approach to enhance intelligent task allocation for the ASD process, 3) a Fuzzy Cognitive Maps (FCMs) based method to enhance emotion and morale management for the ASD process, 4) the first large scale in-depth empirical insights on human factors in the ASD process which have not yet been well studied by existing research, and 5) the first work to identify the ASD process as a human-computation system that exploits human effort to perform tasks that computers are not good at solving. On the other hand, computers can assist human decision making in the ASD process.", "corpus_id": 5270509}, "pos": {"sha": "e26285200097971c79dbcb5da0c30b12f512e250", "title": "U-SCRUM: An Agile Methodology for Promoting Usability", "abstract": "SCRUM poses key challenges for usability (Baxter et al., 2008). First, product goals are set without an adequate study of the user\u2019s needs and context. The user stories selected may not be good enough from the usability perspective. Second, user stories of usability import may not be prioritized high enough. Third, given the fact that a product owner thinks in terms of the minimal marketable set of features in a just-in-time process, it is difficult for the development team to get a holistic view of the desired product or features. This experience report proposes U-SCRUM as a variant of the SCRUM methodology. Unlike typical SCRUM, where at best a team member is responsible for usability, U-SCRUM is based on our experience with having two product owners, one focused on usability and the other on the more conventional functions. Our preliminary result is that U-SCRUM yields better usability than SCRUM.", "corpus_id": 23088110}, "neg": {"sha": "be588555329cdccac5a5845e217401194717cd47", "title": "A Survey on Stock Market Prediction Techniques", "abstract": "Different techniques are available for the prediction of the stock market. Some very popular ones are Neural Networks, Data Mining, Hidden Markov Models (HMM) and Neuro-Fuzzy systems. Of these, Neural Networks and Neuro-Fuzzy Systems are the leading machine learning techniques in the stock market index prediction area. Other traditional methods do not cover all possible relations of stock price movements. Neural Networks and Markov Models can be used exclusively in the financial markets and forecasting of stock prices. Neural Networks discover the nonlinear relationships in the input data set without knowing the relation between input and output. For sample data which contain noisy information, an ANN can generalize and correctly infer the unseen part of the data. Hence, ANNs are better suited than other models for the prediction of stock markets.", "corpus_id": 15301355}}, {"query": {"sha": "81824cf4fe77ee1416321904954885c0cf51b746", "title": "A Review on Efficient Temperature Prediction System Using Back Propagation Neural Network", "abstract": "This paper presents a review of applications of artificial neural networks in the weather forecasting area. Artificial neural networks in general are explained; some limitations and some proven benefits of neural networks are discussed.
Accurate weather forecasting has been one of the most challenging problems around the world. The technical milestones that have been achieved by researchers in this field are reviewed and presented in this survey paper. This paper also contains a proposed artificial neural network approach that analyzes data and learns from it to make future temperature predictions, with the combination of wireless technology and Statistica software. Keywords\u2014 Artificial neural network, Artificial intelligence, Back propagation neural network, Heavy weather software, Statistica software, Wireless technology.", "corpus_id": 7579324}, "pos": {"sha": "67874c7313339e65aeb8ed90c5fc73f8c68bbf5a", "title": "Intelligent weather forecast", "abstract": "In recent years, many solutions for intelligent weather forecasting have been proposed, especially for temperature and rainfall; however, it is difficult to simulate the meteorological phenomena and the corresponding characteristics of weather when some complex differential equations and computational algorithms are merely piled up. On the basis of a review of research on the non-linear characteristics of meteorology, this work describes a methodology for short-term temperature and rainfall forecasting over the east coast of China based on some necessary data preprocessing technique and the dynamic weighted time-delay neural networks (DWTDNN), in which each neuron in the input layer is scaled by a weighting function that captures the temporal dynamics of the biological task. This network is a simplified version of the focused gamma network and an extension of TDNN as it incorporates a priori knowledge available about the task into the network architecture. As an example, the estimations produced by the methodology were applied to 8 different weather forecasting datasets provided by the Shanghai Meteorology Centre to make the result more practical. The results confirm that proposed solutions have the potential for successful application to the problem of temperature and rainfall estimation, and the relationships between the factors that contribute to certain weather conditions can be estimated to a certain extent.", "corpus_id": 23989928}, "neg": {"sha": "c7233266c0d367b5c3fee49492337badf9547863", "title": "A survey of channel models for underwater optical wireless communication", "abstract": "This paper describes and assesses underwater channel models for optical wireless communication. Models considered are: inherent optical properties; vector radiative transfer theory with the small-angle analytical solution and numerical solutions of the vector radiative transfer equation (Monte Carlo, discrete ordinates and invariant imbedding). Variable composition and refractive index, in addition to background light, are highlighted as aspects of the channel which advanced models must represent effectively. Models are assessed against these aspects in terms of their ability to predict transmitted power and spatial and temporal distributions of light a specified distance from a transmitter.
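A back-propagation temperature predictor of the kind the review above surveys reduces, in its simplest form, to a small feed-forward regressor. A hedged sketch on synthetic data; the features, sizes, and coefficients are illustrative assumptions:

```python
# Hedged sketch: a small feed-forward (back-propagation) network for next-day
# temperature prediction, in the spirit of the review above. Synthetic data;
# feature choice and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features: [today's temp, humidity, pressure anomaly]; target: tomorrow's temp.
X = rng.normal(size=(500, 3))
y = 0.8 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))
```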
Monte Carlo numerical methods are found to be the most versatile but are compromised by long computational time and greater errors than other methods.", "corpus_id": 21967292}}, {"query": {"sha": "599d0462bb6894243bc098a1993d68d38ad7db27", "title": "Designing reconfigurable large-scale deep learning systems using stochastic computing", "abstract": "Deep Learning, as an important branch of machine learning and neural networks, is playing an increasingly important role in a number of fields like computer vision, natural language processing, etc. However, large-scale deep learning systems mainly operate in high-performance server clusters, thus restricting the application extensions to personal or mobile devices. The solution proposed in this paper takes advantage of the fantastic features of stochastic computing methods. Stochastic computing is a type of data representation and processing technique, which uses a binary bit stream to represent a probability number (by counting the number of ones in this bit stream). In the stochastic computing area, some key arithmetic operations such as additions or multiplications can be implemented with very simple components like AND gates or multiplexers, respectively. Thus it provides an immense design space for integrating a large number of neurons and enabling fully parallel and scalable hardware implementations of large-scale deep learning systems. In this paper, we present a reconfigurable large-scale deep learning system based on stochastic computing technologies, including the design of the neuron, the convolution function, the back-propagation function and some other basic operations. The network-on-chip technique is also proposed in this paper to achieve the goal of implementing a large-scale hardware system. Our experiments validate the functionality of reconfigurable deep learning systems using stochastic computing, and demonstrate that when the bit streams are set to be 8192 bits, classification of MNIST digits by stochastic computing can achieve an error rate as low as that of normal arithmetic operations.", "corpus_id": 738155}, "pos": {"sha": "ad0fac81d56f4609bb47fa923a4ea782614ac5dd", "title": "An Efficient Hardware Architecture for a Neural Network Activation Function Generator", "abstract": "This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable for numerous activation functions. This has been achieved by using a minimax polynomial and through optimal placement of the approximating polynomials based on the results of a genetic algorithm. The approximation error of the proposed method compares favourably to all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.", "corpus_id": 12737006}, "neg": {"sha": "f264e8b33c0d49a692a6ce2c4bcb28588aeb7d97", "title": "Recurrent Neural Network Regularization", "abstract": "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks.
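The stochastic-computing multiplication described in the abstract above, an AND gate acting on two random bit streams, is easy to simulate in software. A short sketch; the 8192-bit stream length echoes the figure quoted in the abstract, while the operand values are arbitrary:

```python
# Hedged sketch: stochastic-computing multiplication. Two probabilities are
# encoded as random bit streams; a bitwise AND of independent streams yields
# a stream whose density of ones approximates the product.
import numpy as np

rng = np.random.default_rng(0)
N = 8192  # stream length, matching the bit-stream length quoted above

def encode(p, n=N):
    """Unipolar encoding: each bit is 1 with probability p."""
    return rng.random(n) < p

a, b = encode(0.75), encode(0.40)
product_stream = a & b            # a single AND gate per bit in hardware
estimate = product_stream.mean()  # decode by counting ones
print(estimate)                   # close to 0.75 * 0.40 = 0.30
```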
These tasks include language modeling, speech recognition, and machine translation.", "corpus_id": 17719760}}, {"query": {"sha": "b1eac2ca5c03ae34a8decce19dacbdd66ef092b6", "title": "Beyond Fano's inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications", "abstract": "Fano\u2019s inequality lower bounds the probability of transmission error through a communication channel. Applied to classification problems, it provides a lower bound on the Bayes error rate and motivates the widely used Infomax principle. In modern machine learning, we are often interested in more than just the error rate. In medical diagnosis, different errors incur different costs; hence, the overall risk is cost-sensitive. Two other popular criteria are balanced error rate (BER) and F-score. In this work, we focus on the two-class problem and use a general definition of conditional entropy (including Shannon\u2019s as a special case) to derive upper/lower bounds on the optimal F-score, BER and cost-sensitive risk, extending Fano\u2019s result. As a consequence, we show that Infomax is not suitable for optimizing F-score or cost-sensitive risk, in that it can potentially lead to low F-score and high risk. For cost-sensitive risk, we propose a new conditional entropy formulation which avoids this inconsistency. In addition, we consider the common practice of using a threshold on the posterior probability to tune performance of a classifier. As is widely known, a threshold of 0.5, where the posteriors cross, minimizes error rate\u2014we derive similar optimal thresholds for F-score and BER.", "corpus_id": 1945434}, "pos": {"sha": "5264ae4ea4411426ddd91dc780c2892c3ff933d3", "title": "An Introduction to Variable and Feature Selection", "abstract": "Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.", "corpus_id": 379259}, "neg": {"sha": "81b869b02f8e1dfb0e1f5491ef0944937c1dc8a6", "title": "EvoSuite: On the Challenges of Test Case Generation in the Real World", "abstract": "Test case generation is an important but tedious task, such that researchers have devised many different prototypes that aim to automate it. As these are research prototypes, they are usually only evaluated on a few hand-selected case studies, such that despite great results there remains the question of usability in the \u201creal world\u201d. EVOSUITE is such a research prototype, which automatically generates unit test suites for classes written in the Java programming language. In our ongoing endeavour to achieve real-world usability, we recently passed the milestone success of applying EVOSUITE on one hundred projects randomly selected from the SourceForge open source platform.
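The practice the Fano-bounds abstract discusses, tuning a posterior threshold for F-score rather than defaulting to the error-rate-optimal 0.5, can be demonstrated empirically with a simple sweep. A hedged sketch on synthetic scores; the closed-form optimal-threshold derivation itself is in the paper, not reproduced here:

```python
# Hedged sketch: sweeping a posterior-probability threshold to maximize
# F-score instead of using the error-rate-optimal 0.5. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
# Noisy "posterior" scores correlated with the labels.
scores = np.clip(0.3 * y + 0.35 + rng.normal(scale=0.15, size=1000), 0, 1)

def f_score(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=lambda t: f_score(y, (scores >= t).astype(int)))
print("F-optimal threshold:", best)  # generally not 0.5
```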
This paper discusses the technical challenges that a testing tool like EVOSUITE needs to address when handling Java classes coming from real-world open source projects, and when producing JUnit test suites intended for real users.", "corpus_id": 17988395}}, {"query": {"sha": "86d4ea9d82b4e4e14e54e6bd09e329b244fdfe3b", "title": "The flooding time synchronization protocol", "abstract": "Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms.", "corpus_id": 9897231}, "pos": {"sha": "35b225cf1cb2ff030eff1ffd9c554a87418b16ee", "title": "Sensor network-based countersniper system", "abstract": "An ad-hoc wireless sensor network-based system is presented that detects and accurately locates shooters even in urban environments. The system consists of a large number of cheap sensors communicating through an ad-hoc wireless network, thus it is capable of tolerating multiple sensor failures, provides good coverage and high accuracy, and is capable of overcoming multipath effects. The performance of the proposed system is superior to that of centralized countersniper systems in such challenging environments as dense urban terrain. In this paper, in addition to the overall system architecture, the acoustic signal detection, the most important middleware services and the unique sensor fusion algorithm are also presented. The system performance is analyzed using real measurement data obtained at a US Army MOUT (Military Operations in Urban Terrain) facility.", "corpus_id": 734039}, "neg": {"sha": "34200d9fc5843237c2df7c364afe2c6a4e740a66", "title": "Algorithm of a Perspective Transform-Based PDF417 Barcode Recognition", "abstract": "When a PDF417 barcode is recognized, there are major recognition processes such as segmentation, normalization, and decoding. Among them, the segmentation and normalization steps are very important because they have a strong influence on the rate of barcode recognition. There are existing segmentation and normalization techniques for processing barcode images, but they have some issues, as follows. First, the previous normalization techniques need an additional restoration process and apply an interpolation process. Second, the previous recognition algorithms recognize a barcode image well only when it is placed in the predefined rectangular area. Therefore, we propose a novel segmentation and normalization method in PDF417 with the aims of improving its recognition rate and precision.
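The clock-skew estimation mentioned in the FTSP abstract amounts to fitting a line through (local time, reference time) pairs collected from MAC-layer timestamps. A hedged sketch with made-up timestamp values; FTSP's actual estimator details are in the paper:

```python
# Hedged sketch: clock-skew estimation by linear regression over
# (local time, reference time) pairs, the core idea behind FTSP-style
# synchronization. Timestamp values are made up for illustration.
import numpy as np

# MAC-layer timestamp pairs: local clock vs. the flooded reference clock.
local = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
reference = np.array([1001.0, 2002.1, 3002.9, 4004.2, 5005.0])

# reference ~ skew * local + offset; least squares fits both terms.
skew, offset = np.polyfit(local, reference, 1)

def to_reference(t_local):
    return skew * t_local + offset

print(round(skew, 6), round(offset, 3), round(to_reference(6000.0), 1))
```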
The segmentation process to detect the barcode area in an image uses the conventional morphology and Hough transform methods. The normalization process of the barcode region is based on the conventional perspective transformation and warping algorithms. In addition, we perform experiments using both experimental and actual data for evaluating our algorithms. Consequently, our experimental results can be summarized as follows. First, our method showed stable performance compared to existing PDF417 barcode detection and recognition. Second, it overcame the limitation that the barcode in an input image must be located in a predefined rectangular area. Finally, it is expected that our result can be used as a restoration tool for printed images such as documents and pictures.", "corpus_id": 207263094}}, {"query": {"sha": "acb2d62bdb0a63e4137839964d814be16e54c9e2", "title": "Similarity in Semantic Graphs: Combining Structural, Literal, and Ontology-based Measures", "abstract": "Semantic graphs provide a valuable way to represent data while preserving real world meaning. As these graphs become more popular for storing large quantities of data, it is important to have methods of determining similarity between nodes in the graph. This paper extends previous structural similarity algorithms by taking advantage of meaning contained in a graph\u2019s literals and the graph\u2019s ontology and allowing users to control how much each type of similarity affects overall scores. Preliminary tests indicate that including these sources of similarity increases scores in a way that is better aligned with human intuition.", "corpus_id": 12953296}, "pos": {"sha": "009dbf3187862352aac542bf7d61e27bce6b27f5", "title": "SimRank: a measure of structural-context similarity", "abstract": "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", "corpus_id": 5704492}, "neg": {"sha": "c015eccc2fb60ec0d4793ca6743dbc8400356a7f", "title": "Optimizing Artificial Neural Networks using Cat Swarm Optimization Algorithm", "abstract": "An Artificial Neural Network (ANN) is an abstract representation of the biological nervous system which has the ability to solve many complex problems. The interesting attributes it exhibits make an ANN capable of \u201clearning\u201d. ANN learning is achieved by training the neural network using a training algorithm. Aside from choosing a training algorithm to train ANNs, the ANN structure can also be optimized by applying certain pruning techniques to reduce network complexity.
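SimRank's recursive definition, two objects are similar if their in-neighbors are similar, with s(a, a) = 1 and decay factor C, admits a direct fixed-point iteration. A short sketch on a toy graph; the graph and C value are illustrative, and this naive version ignores the paper's efficiency techniques:

```python
# Hedged sketch: naive SimRank iteration on a toy directed graph.
# s(a, b) = C / (|I(a)||I(b)|) * sum_{i in I(a), j in I(b)} s(i, j),
# with s(a, a) = 1. Graph and decay factor C are illustrative.
import numpy as np

in_neighbors = {0: [2], 1: [2], 2: [0, 1]}  # node -> list of in-neighbors
n, C, iters = 3, 0.8, 20

s = np.eye(n)
for _ in range(iters):
    new_s = np.eye(n)
    for a in range(n):
        for b in range(n):
            if a == b or not in_neighbors[a] or not in_neighbors[b]:
                continue
            total = sum(s[i, j] for i in in_neighbors[a] for j in in_neighbors[b])
            new_s[a, b] = C * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
    s = new_s

print(np.round(s, 3))  # nodes 0 and 1 end up similar: both are pointed to by 2
```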
The Cat Swarm Optimization (CSO) algorithm, a swarm intelligence-based optimization algorithm that mimics the behavior of cats, is used as the training algorithm and the Optimal Brain Damage (OBD) method as the pruning algorithm. This study suggests an approach to ANN training through the simultaneous optimization of the connection weights and ANN structure. Experiments performed on benchmark datasets taken from the UCI machine learning repository show that the proposed CSONNOBD is an effective tool for training neural networks.", "corpus_id": 30155362}}, {"query": {"sha": "bba8810ee6dafd858a7a277e00a83f1ffd94a373", "title": "A review of the mandarin-english code-switching corpus: SEAME", "abstract": "In this paper, we report the development of the South East Asia Mandarin-English (SEAME) corpus, including 63 hours of transcribed spontaneous Mandarin-English code-switching speech in its first release, and an update adding 129 transcribed hours of speech. The corpus was developed for code-switching speech recognition research, such as LVCSR, language recognition, and language segmentation. It has been publicly available through the LDC since 2015. The corpus was recorded in unscripted interview and conversation settings and therefore consists of spontaneous speech. This paper seeks to present comprehensive statistics and analysis of the corpus after the update in terms of its composition, speaker profiles and code-switching characteristics. This paper will also review its suitability for various code-switching related research and possible further developments.", "corpus_id": 3410408}, "pos": {"sha": "301eb1c203cd1467ada5282e1503c38a547e744e", "title": "Developing Language-tagged Corpora for Code-switching Tweets", "abstract": "Code-switching, where a speaker switches between languages mid-utterance, is frequently used by multilingual populations worldwide. Despite its prevalence, limited effort has been devoted to developing computational approaches or even basic linguistic resources to support research into the processing of such mixed-language data. We present a user-centric approach to collecting code-switched utterances from social media posts, and develop language-universal guidelines for the annotation of code-switched data. We also present results for several baseline language identification models on our corpora and demonstrate that language identification in code-switched text is a difficult task that calls for deeper investigation.", "corpus_id": 10074346}, "neg": {"sha": "f28cb37e0f1a225f0d4f27f43ef4e05eee8b321c", "title": "SEAME: a Mandarin-English code-switching speech corpus in south-east asia", "abstract": "In Singapore and Malaysia, people often speak a mixture of Mandarin and English within a single sentence. We call such sentences intra-sentential code-switch sentences. In this paper, we report on the development of a Mandarin-English code-switching spontaneous speech corpus: SEAME. The corpus is developed as part of a multilingual speech recognition project and will be used to examine how Mandarin-English code-switch speech occurs in the spoken language in South-East Asia. Additionally, it can provide insights into the development of large vocabulary continuous speech recognition (LVCSR) for code-switching speech. The corpus collected consists of intra-sentential code-switching utterances that are recorded under both interview and conversational settings.
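The language-identification task that both code-switching abstracts treat as hard has a trivial baseline for Mandarin-English: Unicode script detection. A hedged sketch; real systems need far more than this heuristic (named entities, romanized Mandarin, and intra-word switches all defeat it), but it shows the token-level task:

```python
# Hedged sketch: token-level language ID for Mandarin-English code-switched
# text via Unicode script ranges. A baseline heuristic only; real systems
# (as the abstracts note) require trained models.
def token_language(token):
    if any("\u4e00" <= ch <= "\u9fff" for ch in token):  # CJK Unified Ideographs
        return "zh"
    if any(ch.isascii() and ch.isalpha() for ch in token):
        return "en"
    return "other"

utterance = ["我", "昨天", "去", "meeting", "很", "boring"]
print([(tok, token_language(tok)) for tok in utterance])
# -> zh, zh, zh, en, zh, en
```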
This paper describes the corpus design and the analysis of the collected corpus.", "corpus_id": 5631708}}, {"query": {"sha": "a08576b8439cf948cd7a78451ae887ddbfaaabba", "title": "Automating role-based provisioning by learning from examples", "abstract": "Role-based provisioning has been adopted as a standard component in leading Identity Management products due to its low administration cost. However, the cost of adjusting existing roles to entitlements from newly deployed applications is usually very high. In this paper, a learning-based approach to automate the provisioning process is proposed and its effectiveness is verified by real provisioning data. Specific learning issues related to provisioning are identified and relevant solutions are presented.", "corpus_id": 18843029}, "pos": {"sha": "e95647e0d3ebae2683059e3c2c4a3bc10580374a", "title": "Mining roles with semantic meanings", "abstract": "With the growing adoption of role-based access control (RBAC) in commercial security and identity management products, how to facilitate the process of migrating a non-RBAC system to an RBAC system has become a problem with significant business impact. Researchers have proposed to use data mining techniques to discover roles to complement the costly top-down approaches for RBAC system construction. A key problem that has not been adequately addressed by existing role mining approaches is how to discover roles with semantic meanings. In this paper, we study the problem in two settings with different information availability. When the only information is the user-permission relation, we propose to discover roles whose semantic meaning is based on formal concept lattices. We argue that the theory of formal concept analysis provides a solid theoretical foundation for mining roles from the user-permission relation. When user-attribute information is also available, we propose to create roles that can be explained by expressions of user-attributes. Since an expression of attributes describes a real-world concept, the corresponding role represents a real-world concept as well. Furthermore, the algorithms we proposed balance the semantic guarantee of roles with system complexity. Our experimental results demonstrate the effectiveness of our approaches.", "corpus_id": 1981026}, "neg": {"sha": "fa1db61ea65c9f478c85e757f68642aee21a776a", "title": "A Data Sorting and Searching Scheme Based on Distributed Asymmetric Searchable Encryption", "abstract": "Searchable encryption algorithms are a hot topic nowadays. They can sort the results of searching and return the optimal matching files. The essence of asymmetric searchable encryption is that users exchange encrypted data: one party sends a ciphertext encrypted with one key, and the other party receives the ciphertext using another key. The encryption key is not the same as the decryption key, and neither key can be deduced from the other, which greatly enhances information protection and can prevent leakage of the user\u2019s search pattern. In order to get higher efficiency and security in information retrieval, in this paper we introduce the concept of distributed searchable asymmetric encryption, which is useful for security and can enable search operations on encrypted data. Moreover, we give the proof of security.
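Role mining from a user-permission relation has, at its simplest, a one-liner core: group users with identical permission sets into candidate roles. A hedged sketch; the lattice-based and attribute-expression approaches in the abstract above are far richer, and the data here is made up:

```python
# Hedged sketch: the simplest form of role mining from a user-permission
# relation - users with identical permission sets form candidate roles.
# Illustrative data; real approaches use formal concept lattices and more.
from collections import defaultdict

user_permissions = {
    "alice": {"read_ehr", "write_ehr"},
    "bob":   {"read_ehr", "write_ehr"},
    "carol": {"read_ehr"},
}

roles = defaultdict(list)
for user, perms in user_permissions.items():
    roles[frozenset(perms)].append(user)

for i, (perms, users) in enumerate(roles.items()):
    print(f"role_{i}: perms={sorted(perms)} users={sorted(users)}")
```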
Finally, experimental results show that our method has better retrieval efficiency.", "corpus_id": 3992072}}, {"query": {"sha": "39f6148d95daf9818d8ba6ede7e619e3f9c035bc", "title": "Incentive Mechanisms for Crowdsourcing Platforms", "abstract": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users\u2019 levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. Accordingly, the aim of this paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, along with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.", "corpus_id": 27408882}, "pos": {"sha": "2fda75479692808771fafece53625c3582f08f22", "title": "Performing a check-in: emerging practices, norms and 'conflicts' in location-sharing using foursquare", "abstract": "Location-sharing services have a long history in research, but have only recently become available for consumers. Most popular commercial location-sharing services differ from previous research efforts in important ways: they use manual 'check-ins' to pair user location with semantically named venues rather than tracking; venues are visible to all users; location is shared with a potentially very large audience; and they employ incentives. By analysis of 20 in-depth interviews with foursquare users and 47 survey responses, we gained insight into emerging social practices surrounding location-sharing. We see a shift from privacy issues and data deluge, to more performative considerations in sharing one's location. We discuss performance aspects enabled by check-ins to public venues, and show emergent, but sometimes conflicting norms (not) to check-in.", "corpus_id": 3045455}, "neg": {"sha": "1c8e841d9ea4f82f05707358cf79302fece9e721", "title": "The Ties That Bond: Re-Examining the Relationship between Facebook Use and Bonding Social Capital", "abstract": "Research has established a positive relationship between measures of Facebook use and perceptions of social capital. Like other social network sites, Facebook is especially well-positioned to enhance users' bridging social capital because it lowers coordination costs associated with maintaining a large, potentially diverse network of Friends. The relationship between Facebook use and perceived bonding social capital, however, is not as clear. Previous studies have found a positive relationship between Facebook Intensity (FBI) and a measure of bonding social capital that focuses on benefits accrued locally, i.e., within a university context. This study looks at the relationship between Facebook use, offline behaviors, and social provisions, a broad-based measure of social support that taps into a dimension of bonding.
Findings suggest that while FBI no longer predicts bonding, specific behaviors on Facebook are positively linked to perceptions of three social provisions related to one's closest friends and family.", "corpus_id": 2712626}}, {"query": {"sha": "b1334838e9fa1909bc9c55e697681c60cf7ecf8a", "title": "How Content Volume on Landing Pages Influences Consumer Behavior", "abstract": "Accepting Editor: Eli Cohen \u2502 Received: December 2, 2017 \u2502 Revised: February 28, 2018 \u2502 Accepted: March 3, 2018. Cite as: Gafni, R. & Dvir, N. (2018). How content volume on landing pages influences consumer behavior: empirical evidence. Proceedings of the Informing Science and Information Technology Education Conference, La Verne, California, 35-53. Santa Rosa, CA: Informing Science Institute. https://doi.org/10.28945/4016", "corpus_id": 46930837}, "pos": {"sha": "64c1767a569571cc5071e40472a8b9ae04e3d860", "title": "E-commerce: the role of familiarity and trust", "abstract": "Familiarity is a precondition for trust, claims Luhmann [28: Luhmann N. Trust and power. Chichester, UK: Wiley, 1979 (translation from German)], and trust is a prerequisite of social behavior, especially regarding important decisions. This study examines this intriguing idea in the context of the E-commerce involved in inquiring about and purchasing books on the Internet. Survey data from 217 potential users support and extend this hypothesis. The data show that both familiarity with an Internet vendor and its processes and trust in the vendor influenced the respondents' intentions to inquire about books, and their intentions to purchase them. Additionally, the data show that while familiarity indeed builds trust, it is primarily people's disposition to trust that affected their trust in the vendor. Implications for research and practice are discussed.", "corpus_id": 15411698}, "neg": {"sha": "08062eef23eddac42fdef148d603f87d7cd20e17", "title": "Architecting a Software-Defined Storage Platform for Cloud Storage Service", "abstract": "The advent of cloud, big data, and mobile creates fast-growing demand for storage. Cloud service providers and data centers are looking for cost-effective storage solutions alternative to traditional high-cost embedded-system based storages to meet the need of newly emerging applications, such as messaging, video streaming, data analytics, etc. In particular, they are facing the challenge of lowering cost by accommodating multiple workloads on a single instance of storage without compromising workload performance requirements. Software-defined storage (SDS) is a new generation of storage system. Unlike the traditional embedded-system based storages, the SDS uses a software-stack above commodity hardware to provide more valuable and cost-effective features. To meet the challenge the cloud service providers and the data centers are facing, the architecture of a new SDS platform called Federator is proposed in this paper. This paper argues that the architecture of an SDS platform should have three main characteristics: 1. The separation of the control and data path, 2. Self-configuration of storage resources, and 3. Restful APIs for new business extension. A new approach for self-configurable SDS is designed within Federator. This approach includes two types of neural network, which provide optimal storage resource configuration for any type of application.
With the clear separation of the control and the data path, the intelligent self-configuration technologies, and the standard Restful API, Federator is expected to better meet the requirements of the new applications in ever-changing computing environments.", "corpus_id": 7022185}}, {"query": {"sha": "e0df1c7c407e856e03f30fb8506daeb986388cff", "title": "On the electrical characteristics of complementary metamaterial resonators", "abstract": "In this letter, a method to obtain the electrical characteristics of complementary split ring resonators (CSRRs) coupled to planar transmission lines is presented. CSRRs have been recently proposed by some of the authors as new constitutive elements for the synthesis of metamaterials with negative effective permittivity, and they have been applied to the fabrication of metamaterial-based circuits in planar technology. The method provides the electrical characteristics of CSRRs (including the intrinsic resonant frequency and the unloaded Q-factor), as well as the coupling capacitance between line and CSRRs, and the parameters of the host line. Parameter extraction from the proposed method is applied to two different structures corresponding to the basic cells of left handed (LH) and negative permittivity lines. The method is of practical interest for the design of microwave circuits and metamaterials based on these complementary resonant particles.", "corpus_id": 26282801}, "pos": {"sha": "9f8b4730db6aba5839566647e43d75d19e7bd3f2", "title": "Babinet principle applied to the design of metasurfaces and metamaterials.", "abstract": "The electromagnetic theory of diffraction and the Babinet principle are applied to the design of artificial metasurfaces and metamaterials. A new particle, the complementary split rings resonator, is proposed for the design of metasurfaces with high frequency selectivity and planar metamaterials with a negative dielectric permittivity. Applications in the fields of frequency selective surfaces and polarizers, as well as in microwave antennas and filter design, can be envisaged. The tunability of all these devices by an applied dc voltage is also achievable if these particles are etched on the appropriate substrate.", "corpus_id": 13159877}, "neg": {"sha": "93639bd1435e8c6ad0da1dbce79c9dc61930c833", "title": "A Uniplanar Compact Photonic-Bandgap (UC-PBG) Structure and Its Applications for Microwave Circuits", "abstract": "This paper presents a novel photonic bandgap (PBG) structure for microwave integrated circuits. This new PBG structure is a two-dimensional square lattice with each element consisting of a metal pad and four connecting branches. Experimental results of a microstrip on a substrate with the PBG ground plane display a broad stopband, as predicted by finite-difference time-domain simulations. Due to the slow-wave effect generated by this unique structure, the period of the PBG lattice is only 0.1\u03bb0 at the cutoff frequency, resulting in the most compact PBG lattice ever achieved. In the passband, the measured slow-wave factor (\u03b2/k0) is 1.2\u20132.4 times higher and insertion loss is at the same level compared to a conventional 50-\u03a9 line. This uniplanar compact PBG (UC-PBG) structure can be built using standard planar fabrication techniques without any modification. Several application examples have also been demonstrated, including a nonleaky conductor-backed coplanar waveguide and a compact spurious-free bandpass filter.
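The quantities a CSRR parameter-extraction procedure ultimately reports, the intrinsic resonant frequency and the unloaded Q, follow from the extracted equivalent-circuit values by standard resonator formulas. A hedged sketch with illustrative component values (not extracted from any measurement), assuming a parallel RLC tank model:

```python
# Hedged sketch: resonant frequency and unloaded Q of an equivalent parallel
# RLC tank, the lumped model commonly used for CSRR parameter extraction.
# Component values below are illustrative assumptions.
import math

L = 3.0e-9   # equivalent inductance (H)
C = 1.2e-12  # equivalent capacitance (F)
R = 8.0e3    # equivalent parallel loss resistance (ohm)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # intrinsic resonant frequency
Q = R * math.sqrt(C / L)                       # unloaded Q of a parallel RLC

print(f"f0 = {f0 / 1e9:.2f} GHz, Q = {Q:.1f}")
```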
This UC-PBG structure should find wide applications for high-performance and compact circuit components in microwave and millimeter-wave integrated circuits.", "corpus_id": 16476099}}, {"query": {"sha": "89a37349688b49bbfc9fd643db5a41b9071f9ca2", "title": "Multi-Class Support Vector Machines", "abstract": null, "corpus_id": 7359186}, "pos": {"sha": "7ec8029e5855b6efbac161488a2e68f83298091c", "title": "Extracting Support Data for a Given Task", "abstract": "We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small (\u2248 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited. Introduction: Learning can be viewed as inferring regularities from a set of training examples. Much research has been devoted to the study of various learning algorithms which allow the extraction of these underlying regularities. No matter how different the outward appearance of these algorithms is, they all must rely on intrinsic regularities of the data. If the learning has been successful, these intrinsic regularities will be captured in the values of some parameters of a learning machine; for a polynomial classifier, these parameters will be the coefficients of a polynomial, for a neural net they will be the weights and biases, and for a radial basis function classifier they will be weights and centers. This variety of different representations of the intrinsic regularities, however, conceals the fact that they all stem from a common root: the training data. In the present study, we explore the Support Vector Algorithm, an algorithm which gives rise to a number of different types of pattern classifiers. We show that the algorithm allows us to construct different classifiers (polynomial classifiers, radial basis function classifiers, and neural networks) exhibiting similar performance and relying on almost identical subsets of the training set, their support vector sets. In this sense, the support vector set is a stable characteristic of the data. In the case where the available training data is limited, it is important to have a means for achieving the best possible generalization by controlling characteristics of the learning machine. We use a bound of statistical learning theory (Vapnik, 1995) to predict the degree which yields the best generalization for polynomial classifiers. In the next Section, we follow Vapnik (1995), Boser, Guyon & Vapnik (1992), and Cortes & Vapnik (1995) in briefly recapitulating this algorithm and the idea of Structural Risk Minimization that it is based on.
Following that, we will present experimental results obtained with support vector machines. The Support Vector Machine. Structural Risk Minimization. For the case of two-class pattern recognition, the task of learning from examples can be formulated in the following way: given a set of functions $\{f_\alpha : \alpha \in \Lambda\}$, $f_\alpha : \mathbb{R}^N \to \{-1,+1\}$ (the index set $\Lambda$ not necessarily being a subset of $\mathbb{R}^n$), and a set of examples $(x_1, y_1), \ldots, (x_\ell, y_\ell)$, $x_i \in \mathbb{R}^N$, $y_i \in \{-1,+1\}$, each one generated from an unknown probability distribution $P(x, y)$, we want to find a function $f_{\alpha_0}$ which provides the smallest possible value for the risk $R(\alpha) = \int |f_\alpha(x) - y| \, dP(x, y)$. The problem is that $R(\alpha)$ is unknown, since $P(x, y)$ is unknown. Therefore an induction principle for risk minimization is necessary.", "corpus_id": 6636078}, "neg": {"sha": "81e0f458a894322baf170fa4d6fa8099bd055c39", "title": "Statistical Decision Theory and Bayesian Analysis, 2nd Edition", "abstract": null, "corpus_id": 198169059}}, {"query": {"sha": "093f81431a5bd5f32a49203603d123d5fe30d306", "title": "An Overview of the Development of Safety-critical Software", "abstract": "Safety-critical systems are an important part of our daily life. We depend on them in many situations and if a safety-critical system fails it can result in tragic events. It is not acceptable under any circumstances that a safety-critical system malfunctions with the result that human lives are lost, and for this reason the development of such systems is a sensitive process. This article is an overview of different strategies for the development process, and parts that are important to design and build a reliable safety-critical system are identified, with the focus being on the software development. In this survey paper we have collected much information taken primarily from current research, but other sources are also represented. We start by defining what a safety-critical system is and why special care should be taken during development. We then describe our findings about strategies and technologies for guaranteeing safety in these kinds of systems, primarily from a development point of view, but ways of controlling running systems online are also described. We end the survey by talking about the future of the research and development of the field.", "corpus_id": 40512885}, "pos": {"sha": "34f9b101578503c86819292b148181e236c0033b", "title": "Design patterns for safety-critical embedded systems", "abstract": "Over the last few years, embedded systems have been increasingly used in safety-critical applications where failure can have serious consequences. The design of these systems is a complex process, which requires the integration of common design methods both in hardware and software to fulfill functional and non-functional requirements for these safety-critical applications. Design patterns, which give abstract solutions to commonly recurring design problems, have been widely used in the software and hardware domain. In this thesis, the concept of design patterns is adopted in the design of safety-critical embedded systems. A catalog of design patterns was constructed to support the design of safety-critical embedded systems. This catalog includes a set of hardware and software design patterns which cover common design problems such as handling of random and systematic faults, safety monitoring, and sequence control.
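The empirical claim in the support-vector abstract above, that a small subset of the training set suffices to reconstruct the decision surface, is easy to check with any off-the-shelf SVM. A hedged sketch on toy 2-D data; the kernel and parameters are illustrative:

```python
# Hedged sketch: extracting the support-vector subset discussed above.
# Toy 2-D data; kernel and parameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, size=(100, 2)),
               rng.normal(+1, 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)

# Only the support vectors are needed to reproduce the decision surface.
fraction = len(clf.support_) / len(X)
print(f"{len(clf.support_)} support vectors ({fraction:.0%} of the data)")
```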
Furthermore, the catalog provides a decision support component that guides the choice of a suitable pattern for a particular problem based on the available resources and the requirements of the applicable patterns. As non-functional requirements are an important aspect in the design of safety-critical embedded systems, this work focuses on integrating implications for non-functional properties into the existing design pattern concept. A pattern representation is proposed for safety-critical embedded application design methods by including fields for the implications and side effects of the represented design pattern on the non-functional requirements of the systems. The considered requirements include safety, reliability, modifiability, cost, and execution time. Safety and reliability represent the main non-functional requirements that should be provided in the design of safety-critical applications. Thus, reliability and safety assessment methods are proposed to show the relative safety and reliability improvement which can be achieved when using the design patterns under consideration. Moreover, a Monte Carlo based simulation method is used to illustrate the proposed assessment method, which allows comparing different design patterns with respect to their impact on safety and reliability.", "corpus_id": 5728233}, "neg": {"sha": "8213dbed4db44e113af3ed17d6dad57471a0c048", "title": "The Nature of Statistical Learning Theory", "abstract": null, "corpus_id": 7138354}}, {"query": {"sha": "4e9f83aab2b1eab3183530b0597ca2f7b18406df", "title": "Automatic end-to-end De-identification: Is high accuracy the only metric?", "abstract": "De-identification of electronic health records (EHR) is a vital step towards advancing health informatics research and maximising the use of available data. It is a two-step process where step one is the identification of protected health information (PHI), and step two is replacing such PHI with surrogates. Despite the recent advances in automatic de-identification of EHR, significant obstacles remain if the abundant health data available are to be used to their full potential. Accuracy in de-identification could be considered a necessary, but not sufficient, condition for the use of EHR without individual patient consent. We present here a comprehensive review of the progress to date, both the impressive successes in achieving high accuracy and the significant risks and challenges that remain. To the best of our knowledge, this is the first paper to present a complete picture of end-to-end automatic de-identification. We review 18 recently published automatic de-identification systems - designed to de-identify EHR in the form of free text - to show the advancements made in improving the overall accuracy of the system, and in identifying individual PHI. We argue that despite the improvements in accuracy there remain challenges in surrogate generation and replacements of identified PHIs, and the risks posed to patient protection and privacy.", "corpus_id": 59413754}, "pos": {"sha": "a70e02b6e42b908cdbc53bc6cecb532cf72d4d4a", "title": "MIMIC-III, a freely accessible critical care database", "abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. 
Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework.", "corpus_id": 33285731}, "neg": {"sha": "0891ed6ed64fb461bc03557b28c686f87d880c9a", "title": "Neural Architectures for Named Entity Recognition", "abstract": "State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures\u2014one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.", "corpus_id": 6042994}}, {"query": {"sha": "3d213687dd0f4966381b40d7b18b5147ecaf3523", "title": "A Process-Based Knowledge Management System for Schools: A Case Study in Taiwan", "abstract": "Knowledge management systems, or KMSs, have been widely adopted in business organizations, yet little research exists on the actual integration of the knowledge management model and the application of KMSs in secondary schools. In the present study, the common difficulties and limitations regarding the implementation of knowledge management into schools\u2019 organizational cultures are reviewed and discussed. Furthermore, relevant theories of knowledge management models are summarized, and a model of process-based knowledge management appropriate for schools is proposed. Based on the proposed model, this study applied a low-cost, open-source software development framework to establish a process-based knowledge management system for schools, or PKMSS. We conducted a 30-day empirical observation and survey at a secondary school in Taiwan. This case study used methods including a satisfaction survey, qualitative content analysis of knowledge discussion, and unstructured interviews to explore the progress, performance, and limitations of PKMSS implementation. It was determined that PKMSS has some value in promoting schools\u2019 knowledge management. It not only facilitates the externalization and combination of knowledge and effectively keeps the objectives of knowledge sharing in focus, but it also promotes inter-member interactions. However, this study also found certain restrictions in terms of the classification of knowledge content and system functions. Based on the above findings, we propose relevant suggestions as references for the evaluation and introduction of a KMS in educational organizations.", "corpus_id": 533785}, "pos": {"sha": "0a9b3dc3f46ca655869db789a0d1823d3141fb39", "title": "Knowledge sharing behavior of physicians in hospitals", "abstract": "Sharing physicians' knowledge within hospitals can realize enormous potential gains and is critical to succeeding and surviving in competitive environments. 
There is a need for empirical research to identify the factors that determine physicians' knowledge sharing behavior. This study investigates the factors that determine a physician's individual knowledge sharing behavior in his/her department. The purpose of this study is to examine empirically the physicians' knowledge sharing behavior. The research models under investigation are the Theory of Reasoned Action (TRA) model and the Theory of Planned Behavior (TPB) model. These models are empirically examined and compared, using survey results on physicians' knowledge sharing behavior collected from 286 physicians practicing in 28 departments in 13 tertiary hospitals in Korea. The TPB model exhibited a good fit with the data and appeared to be superior to TRA in explaining physicians' intentions to share knowledge. The amended TPB model provided an important improvement in fit over the original TPB model. In the amended TPB model, subjective norms were found to have the strongest total effects on physicians' behavioral intentions to share knowledge, through both a direct path and an indirect path via attitude. Attitude was found to be the second most important factor influencing physicians' intentions. Perceived behavioral control was also found to have an effect on the intentions to share knowledge, though its effect was weaker than that of subjective norms or attitude. The implications for physicians' knowledge sharing activities are discussed.", "corpus_id": 18299689}, "neg": {"sha": "8383250e32900a8ce2aa0ca534a5e84a275a5af4", "title": "Unsupervised Learning of Hierarchical Models for Hand-Object Interactions", "abstract": "Contact forces of the hand are visually unobservable, but play a crucial role in understanding hand-object interactions. In this paper, we propose an unsupervised learning approach for manipulation event segmentation and manipulation event parsing. The proposed framework incorporates hand pose kinematics and contact forces using a low-cost, easy-to-replicate tactile glove. We use a temporal grammar model to capture the hierarchical structure of events, integrating extracted force vectors from the raw sensory input of poses and forces. The temporal grammar is represented as a temporal And-Or graph (T-AOG), which can be induced in an unsupervised manner. We obtain the event labeling sequences by measuring the similarity between segments using the Dynamic Time Alignment Kernel (DTAK). Experimental results show that our method achieves high accuracy in manipulation event segmentation, recognition and parsing by utilizing both pose and force data.", "corpus_id": 39216246}}, {"query": {"sha": "3cab3d4a4ca4c8a30474850dc1298168cf8580ff", "title": "A Deep Learning Based DDoS Detection System in Software-Defined Networking (SDN)", "abstract": "Distributed Denial of Service (DDoS) is one of the most prevalent attacks that an organizational network infrastructure comes across nowadays. Their rise can be attributed to poor network management, low-priced Internet subscriptions, and readily available attack tools. The recently emerged software-defined networking (SDN) and deep learning (DL) concepts promise to revolutionize their respective domains. SDN keeps a global view of the entire managed network from a single point, i.e., the controller, thus making network management easier. DL-based approaches improve feature extraction/reduction from a high-dimensional dataset such as network traffic headers. This work proposes a deep learning based multi-vector DDoS detection system in an SDN environment. 
The detection system is implemented as a network application on top of the SDN controller and can monitor the managed network traffic. Performance evaluation is based on different metrics, applying the system to traffic traces collected from different scenarios. High accuracy with a low false-positive rate is observed in attack detection for the proposed system.", "corpus_id": 5572966}, "pos": {"sha": "0c6c1d841d9bc3e921f2823e7953273cc44cfb2b", "title": "Combining ensemble methods and social network metrics for improving accuracy of OCSVM on intrusion detection in SCADA systems", "abstract": "Modern Supervisory Control and Data Acquisition (SCADA) systems used by the electric utility industry to monitor and control electric power generation, transmission and distribution are recognized today as critical components of the electric power delivery infrastructure. SCADA systems are large, complex and incorporate increasing numbers of widely distributed components. The presence of a real-time intrusion detection mechanism, which can cope with different types of attacks, is of great importance in order to defend a system against cyber attacks. This defense mechanism must be distributed, cheap and, above all, accurate, since false positive alarms or mistakes regarding the origin of the intrusion mean severe costs for the system. Recently an integrated detection mechanism, namely IT-OCSVM, was proposed, which is distributed in a SCADA network as a part of a distributed intrusion detection system (IDS), providing accurate data about the origin and the time of an intrusion. In this paper we also analyze the architecture of the integrated detection mechanism and we perform extensive simulations based on real cyber attacks in a small SCADA testbed in order to evaluate the performance of the proposed mechanism.", "corpus_id": 11926819}, "neg": {"sha": "3dde3fec553b8d24a85d7059a3cc629ab33f7578", "title": "OpenFlow: enabling innovation in campus networks", "abstract": "This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. 
We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.", "corpus_id": 1153326}}, {"query": {"sha": "0943d9776ae0bbae7409a96c6253b5919838c382", "title": "Heterogeneous Cellular Network With Energy Harvesting-Based D2D Communication", "abstract": "The concept of mobile user equipment (UE) relay (UER) has been introduced to support device-to-device (D2D) communications for enhancing communication reliability. However, as the UER needs to use its own power for another UE's data transmission, relaying information in D2D communication may be undesirable for the UER. To overcome this issue, motivated by the recent advances in energy harvesting (EH) techniques, we propose a D2D communication provided EH heterogeneous cellular network (D2D-EHHN), where UERs harvest energy from an access point (AP) and use the harvested energy for D2D communication. We develop a framework for the design and analysis of D2D-EHHN by introducing the EH region (EHR) and modeling the status of harvested energy using a Markov chain. The UER distribution is derived, and a transmission mode selection scheme including an efficient UER selection method is proposed. The network outage probability is derived in closed form to measure the performance of D2D-EHHN. Based on our analysis results, we explore the effects of network parameters on the outage probability and the optimal offloading bias in terms of the outage probability. In particular, we show that having a high EH efficiency enhances the performance of D2D-EHHN, but can also degrade it, especially for dense networks.", "corpus_id": 11871752}, "pos": {"sha": "0bfc3626485953e2d3f87854a00a50f88c62269d", "title": "A Tractable Approach to Coverage and Rate in Cellular Networks", "abstract": "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.", "corpus_id": 1434542}, "neg": {"sha": "63d984f99622a2831d8f15e9a9552bd585ba8e25", "title": "On the achievable throughput of a multiantenna Gaussian broadcast channel", "abstract": "A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. 
Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user's signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximized and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j