Columns: _id (string, length 40); title (string, length 8–300); text (string, length 0–10k).
0b099066706cb997feb7542d4bf502c6be38e755
Model-Driven Design for the Visual Analysis of Heterogeneous Data
As heterogeneous data from different sources are increasingly being linked, it becomes difficult for users to understand how the data are connected, to identify what means are suitable to analyze a given data set, or to find out how to proceed for a given analysis task. We target this challenge with a new model-driven design process that effectively co-designs aspects of data, view, analytics, and tasks. We achieve this by using the workflow of the analysis task as a trajectory through data, interactive views, and analytical processes. The benefits for the analysis session go well beyond the mere selection of appropriate data sets and range from providing orientation, or even guidance, along a preferred analysis path to a potential overall speedup, allowing data to be fetched ahead of time. We illustrate the design process for a biomedical use case that aims at determining a treatment plan for cancer patients from the visual analysis of a large, heterogeneous clinical data pool. As an example of how to apply the comprehensive design approach, we present Stack'n'flip, a sample implementation that tightly integrates visualizations of the actual data with a map of available data sets, views, and tasks, thus capturing and communicating the analytical workflow through the required data sets.
06bd4d2d21624c7713d7f10ccb7df61bf6b9ee71
Cache-oblivious streaming B-trees
A streaming B-tree is a dictionary that efficiently implements insertions and range queries. We present two cache-oblivious streaming B-trees, the shuttle tree and the cache-oblivious lookahead array (COLA). For block-transfer size B and on N elements, the shuttle tree implements searches in optimal O(log_{B+1} N) transfers, range queries of L successive elements in optimal O(log_{B+1} N + L/B) transfers, and insertions in O((log_{B+1} N)/B^{Θ(1/(log log B)^2)} + (log^2 N)/B) transfers, which is an asymptotic speedup over traditional B-trees if B ≥ (log N)^{1+c log log log^2 N} for any constant c > 1. A COLA implements searches in O(log N) transfers, range queries in O(log N + L/B) transfers, and insertions in amortized O((log N)/B) transfers, matching the bounds for a (cache-aware) buffered repository tree. A partially deamortized COLA matches these bounds but reduces the worst-case insertion cost to O(log N) if the memory size M = Ω(log N). We also present a cache-aware version of the COLA, the lookahead array, which achieves the same bounds as Brodal and Fagerberg's (cache-aware) B^ε-tree. We compare our COLA implementation to a traditional B-tree. Our COLA implementation runs 790 times faster for random insertions, 3.1 times slower for insertions of sorted data, and 3.5 times slower for searches.
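The lookahead array's insertion scheme can be pictured as binary-counter carries over sorted levels. Below is a minimal Python sketch of that basic structure under simplifying assumptions (no deamortization, no lookahead pointers); it illustrates the amortized idea only and is not the paper's implementation, so searches here cost O(log^2 N) comparisons rather than the paper's O(log N) transfers.

```python
# Minimal sketch of the basic lookahead-array (COLA) idea: level k is either
# empty or a sorted list of exactly 2**k elements. Inserting merges full
# levels upward, like a carry in binary counting.
from bisect import bisect_left
from heapq import merge

class COLA:
    def __init__(self):
        self.levels = []  # levels[k] is [] or a sorted list of 2**k items

    def insert(self, x):
        carry = [x]
        k = 0
        # Merge the carry into successive levels until an empty one is found.
        while True:
            if k == len(self.levels):
                self.levels.append([])
            if not self.levels[k]:
                self.levels[k] = carry
                return
            carry = list(merge(self.levels[k], carry))
            self.levels[k] = []
            k += 1

    def search(self, x):
        # Without the paper's lookahead pointers, each level needs its own
        # binary search, so this sketch does O(log^2 N) comparisons.
        for level in self.levels:
            i = bisect_left(level, x)
            if i < len(level) and level[i] == x:
                return True
        return False

c = COLA()
for v in [5, 3, 8, 1, 9, 2]:
    c.insert(v)
assert c.search(8) and not c.search(7)
```

A level of size 2^k is rebuilt only once every 2^k insertions, and rebuilding is a sequential merge; that is the intuition behind the amortized O((log N)/B) transfer bound.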
d09bdfbf43bf409bc3bce436ba7a5374456b3c74
Dynamic Behaviour of an Electronically Commutated (Brushless DC) Motor Drive with Back-emf Sensing
Conventionally, BLDC motors are commutated in a six-step pattern, with commutation controlled by position sensors. To reduce the cost and complexity of the drive system, a sensorless drive is preferred. The existing sensorless control scheme with conventional back-EMF sensing based on the motor neutral voltage has certain drawbacks, which limit its applications. This paper presents the dynamic behaviour of an analytical and circuit model of a brushless DC (BLDC) motor with back-EMF sensing. The circuit model was simulated using LTspice, and the results obtained were compared with experimental results. The motor constant and the back EMF measured in the experiment agreed with the simulated model. The starting behaviour of the motor, the change of load torque as currents are varied, and the disturbance of the sensing method at peak load show that the dynamic behaviour observed experimentally on the oscilloscope is similar to the simulated values.
415b85c2f3650ac233399a6f147763055475126d
Quasi-Cyclic LDPC Codes: Influence of Proto- and Tanner-Graph Structure on Minimum Hamming Distance Upper Bounds
Quasi-cyclic (QC) low-density parity-check (LDPC) codes are an important instance of proto-graph-based LDPC codes. In this paper we present upper bounds on the minimum Hamming distance of QC LDPC codes and study how these upper bounds depend on graph structure parameters (like variable degrees, check node degrees, girth) of the Tanner graph and of the underlying proto-graph. Moreover, for several classes of proto-graphs we present explicit QC LDPC code constructions that achieve (or come close to) the respective minimum Hamming distance upper bounds. Because of the tight algebraic connection between QC codes and convolutional codes, we can state similar results for the free Hamming distance of convolutional codes. In fact, some QC code statements are established by first proving the corresponding convolutional code statements and then using a result by Tanner that says that the minimum Hamming distance of a QC code is upper bounded by the free Hamming distance of the convolutional code that is obtained by “unwrapping” the QC code.
0af8c168f4423535773afea201c05a9e63ee9515
Piranha: a scalable architecture based on single-chip multiprocessing
The microprocessor industry is currently struggling with higher development costs and longer design times that arise from exceedingly complex processors that are pushing the limits of instruction-level parallelism. Meanwhile, such designs are especially ill suited for important commercial applications, such as on-line transaction processing (OLTP), which suffer from large memory stall times and exhibit little instruction-level parallelism. Given that commercial applications constitute by far the most important market for high-performance servers, the above trends emphasize the need to consider alternative processor designs that specifically target such workloads. The abundance of explicit thread-level parallelism in commercial workloads, along with advances in semiconductor integration density, identify chip multiprocessing (CMP) as potentially the most promising approach for designing processors targeted at commercial servers. This paper describes the Piranha system, a research prototype being developed at Compaq that aggressively exploits chip multiprocessing by integrating eight simple Alpha processor cores along with a two-level cache hierarchy onto a single chip. Piranha also integrates further on-chip functionality to allow for scalable multiprocessor configurations to be built in a glueless and modular fashion. The use of simple processor cores combined with an industry-standard ASIC design methodology allows us to complete our prototype within a short time-frame, with a team size and investment that are an order of magnitude smaller than that of a commercial microprocessor. Our detailed simulation results show that while each Piranha processor core is substantially slower than an aggressive next-generation processor, the integration of eight cores onto a single chip allows Piranha to outperform next-generation processors by up to 2.9 times (on a per-chip basis) on important workloads such as OLTP. This performance advantage can approach a factor of five by using full-custom instead of ASIC logic. In addition to exploiting chip multiprocessing, the Piranha prototype incorporates several other unique design choices, including a shared second-level cache with no inclusion, a highly optimized cache coherence protocol, and a novel I/O architecture.
20948c07477fe449dc3da2f06b8a68b3e76e2b08
Short-Circuit Detection for Electrolytic Processes Employing Optibar Intercell Bars
This paper presents a method to detect metallurgical short circuits suitable for Optibar intercell bars in copper electrowinning and electrorefining processes. One of the primary achievements of this bar is to limit short-circuit currents to a maximum of 1.5 p.u. of the actual process current. However, low-current short circuits are more difficult to detect. Thus, conventional short-circuit detection instruments like gaussmeters and infrared cameras become ineffective. To overcome this problem, the proposed method is based on detecting the voltage drop across anode-cathode pairs. The method does not affect the operation of the process and does not require modifications of the industrial plant. In order to verify the performance of this proposal, experimental measurements done over a period of four months at a copper refinery are presented. A 100% success rate was obtained.
6afe915d585ee9471c39efc7de245ec9db4072cb
Rating Image Aesthetics Using Deep Learning
This paper investigates unified feature learning and classifier training approaches for image aesthetics assessment. Existing methods are built upon handcrafted or generic image features and use machine learning and statistical modeling techniques on training examples. We adopt a novel deep neural network approach that allows unified feature learning and classifier training for estimating image aesthetics. In particular, we develop a double-column deep convolutional neural network to support heterogeneous inputs, i.e., global and local views, in order to capture both global and local characteristics of images. In addition, we employ the style and semantic attributes of images to further boost the aesthetics categorization performance. Experimental results show that our approach produces significantly better results than earlier reported results on the AVA dataset for both generic image aesthetics and content-based image aesthetics. Moreover, we introduce a 1.5-million-image dataset (IAD) for image aesthetics assessment, and we further boost the performance on the AVA test set by training the proposed deep neural networks on the IAD dataset.
99b4ec66e2c732e4127e13b0ff2d90c80e31be7d
Vehicles Capable of Dynamic Vision
A survey is given of two decades of developments in the field, encompassing an increase in computing power by four orders of magnitude. The '4-D approach', integrating expectation-based methods from systems dynamics and control engineering with methods from AI, has made it possible to create vehicles with unprecedented capabilities in the technical realm: autonomous road-vehicle guidance in public traffic on freeways at speeds beyond 130 km/h, on-board-autonomous landing approaches of aircraft, and landmark navigation for AGVs, for road vehicles including turn-offs onto cross-roads, and for helicopters in low-level flight (real-time, hardware-in-the-loop simulations in the latter case).
3ecc4821d55c0e528690777be3588fc9cf023882
DeLS-3D: Deep Localization and Segmentation with a 3D Semantic Map
For applications such as augmented reality and autonomous driving, self-localization/camera pose estimation and scene parsing are crucial technologies. In this paper, we propose a unified framework to tackle these two problems simultaneously. The uniqueness of our design is a sensor fusion scheme which integrates camera videos, motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robustness and efficiency of the system. Specifically, we first obtain an initial coarse camera pose from consumer-grade GPS/IMU, based on which a label map can be rendered from the 3D semantic map. Then, the rendered label map and the RGB image are jointly fed into a pose CNN, yielding a corrected camera pose. In addition, to incorporate temporal information, a multi-layer recurrent neural network (RNN) is further deployed to improve the pose accuracy. Finally, based on the pose from the RNN, we render a new label map, which is fed together with the RGB image into a segment CNN that produces per-pixel semantic labels. In order to validate our approach, we build a dataset with registered 3D point clouds and video camera images. Both the point clouds and the images are semantically labeled. Each video frame has a ground truth pose from highly accurate motion sensors. We show that, in practice, pose estimation relying solely on images, as in PoseNet [25], may fail due to street view confusion, and it is important to fuse multiple sensors. Finally, various ablation studies are performed, which demonstrate the effectiveness of the proposed system. In particular, we show that scene parsing and pose estimation are mutually beneficial in achieving a more robust and accurate system.
440099b3dfff6d553b237e14985ee558b39d57dd
The learning curve in microtia surgery.
Reconstruction of the auricle is known to be complex. Our objective was to evaluate the improvement in the outcome of lobulus-type microtia reconstruction. Patient satisfaction was also evaluated. There are no previous reports of the learning process in this field. Postoperative photographs of 51 microtia reconstructions were assessed and rated by a panel of six surgeons. The ratings were gathered to generate learning curves. Twenty-two patients assessed the appearance of their reconstructed ears, and the results were analyzed as a self-assessment group. The reliability of the panel rating was tested by intraclass correlations. There is a highly significant increasing trend in learning (P = 0.000001). This trend is not constantly upward, and a steady state was not reached during the study. In the self-assessment group, females were significantly more critical than males (P = 0.014). The intraclass correlation for the six panel members was 0.90, and the rating was considered reliable. Thus, a long and gentle learning curve does exist in microtia reconstruction. To secure good quality and continuity, centralization of the operations and trainee arrangements are highly advisable. Outcomes of plastic surgery can be reliably rated by an evaluation panel.
f96bdd1e2a940030fb0a89abbe6c69b8d7f6f0c1
Comparison of human and computer performance across face recognition experiments
Since 2005, human and computer performance has been systematically compared as part of face recognition competitions, with results reported for both still and video imagery. The key results from these competitions are reviewed. To analyze performance across studies, the cross-modal performance analysis (CMPA) framework is introduced. The CMPA framework is applied to experiments that were part of a face recognition competition. The analysis shows that for matching frontal faces in still images, algorithms are consistently superior to humans. For video and difficult still face pairs, humans are superior. Finally, based on the CMPA framework and a face performance index, we outline a challenge problem for developing algorithms that are superior to humans for the general face recognition problem.
6a74e6b9bbbb093ebe928bc5c233953d74813392
Facilitating relational governance through service level agreements in IT outsourcing: An application of the commitment-trust theory
Firms increasingly rely on outsourcing for strategic IT decisions, and the many sophisticated forms of outsourcing require significant management attention to ensure their success. Two forms of interorganizational governance, formal control and relational governance, have been used to examine the management of IT outsourcing relationships. Contrary to the conventional substitution view, recent studies have found that these two governance modes are complementary; however, the dynamics of their interactions remain unexplored. Based on the commitment-trust theory, this paper focuses on how the formal control mechanism can influence relational governance in an outsourcing engagement. Using service level agreements (SLAs) as a proxy for formal control, this study finds that eleven contractual elements, characterized as foundation, governance, and change management variables in an SLA, are positively related to the trust and relationship commitment among the parties. Trust and commitment, in turn, positively influence relational outcomes that we theorize would contribute to outsourcing success. Both research and practical implications of the results are discussed.
633614f969b869388508c636a322eba35fe1f280
Plan 9, A Distributed System
Plan 9 is a computing environment physically distributed across many machines. The distribution itself is transparent to most programs, giving both users and administrators wide latitude in configuring the topology of the environment. Two properties make this possible: a per-process-group name space and uniform access to all resources by representing them as files.
fe400b814cfea5538887c92040f1ab0d6fb45bfe
Measuring the Diversity of Automatic Image Descriptions
Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions.
26e9de9f9675bdd6550e72ed0eb2c25327bf3e19
Delaunay Meshing of Isosurfaces
We present an isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm. This paradigm has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape. A restricted Delaunay triangulation, dual of the intersection between the surface and the three-dimensional Voronoi diagram, is often the main ingredient in Delaunay refinement. Computing and storing three-dimensional Voronoi/Delaunay diagrams become bottlenecks for Delaunay refinement techniques since isosurface computations generally have large input datasets and output meshes. A highlight of our algorithm is that we find a simple way to recover the restricted Delaunay triangulation of the surface without computing the full 3D structure. We employ techniques for efficient ray tracing of isosurfaces to generate surface sample points, and demonstrate the effectiveness of our implementation using a variety of volume datasets.
c498b2dd59e7e097742f7cdcaed91e1228ec6224
The Many Faces of Formative Assessment.
In this research paper we consider formative assessment (FA) and discuss ways in which it has been implemented in four different university courses. We illustrate the different aspects of FA by deconstructing it and then demonstrating its effectiveness in improving both teaching and student achievement. It appears that precisely "what is done" mattered less, since there were positive achievement gains in each study. While positive gains were realized with the use of technology, gains were also realized with the implementation of techniques that do not depend on technology. Further, gains were independent of class size or subject matter.
ef5f6d6b3a5d3436f1802120f71e765a1ec72c2f
A review of issues and challenges in designing iris recognition systems for noisy imaging environments
Iris recognition is a challenging task in a noisy imaging environment. Researchers' primary focus nowadays is to develop reliable iris recognition systems that can work in noisy imaging environments and to increase the iris recognition rate on different iris databases. However, there are major issues involved in designing such systems, such as occlusion by eyelashes, eyelids, and glass frames, off-angle imaging, the presence of contact lenses, poor illumination, motion blur, close-up iris images acquired at a large standoff distance, and specular reflections. Because of these issues, the quality of the acquired iris image is degraded, and the performance of an iris-based recognition system deteriorates abruptly when the iris mask is not accurate, resulting in a lower recognition rate. In this review paper, the different challenges in designing iris recognition systems for noisy imaging environments are reviewed, and the methodologies involved in overcoming these issues are discussed. Finally, some measures to improve the accuracy of such systems are suggested.
2a68c39e3586f87da501bc2a5ae6138469f50613
Mining Multi-label Data
A large body of research in supervised learning deals with the analysis of single-label data, where training examples are associated with a single label λ from a set of disjoint labels L. However, training examples in several application domains are often associated with a set of labels Y ⊆ L. Such data are called multi-label. Textual data, such as documents and web pages, are frequently annotated with more than a single label. For example, a news article concerning the reactions of the Christian church to the release of the "Da Vinci Code" film can be labeled as both religion and movies. The categorization of textual data is perhaps the dominant multi-label application. Recently, the issue of learning from multi-label data has attracted significant attention from many researchers, motivated by an increasing number of new applications, such as semantic annotation of images [1, 2, 3] and video [4, 5], functional genomics [6, 7, 8, 9, 10], music categorization into emotions [11, 12, 13, 14] and directed marketing [15]. Table 1 presents a variety of applications that are discussed in the literature. This chapter reviews past and recent work on the rapidly evolving research area of multi-label data mining. Section 2 defines the two major tasks in learning from multi-label data and presents a significant number of learning methods. Section 3 discusses dimensionality reduction methods for multi-label data. Sections 4 and 5 discuss two important research challenges, which, if successfully met, can significantly expand the real-world applications of multi-label learning methods: a) exploiting label structure and b) scaling up to domains with a large number of labels. Section 6 introduces benchmark multi-label datasets and their statistics, while Section 7 presents the most frequently used evaluation measures for multi-label learning.
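To make the task definition concrete, here is a minimal sketch of binary relevance, the simplest problem-transformation approach in this area: one independent binary classifier per label λ ∈ L. The toy data and the scikit-learn usage are illustrative assumptions, not taken from the chapter.

```python
# Illustrative binary-relevance sketch for multi-label data: train one
# independent binary classifier per label. Data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Toy data: 6 documents, 4 features, label set L = {religion, movies, sports}.
X = np.array([[1, 0, 2, 0], [0, 1, 0, 3], [2, 1, 0, 0],
              [0, 0, 1, 2], [1, 2, 0, 1], [0, 1, 2, 0]])
# Y[i] is the 0/1 indicator vector of example i's label set (Y_i ⊆ L).
Y = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 0],
              [0, 1, 1], [0, 0, 1], [1, 0, 1]])

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict(X[:2]))  # predicted label sets as indicator rows
```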
1d9dece252de9457f504c8e79efe50fda73a2199
Prediction of central nervous system embryonal tumour outcome based on gene expression
Embryonal tumours of the central nervous system (CNS) represent a heterogeneous group of tumours about which little is known biologically, and whose diagnosis, on the basis of morphologic appearance alone, is controversial. Medulloblastomas, for example, are the most common malignant brain tumour of childhood, but their pathogenesis is unknown, their relationship to other embryonal CNS tumours is debated, and patients’ response to therapy is difficult to predict. We approached these problems by developing a classification system based on DNA microarray gene expression data derived from 99 patient samples. Here we demonstrate that medulloblastomas are molecularly distinct from other brain tumours including primitive neuroectodermal tumours (PNETs), atypical teratoid/rhabdoid tumours (AT/RTs) and malignant gliomas. Previously unrecognized evidence supporting the derivation of medulloblastomas from cerebellar granule cells through activation of the Sonic Hedgehog (SHH) pathway was also revealed. We show further that the clinical outcome of children with medulloblastomas is highly predictable on the basis of the gene expression profiles of their tumours at diagnosis.
4dc881bf2fb04ffe71bed7a9e0612cb93a9baccf
Problems in dealing with missing data and informative censoring in clinical trials
A common problem in clinical trials is missing data, which occurs when patients do not complete the study and drop out without further measurements. Missing data cause the usual statistical analysis of complete or all-available data to be subject to bias. There are no universally applicable methods for handling missing data. We recommend the following: (1) report reasons for dropouts and proportions for each treatment group; (2) conduct sensitivity analyses to encompass different scenarios of assumptions and discuss consistency or discrepancy among them; (3) pay attention to minimizing the chance of dropouts at the design stage and during trial monitoring; (4) collect post-dropout data on the primary endpoints, if at all possible; and (5) consider the dropout event itself an important endpoint in studies with many dropouts.
5b62860a9eb3492c5c2d7fb42fd023cae891df45
Missing value estimation methods for DNA microarrays
MOTIVATION: Gene expression microarray experiments can generate data sets with multiple missing expression values. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene array values as input. For example, methods such as hierarchical clustering and K-means clustering are not robust to missing data, and may lose effectiveness even with a few missing values. Methods for imputing missing data are needed, therefore, to minimize the effect of incomplete data sets on analyses, and to increase the range of data sets to which these algorithms can be applied. In this report, we investigate automated methods for estimating missing data. RESULTS: We present a comparative study of several methods for the estimation of missing values in gene microarray data. We implemented and evaluated three methods: a Singular Value Decomposition (SVD) based method (SVDimpute), weighted K-nearest neighbors (KNNimpute), and row average. We evaluated the methods using a variety of parameter settings and over different real data sets, and assessed the robustness of the imputation methods to the amount of missing data over the range of 1–20% missing values. We show that KNNimpute appears to provide a more robust and sensitive method for missing value estimation than SVDimpute, and both SVDimpute and KNNimpute surpass the commonly used row average method (as well as filling missing values with zeros). We report results of the comparative experiments and provide recommendations and tools for accurate estimation of missing microarray data under a variety of conditions.
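A minimal numpy sketch of the KNNimpute idea as the abstract describes it: fill a missing entry with the inverse-distance-weighted average over the K genes most similar on the co-observed arrays. Parameter choices, the candidate-filtering rule, and tie handling below are simplifying assumptions.

```python
# Sketch of KNN-based imputation in the spirit of KNNimpute.
import numpy as np

def knn_impute(X, k=10, eps=1e-6):
    """Fill NaNs in each row from the k most similar fully-usable rows."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for g in range(X.shape[0]):
        miss = np.isnan(X[g])
        if not miss.any() or miss.all():
            continue
        obs = ~miss
        candidates = []
        for h in range(X.shape[0]):
            row = X[h]
            if h == g or np.isnan(row[obs]).any() or np.isnan(row[miss]).any():
                continue  # a neighbor must be observed wherever we need it
            d = np.sqrt(((X[g][obs] - row[obs]) ** 2).mean())
            candidates.append((d, h))
        candidates.sort()
        if not candidates:
            continue
        d, idx = zip(*candidates[:k])
        w = 1.0 / (np.array(d) + eps)      # inverse-distance weights
        vals = X[list(idx)][:, miss]       # neighbors' values at the gaps
        filled[g, miss] = w @ vals / w.sum()
    return filled

X = [[1.0, 2.0, float("nan")],
     [1.1, 2.1, 3.0],
     [0.9, 1.9, 2.8],
     [5.0, 5.0, 5.0]]
print(knn_impute(X, k=2))  # the NaN is filled from the two closest rows (~2.9)
```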
125d7bd51c44907e166d82469aa4a7ba1fb9b77f
Molecular classification of cancer: class discovery and class prediction by gene expression monitoring.
Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge.
2dfae0e14aea9cc7fae04aa8662765e6227439ae
Multiple Imputation for Missing Data: Concepts and New Development
Multiple imputation provides a useful strategy for dealing with data sets with missing values. Instead of filling in a single value for each missing value, Rubin's (1987) multiple imputation procedure replaces each missing value with a set of plausible values that represent the uncertainty about the right value to impute. These multiply imputed data sets are then analyzed by using standard procedures for complete data and combining the results from these analyses. No matter which complete-data analysis is used, the process of combining results from different imputed data sets is essentially the same. This results in valid statistical inferences that properly reflect the uncertainty due to missing values. This paper reviews methods for analyzing missing data, including basic concepts and applications of multiple imputation techniques. The paper also presents new SAS procedures for creating multiple imputations for incomplete multivariate data and for analyzing results from multiply imputed data sets. These procedures are still under development and will be available in experimental form in Release 8.1 of the SAS System.
Introduction
Most SAS statistical procedures exclude observations with any missing variable values from the analysis. These observations are called incomplete cases. While using only complete cases has its simplicity, you lose information in the incomplete cases. This approach also ignores the possible systematic difference between the complete cases and incomplete cases, and the resulting inference may not be applicable to the population of all cases, especially with a smaller number of complete cases. Some SAS procedures use all the available cases in an analysis, that is, cases with available information. For example, PROC CORR estimates a variable mean by using all cases with nonmissing values on this variable, ignoring the possible missing values in other variables. PROC CORR also estimates a correlation by using all cases with nonmissing values for this pair of variables. This may make better use of the available data, but the resulting correlation matrix may not be positive definite. Another strategy is simple imputation, in which you substitute a value for each missing value. Standard statistical procedures for complete data analysis can then be used with the filled-in data set. For example, each missing value can be imputed from the variable mean of the complete cases, or it can be imputed from the mean conditional on observed values of other variables. This approach treats missing values as if they were known in the complete-data analyses. Single imputation does not reflect the uncertainty about the predictions of the unknown missing values, and the resulting estimated variances of the parameter estimates will be biased toward zero. Instead of filling in a single value for each missing value, a multiple imputation procedure (Rubin 1987) replaces each missing value with a set of plausible values that represent the uncertainty about the right value to impute. The multiply imputed data sets are then analyzed by using standard procedures for complete data and combining the results from these analyses. No matter which complete-data analysis is used, the process of combining results from different data sets is essentially the same. Multiple imputation does not attempt to estimate each missing value through simulated values but rather to represent a random sample of the missing values.
This process results in valid statistical inferences that properly reflect the uncertainty due to missing values; for example, valid confidence intervals for parameters. Multiple imputation inference involves three distinct phases: (1) the missing data are filled in m times to generate m complete data sets; (2) the m complete data sets are analyzed by using standard procedures; and (3) the results from the m complete data sets are combined for the inference. A new SAS/STAT procedure, PROC MI, is a multiple imputation procedure that creates multiply imputed data sets for incomplete p-dimensional multivariate data. It uses methods that incorporate appropriate variability across the m imputations. Once the m complete data sets are analyzed by using standard procedures, another new procedure, PROC MIANALYZE, can be used to generate valid statistical inferences about these parameters by combining results from the m complete data sets.
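For the combining step, Rubin's (1987) rules are standard and compact; the sketch below shows the arithmetic that a procedure like PROC MIANALYZE performs on the m sets of results (the function name and toy numbers are illustrative).

```python
# Rubin's (1987) rules for combining the m complete-data analyses.
# q[i] is the point estimate from imputed data set i; u[i] is its
# complete-data variance (squared standard error).
import numpy as np

def pool(q, u):
    q, u = np.asarray(q, float), np.asarray(u, float)
    m = len(q)
    qbar = q.mean()                 # combined point estimate
    W = u.mean()                    # within-imputation variance
    B = q.var(ddof=1)               # between-imputation variance
    T = W + (1 + 1 / m) * B         # total variance of qbar
    return qbar, T

# Example: estimates of a mean from m = 5 imputed data sets.
est, var = pool([10.1, 9.8, 10.4, 10.0, 9.9],
                [0.25, 0.24, 0.26, 0.25, 0.25])
print(est, var ** 0.5)  # pooled estimate and its standard error
```

The between-imputation term B is what single imputation omits; dropping it is exactly why single-imputation variance estimates are biased toward zero, as the review notes.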
4c1b6d34c0c35e41fb0b0e76794f04d1d871d34b
An Image-based Feature Extraction Approach for Phishing Website Detection
Phishing website creators and anti-phishing defenders are in an arms race. Cloning a website is fairly easy and can be automated by any junior programmer. Attempting to recognize the numerous phishing links posted in the wild, e.g. on social media sites or in email, is a constant game of escalation. Automated phishing website detection systems need both speed and accuracy to win. We present a new method of detecting phishing websites and a prototype system, LEO (Logo Extraction and cOmparison), that implements it. LEO uses image feature recognition to extract "visual hotspots" of a webpage and compares these parts with known logo images. LEO can recognize phishing websites that have a different layout from the original websites, or logos embedded in images. Compared to existing visual similarity-based methods, our method has a much wider application range and higher detection accuracy. Our method successfully recognized 24 of 25 random URLs from PhishTank that had previously evaded detection by other visual similarity-based methods.
22c33a890c0bf4fc2a2d354d48ee9e00bffcc9a6
Clustering based anomalous transaction reporting
Anti-money laundering (AML) refers to a set of financial and technological controls that aim to combat the entrance of dirty money into financial systems. A robust AML system must be able to automatically detect any unusual/anomalous financial transactions committed by a customer. The paper presents a hybrid anomaly detection approach that employs clustering to establish customers' normal behaviors and uses statistical techniques to determine the deviation of a particular transaction from the corresponding group behavior. The approach implements a variant of Euclidean Adaptive Resonance Theory, termed TEART, to group customers into different clusters. The paper also suggests an anomaly index, named AICAF, for ranking transactions as anomalous. The approach has been tested on a real data set comprising 8.2 million transactions, and the results suggest that TEART scales well in terms of the partitions obtained when compared to the traditional K-means algorithm. The presented approach marks transactions having high AICAF values as suspicious.
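TEART and AICAF are specific to this paper, so the sketch below only illustrates the general pattern it builds on: cluster customers' transaction behavior, then score a transaction by its deviation from its cluster's statistics. KMeans and a plain z-score stand in for TEART and AICAF, and all data are synthetic.

```python
# Generic cluster-then-deviate anomaly scoring (a stand-in, not the
# paper's TEART/AICAF): fit clusters on transaction amounts, then score
# a new amount by its z-score within its assigned cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
amounts = np.concatenate([rng.normal(100, 10, 500),    # everyday spending
                          rng.normal(5000, 300, 50)])  # high-value customers
X = amounts.reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def anomaly_score(x, km, X):
    c = km.predict([[x]])[0]
    members = X[km.labels_ == c].ravel()
    return abs(x - members.mean()) / (members.std() + 1e-9)

print(anomaly_score(130.0, km, X))   # modest deviation: typical transaction
print(anomaly_score(9000.0, km, X))  # large score: candidate for reporting
```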
4cb04f57941ed2a5335cdb82e3db9bdd5079bd87
Decomposing Adult Age Differences in Working Memory
Two studies, involving a total of 460 adults between 18 and 87 years of age, were conducted to determine which of several hypothesized processing components was most responsible for age-related declines in working memory functioning. Significant negative correlations between age and measures of working memory (i.e., from -.39 to -.52) were found in both studies, and these relations were substantially attenuated by partialing measures hypothesized to reflect storage capacity, processing efficiency, coordination effectiveness, and simple comparison speed. Because the greatest attenuation of the age relations occurred with measures of simple processing speed, it was suggested that many of the age differences in working memory may be mediated by age-related reductions in the speed of executing elementary operations.
b97cdc4bd0b021caefe7921c8c637b76f8a8114b
The Deep Regression Bayesian Network and Its Applications: Probabilistic Deep Learning for Computer Vision
Deep directed generative models have attracted much attention recently due to their generative modeling nature and powerful data representation ability. In this article, we review different structures of deep directed generative models and the learning and inference algorithms associated with these structures. We focus on a specific structure that consists of layers of Bayesian networks (BNs), due to its property of capturing inherent and rich dependencies among latent variables. The major difficulty of learning and inference with deep directed models with many latent variables is the intractable inference due to the dependencies among the latent variables and the exponential number of latent variable configurations. Current solutions use variational methods, often through an auxiliary network, to approximate the posterior probability inference. In contrast, inference can also be performed directly without using any auxiliary network to maximally preserve the dependencies among the latent variables. Specifically, by exploiting the sparse representation of the latent space, a max-max instead of max-sum operation can be used to overcome the exponential number of latent configurations. Furthermore, the max-max operation and augmented coordinate ascent (AugCA) are applied to both supervised and unsupervised learning as well as to various inference tasks. Quantitative evaluations on benchmark data sets of different models are given for both data representation and feature-learning tasks.
ec3cd5873b32221677df219fb7a06876fdd1de49
Making working memory work: a meta-analysis of executive-control and working memory training in older adults.
This meta-analysis examined the effects of process-based executive-function and working memory training (49 articles, 61 independent samples) in older adults (> 60 years). The interventions resulted in significant effects on performance on the trained task and near-transfer tasks; significant results were obtained for the net pretest-to-posttest gain relative to active and passive control groups and for the net effect at posttest relative to active and passive control groups. Far-transfer effects were smaller than near-transfer effects but were significant for the net pretest-to-posttest gain relative to passive control groups and for the net gain at posttest relative to both active and passive control groups. We detected marginally significant differences in training-induced improvements between working memory and executive-function training, but no differences between the training-induced improvements observed in older adults and younger adults, between the benefits associated with adaptive and nonadaptive training, or between the effects in active and passive control conditions. Gains did not vary with total training time.
5d1730136d23d5f1a6d0fea50a2203d8df6eb3db
Direct Torque and Indirect Flux Control of Brushless DC Motor
In this paper, the position-sensorless direct torque and indirect flux control of a brushless dc (BLDC) motor with nonsinusoidal back electromotive force (EMF) has been extensively investigated. In the literature, several methods have been proposed for BLDC motor drives to obtain optimum current and torque control with minimum torque pulsations. Most methods are complicated and do not consider stator flux linkage control; therefore, possible high-speed operations are not feasible. In this study, a novel and simple approach to achieve a low-frequency torque ripple-free direct torque control (DTC) with maximum efficiency based on the dq reference frame is presented. The proposed sensorless method closely resembles the conventional DTC scheme used for sinusoidal ac motors such that it controls the torque directly and the stator flux amplitude indirectly using the d-axis current. This method does not require pulsewidth modulation and proportional-plus-integral regulators and also permits the regulation of varying signals. Furthermore, to eliminate the low-frequency torque oscillations, two actual and easily available line-to-line back-EMF constants (k_ba and k_ca) according to electrical rotor position are obtained offline and converted to the dq frame equivalents using the new line-to-line Park transformation. Then, they are set up in a look-up table for torque estimation. The validity and practical applications of the proposed sensorless three-phase conduction DTC of the BLDC motor drive scheme are verified through simulations and experimental results.
b0de31324518f5281c769b8047fae7c2cba0de5c
Automatic Identification and Classification of Misogynistic Language on Twitter
Hate speech may take different forms in online social media. Most of the investigations in the literature are focused on detecting abusive language in discussions about ethnicity, religion, gender identity and sexual orientation. In this paper, we address the problem of automatic detection and categorization of misogynous language in online social media. The main contribution of this paper is two-fold: (1) a corpus of misogynous tweets, labelled from different perspectives, and (2) an exploratory investigation of NLP features and ML models for detecting and classifying misogynistic language.
017f511734b7094c360ac7854d39f2fa063e8c9c
Role of IL-33 in inflammation and disease
Interleukin (IL)-33 is a new member of the IL-1 superfamily of cytokines that is expressed mainly by stromal cells, such as epithelial and endothelial cells, and its expression is upregulated following pro-inflammatory stimulation. IL-33 can function both as a traditional cytokine and as a nuclear factor regulating gene transcription. It is thought to function as an 'alarmin' released following cell necrosis to alert the immune system to tissue damage or stress. It mediates its biological effects via interaction with the receptors ST2 (IL-1RL1) and IL-1 receptor accessory protein (IL-1RAcP), both of which are widely expressed, particularly by innate immune cells and T helper 2 (Th2) cells. IL-33 strongly induces Th2 cytokine production from these cells and can promote the pathogenesis of Th2-related diseases such as asthma, atopic dermatitis and anaphylaxis. However, IL-33 has shown various protective effects in cardiovascular diseases such as atherosclerosis, obesity, type 2 diabetes and cardiac remodeling. Thus, the effects of IL-33 are either pro- or anti-inflammatory depending on the disease and the model. In this review the role of IL-33 in the inflammation of several disease pathologies will be discussed, with particular emphasis on recent advances.
60686a80b91ce9518428e00dea95dfafadadd93c
A Dual-Fed Aperture-Coupled Microstrip Antenna With Polarization Diversity
This communication presents a dual-port reconfigurable square patch antenna with polarization diversity for 2.4 GHz. By controlling the states of four p-i-n diodes on the patch, the polarization of the proposed antenna can be switched among linear polarization (LP), left- or right-hand circular polarization (CP) at each port. The air substrate and aperture-coupled feed structure are employed to simplify the bias circuit of p-i-n diodes. With high isolation and low cross-polarization level in LP modes, both ports can work simultaneously as a dual linearly polarized antenna for polarimetric radars. Different CP waves are obtained at each port, which are suitable for addressing challenges ranging from mobility, adverse weather conditions and non-line-of-sight applications. The antenna has advantages of simple biasing network, easy fabrication and adjustment, which can be widely applied in polarization diversity applications.
0cea7a2f9e0d156af3ce6ff3ebf9b07fbd98a90d
Expression Cloning of TMEM16A as a Calcium-Activated Chloride Channel Subunit
Calcium-activated chloride channels (CaCCs) are major regulators of sensory transduction, epithelial secretion, and smooth muscle contraction. Other crucial roles of CaCCs include action potential generation in Characean algae and prevention of polyspermia in frog egg membrane. None of the known molecular candidates share properties characteristic of most CaCCs in native cells. Using Axolotl oocytes as an expression system, we have identified TMEM16A as the Xenopus oocyte CaCC. The TMEM16 family of "transmembrane proteins with unknown function" is conserved among eukaryotes, with family members linked to tracheomalacia (mouse TMEM16A), gnathodiaphyseal dysplasia (human TMEM16E), aberrant X segregation (a Drosophila TMEM16 family member), and increased sodium tolerance (yeast TMEM16). Moreover, mouse TMEM16A and TMEM16B yield CaCCs in Axolotl oocytes and mammalian HEK293 cells and recapitulate the broad CaCC expression. The identification of this new family of ion channels may help the development of CaCC modulators for treating diseases including hypertension and cystic fibrosis.
6af807ff627e9fff8742e6d9196d8cbe79007f85
An improved wavelet based shock wave detector
In this paper, the detection of shock waves generated by supersonic bullets is considered. A wavelet-based multi-scale products method has been widely used for detection. However, the performance of this method decreases at low signal-to-noise ratio (SNR). It is noted that the method does not consider the distribution of the signal and noise. Thus, we analyze the method under the standard likelihood ratio test in this paper. It is found that the multi-scale products method rests on an assumption that is extremely restrictive and holds only for a special noise condition. Based on this analysis, a general condition is considered for the detection, and an improved detector under the standard likelihood ratio test is proposed. Monte Carlo simulations are conducted with simulated shock waves under additive white Gaussian noise. The results show that the new detection algorithm outperforms the conventional detection algorithm.
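For reference, the conventional detector being analyzed multiplies wavelet detail coefficients across adjacent dyadic scales, so that true singularities reinforce while noise tends to cancel. The sketch below illustrates that baseline with a Haar-like difference filter standing in for the paper's (unspecified here) wavelet choice; it is an assumption-laden illustration of the baseline, not the paper's improved detector.

```python
# Baseline multi-scale products detector: pointwise product of detail
# coefficients at a few adjacent scales; edges of the shock's N-wave
# reinforce across scales while white noise does not.
import numpy as np

def detail(x, scale):
    # Difference-of-means filter: a crude stand-in for an undecimated
    # wavelet detail at the given dyadic scale (all scales stay aligned).
    k = np.concatenate([np.ones(scale), -np.ones(scale)]) / scale
    return np.convolve(x, k, mode="same")

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2048)
x[1000:1040] += np.linspace(3.0, -3.0, 40)   # crude N-wave shock signature

p = detail(x, 2) * detail(x, 4) * detail(x, 8)  # multi-scale product
print(int(np.argmax(np.abs(p))))  # peaks near a shock edge (~1000 or ~1040)
```

The paper's observation is that this statistic ignores the signal and noise distributions entirely; recasting detection as a likelihood-ratio test on the same coefficients is what improves low-SNR behavior.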
dd41010faa2c848729bc79614f1846a2267f1904
Cascaded Random Forest for Fast Object Detection
A Random Forest consists of several independent decision trees arranged in a forest. A majority vote over all trees leads to the final decision. In this paper we propose a Random Forest framework which incorporates a cascade structure consisting of several stages together with a bootstrap approach. By introducing the cascade, 99% of the test images can be rejected by the first and second stages with minimal computational effort, leading to a massively sped-up detection framework. Three different cascade voting strategies are implemented and evaluated. Additionally, the training and classification speed-up is analyzed. Several experiments on publicly available datasets for pedestrian detection, lateral car detection and unconstrained face detection demonstrate the benefit of our contribution.
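A minimal sketch of the cascade idea follows: small forests with permissive thresholds reject most windows early, and only survivors reach the large final forest. Stage sizes, thresholds, and the single probability-based rejection rule are illustrative assumptions, not the paper's three voting strategies.

```python
# Cascade of Random Forest stages: cheap early stages reject easy
# negatives; only windows passing every stage reach the final forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy "detection windows": class 1 plays the object, class 0 the background.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

# Three stages of growing size; a window survives a stage only if the
# forest's estimated object probability clears that stage's threshold.
stages = [(RandomForestClassifier(n_estimators=n, random_state=0).fit(X, y), t)
          for n, t in [(5, 0.2), (20, 0.4), (100, 0.5)]]

def cascade_predict(x):
    for forest, threshold in stages:
        if forest.predict_proba(x.reshape(1, -1))[0, 1] < threshold:
            return 0          # rejected early, cheaply
    return 1                  # survived every stage

preds = np.array([cascade_predict(x) for x in X[:200]])
print(int(preds.sum()), "of 200 windows reach a positive decision")
```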
f212f69199a4ca3ca7c5b59cd6325d06686c1956
A density-based cluster validity approach using multi-representatives
Although the goal of clustering is intuitively compelling and its notion arises in many fields, it is difficult to define a unified approach to address the clustering problem and thus diverse clustering algorithms abound in the research community. These algorithms, under different clustering assumptions, often lead to qualitatively different results. As a consequence the results of clustering algorithms (i.e. data set partitionings) need to be evaluated as regards their validity based on widely accepted criteria. In this paper a cluster validity index, CDbw, is proposed which assesses the compactness and separation of clusters defined by a clustering algorithm. The cluster validity index, given a data set and a set of clustering algorithms, enables: i) the selection of the input parameter values that lead an algorithm to the best possible partitioning of the data set, and ii) the selection of the algorithm that provides the best partitioning of the data set. CDbw handles efficiently arbitrarily shaped clusters by representing each cluster with a number of points rather than by a single representative point. A full implementation and experimental results confirm the reliability of the validity index showing also that its performance compares favourably to that of several others.
144ea690592d0dce193cbbaac94266a0c3c6f85d
Multilevel Inverters for Electric Vehicle Applications
This paper presents multilevel inverters as an application for all-electric vehicle (EV) and hybrid-electric vehicle (HEV) motor drives. Diode-clamped inverters and cascaded H-bridge inverters (1) can generate near-sinusoidal voltages with only fundamental-frequency switching; (2) have almost no electromagnetic interference (EMI) and common-mode voltage; and (3) make an EV more accessible/safer and open wiring possible for most of an EV's power system. This paper explores the benefits and discusses control schemes of the cascade inverter for use as an EV motor drive or a parallel HEV drive, and of the diode-clamped inverter as a series HEV motor drive. Analytical, simulated, and experimental results show the superiority of these multilevel inverters for this new niche.
d2f4fb27454bb92f63446e4a059f59b35f4c2508
X-band FMCW radar system with variable chirp duration
For application in a short-range ground-based surveillance radar, a combination of frequency-modulated continuous-wave (FMCW) transmit signals and a receive antenna array system is considered in this paper. The target echo signal is directly down-converted by the instantaneous transmit frequency. The target range R is estimated from the measured frequency shift f_B between transmit and receive signal. Due to an extremely short chirp duration T_chirp, the target radial velocity v_r has only a very small influence on the measured frequency shift f_B. Therefore, the radial velocity v_r is not measured inside a single FMCW chirp but over a sequence of chirp signals and inside each individual range gate. Finally, the target azimuth angle is calculated utilizing the receive antenna array and applying a digital beamforming scheme. Furthermore, in order to unambiguously measure even high radial velocities, a variable chirp duration is proposed on a dwell-to-dwell basis.
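The relations underlying this scheme are the standard FMCW equations, written here in the abstract's notation; the sweep bandwidth B_sweep and wavelength λ are not named in the abstract and are introduced for the illustration.

```latex
% Standard FMCW relations; B_sweep (sweep bandwidth) and \lambda
% (carrier wavelength) are assumed symbols, not from the abstract.
\[
  f_B \;\approx\; \frac{2 B_{\mathrm{sweep}}}{c\,T_{\mathrm{chirp}}}\,R
      \;+\; \frac{2 v_r}{\lambda},
  \qquad
  R \;\approx\; \frac{c\,T_{\mathrm{chirp}}}{2 B_{\mathrm{sweep}}}\,f_B
  \quad\text{for very short } T_{\mathrm{chirp}},
\]
\[
  \Delta\varphi \;=\; \frac{4\pi v_r}{\lambda}\,T_{\mathrm{chirp}}
  \quad\text{(phase step between successive chirps, per range gate).}
\]
```

A very short T_chirp makes the Doppler term negligible within one chirp, so R follows from f_B alone; v_r is then recovered from the phase progression Δφ across the chirp sequence, which is exactly the two-step measurement the abstract describes.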
0d11248c42d5a57bb28b00d64e21a32d31bcd760
Code-Red: a case study on the spread and victims of an internet worm
On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet. In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, infection activity exhibited time-of-day effects, and found that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also qualified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the Code-Red worm demonstrates that wide-spread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms.
bd58d8547ca844e6dc67f41c953bf133ce11d9b7
On the Generation of Skeletons from Discrete Euclidean Distance Maps
The skeleton is an important representation for shape analysis. A common approach for generating discrete skeletons takes three steps: 1) computing the distance map, 2) detecting maximal disks from the distance map, and 3) linking the centers of maximal disks (CMDs) into a connected skeleton. Algorithms using approximate distance metrics are abundant and their theory has been well established. However, the resulting skeletons may be inaccurate and sensitive to rotation. In this paper, we study methods for generating skeletons based on the exact Euclidean metric. We first show that no previous algorithm identifies the exact set of discrete maximal disks under the Euclidean metric. We then propose new algorithms and show that they are correct. To link CMDs into connected skeletons, we examine two prevalent approaches: connected thinning and steepest ascent. We point out that the connected thinning approach does not work properly for Euclidean distance maps. Only the steepest ascent algorithm produces skeletons that are truly medially placed. The resulting skeletons have all the desirable properties: they have the same simple connectivity as the figure, they are well-centered, they are insensitive to rotation, and they allow exact reconstruction. The effectiveness of our algorithms is demonstrated with numerous examples.
0e62a123913b0dca9e1697a3cbf978d69dd9284d
CloudSpeller: query spelling correction by using a unified hidden markov model with web-scale resources
Query spelling correction is an important component of modern search engines that can help users express an information need more accurately and thus improve search quality. In this work we propose and implement an end-to-end spelling correction system, namely CloudSpeller. The CloudSpeller system uses a Hidden Markov Model to effectively model major types of spelling errors in a unified framework, in which we integrate a large-scale lexicon constructed using Wikipedia, an error model trained from high-confidence correction pairs, and the Microsoft Web N-gram service. Our system achieves excellent performance on two search query spelling correction datasets, reaching F1 scores of 0.960 and 0.937 on the TREC dataset and the MSN dataset, respectively.
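A toy noisy-channel decoder in the same spirit: hidden states are intended words, emissions are the typed words, and a bigram language model scores transitions, decoded with Viterbi. The four-word lexicon, crude edit-distance error model, and bigram scores below are invented for the illustration; the real system's components (Wikipedia-scale lexicon, trained error model, Microsoft Web N-gram) are far larger.

```python
# Toy HMM spelling corrector: Viterbi over intended-word states.
lexicon = ["golf", "gulf", "war", "ware"]

def emit_logp(intended, typed):
    # Stand-in error model: log-probability falls with a crude
    # per-character mismatch count plus the length difference.
    dist = sum(a != b for a, b in zip(intended, typed))
    dist += abs(len(intended) - len(typed))
    return -2.0 * dist

bigram = {("gulf", "war"): -1.0, ("golf", "war"): -4.0}

def lm_logp(prev, word):
    return bigram.get((prev, word), -6.0)  # back-off score for unseen pairs

def correct(query):
    words = query.split()
    # paths[w] = (best log-score, best word sequence ending in state w).
    paths = {w: (emit_logp(w, words[0]), [w]) for w in lexicon}
    for typed in words[1:]:
        nxt = {}
        for w in lexicon:
            score, path = max(
                (s + lm_logp(p[-1], w) + emit_logp(w, typed), p)
                for s, p in paths.values())
            nxt[w] = (score, path + [w])
        paths = nxt
    return " ".join(max(paths.values())[1])

print(correct("golf war"))  # -> "gulf war": context outweighs the exact match
```

The example shows the unified-framework point of the abstract: the language model can overrule the error model, correcting a query that contains no misspelled word in isolation.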
ddc334306f269968451ca720b3d804e9b0911765
Unsupervised Event Tracking by Integrating Twitter and Instagram
This paper proposes an unsupervised framework for tracking real world events from their traces on Twitter and Instagram. Empirical data suggests that event detection from Instagram streams errs on the false-negative side due to the relative sparsity of Instagram data (compared to Twitter data), whereas event detection from Twitter can suffer from false-positives, at least if not paired with careful analysis of tweet content. To tackle both problems simultaneously, we design a unified unsupervised algorithm that fuses events detected originally on Instagram (called I-events) and events detected originally on Twitter (called T-events), that occur in adjacent periods, in an attempt to combine the benefits of both sources while eliminating their individual disadvantages. We evaluate the proposed framework with real data crawled from Twitter and Instagram. The results indicate that our algorithm significantly improves tracking accuracy compared to baselines.
92319b104fe2e8979e8237a587bdf455bc7fbc83
Design consideration of recent advanced low-voltage CMOS boost converter for energy harvesting
With the emergence of nanoscale-material-based energy harvesters such as thermoelectric generators and microbial fuel cells, energy-harvesting-assisted self-powered electronic systems are gaining popularity. The state-of-the-art low-voltage CMOS boost converter, a critical voltage converter circuit for low-power energy harvesting sources, is reviewed in this paper. Fundamentals of the boost converter circuit startup problem are discussed, and recent circuit solutions to this problem are compared and analyzed. Necessary design considerations and trade-offs regarding circuit topology, components and CMOS process are also addressed.
0462a4fcd991f8d6f814337882da182c504d1d7b
Syntactic Annotations for the Google Books NGram Corpus
We present a new edition of the Google Books Ngram Corpus, which describes how often words and phrases were used over a period of five centuries, in eight languages; it reflects 6% of all books ever published. This new edition introduces syntactic annotations: words are tagged with their part-of-speech, and head-modifier relationships are recorded. The annotations are produced automatically with statistical models that are specifically adapted to historical text. The corpus will facilitate the study of linguistic trends, especially those related to the evolution of syntax.
00daf408c36359b14a92953fda814b6e3603b522
A Bayesian framework for word segmentation: Exploring the effects of context
Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words--in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what's that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered.
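To make the modeling idea concrete, here is a minimal Python sketch of maximum-probability segmentation of an unspaced utterance under a word model, via dynamic programming. The logprob lexicon model is a hypothetical stand-in for the paper's Bayesian unigram/bigram models. Under a unigram (independence) model, frequent collocations such as "whatsthat" can outscore their two-word segmentations, which is the undersegmentation effect described above.

# Hedged sketch: maximum-probability segmentation via dynamic programming.
import math

def segment(s, logprob, max_len=12):
    """s: unspaced string. logprob(w): log P(w) under the learner's word model
    (assumed to return a finite value, possibly very low, for novel strings)."""
    best = [0.0] + [-math.inf] * len(s)   # best[i]: score of best parse of s[:i]
    back = [0] * (len(s) + 1)
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - max_len), i):
            cand = best[j] + logprob(s[j:i])
            if cand > best[i]:
                best[i], back[i] = cand, j
    words, i = [], len(s)
    while i > 0:                          # recover the segmentation
        words.append(s[back[i]:i])
        i = back[i]
    return words[::-1]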
0dde53334f17ac4a2b9aee0915ab001f8add692f
Quantifying the evolutionary dynamics of language
Human language is based on grammatical rules. Cultural evolution allows these rules to change over time. Rules compete with each other: as new rules rise to prominence, old ones die away. To quantify the dynamics of language evolution, we studied the regularization of English verbs over the past 1,200 years. Although an elaborate system of productive conjugations existed in English’s proto-Germanic ancestor, Modern English uses the dental suffix, ‘-ed’, to signify past tense. Here we describe the emergence of this linguistic rule amidst the evolutionary decay of its exceptions, known to us as irregular verbs. We have generated a data set of verbs whose conjugations have been evolving for more than a millennium, tracking inflectional changes to 177 Old-English irregular verbs. Of these irregular verbs, 145 remained irregular in Middle English and 98 are still irregular today. We study how the rate of regularization depends on the frequency of word usage. The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast. Our study provides a quantitative analysis of the regularization process by which ancestral forms gradually yield to an emerging linguistic rule.
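The reported scaling law is simple enough to state as code: the half-life of an irregular verb grows as the square root of its usage frequency, so a verb 100 times less frequent regularizes 10 times as fast.

# The paper's scaling: half-life ~ sqrt(usage frequency).
def relative_half_life(freq_ratio):
    """Half-life of verb A relative to verb B, given freq_A / freq_B."""
    return freq_ratio ** 0.5

print(relative_half_life(1 / 100))  # 0.1 -> ten times shorter half-life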
8f563b44db3e9fab315b78cbcccae8ad69f0a000
Internet Privacy Concerns Confirm the Case for Intervention
Cyberspace is invading private space. Controversies about spam, cookies, and the clickstream are merely the tip of an iceberg. Behind them loom real-time person location technologies. It's small wonder that lack of public confidence is a serious impediment to the take-up rate of consumer e-commerce. The concerns are not merely about security of value, but about something much more significant: trust in the information society. Conventional thinking has been that the Internet renders laws less relevant. On the contrary, this article argues that the current debates about privacy and the Internet are the harbingers of a substantial shift. Because the U.S. has held off general privacy protections for so long, it will undergo much more significant adjustments than European countries. Privacy is often thought of as a moral right or a legal right, but it's often more useful to perceive privacy as the interest that individuals have in sustaining a personal space, free from interference by other people and organizations. Personal space has multiple dimensions, in particular privacy of the person (concerned with the integrity of the individual's body), privacy of personal behavior, privacy of personal communications, and privacy of personal data. Information privacy refers to the claims of individuals that data about themselves should generally not be available to other individuals and organizations, and that, where data is possessed by another party, the individual must be able to exercise a substantial degree of control over that data and its use. (Definitional issues are examined in [6].) Information privacy has been under increasing threat as a result of the rapid replacement of expensive physical surveillance by what I referred to in Communications over a decade ago as "dataveillance": the systematic use of personal data systems in the investigation or monitoring of people's actions or communications [2]. Intensive data trails about each individual provide a basis for the exercise of power over them. Public confidence in matters of online privacy seemingly lessens as the Internet grows. Indeed, there is mounting evidence that the necessary remedy may be a protective framework that includes (gulp) legislative provisions.
fbf7e8e8ecc47eceee4e3f86e3eecf5b489a350b
An Engineering Model for Color Difference as a Function of Size
This work describes a first step towards the creation of an engineering model for the perception of color difference as a function of size. Our approach is to non-uniformly rescale CIELAB using data from crowdsourced experiments, such as those run on Amazon Mechanical Turk. In such experiments, the inevitable variations in viewing conditions reflect the environment many applications must run in. Our goal is to create a useful model for design applications where it is important to make colors distinct, but for which a small set of highly distinct colors is inadequate.
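A hedged sketch of the general shape of such a model: compute the usual Euclidean Delta-E in CIELAB after rescaling each axis by a size-dependent factor. The scale function here is a hypothetical placeholder, not the paper's fitted rescaling.

# Hedged sketch: size-dependent color difference via per-axis CIELAB rescaling.
import math

def delta_e_sized(lab1, lab2, size_deg, scale):
    """lab*: (L, a, b) tuples; size_deg: stimulus size in visual degrees;
    scale(axis, size_deg): multiplicative rescaling for axis 'L', 'a', or 'b'."""
    return math.sqrt(sum(
        (scale(axis, size_deg) * (c1 - c2)) ** 2
        for axis, (c1, c2) in zip("Lab", zip(lab1, lab2))))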
3aeb560af8ff8509e6ef0010ae2b53bd15726230
Generating UML Diagrams from Natural Language Specifications
The process of generating UML diagrams from natural language specifications is a highly challenging task. This paper proposes a method and tool to facilitate the requirements analysis process and extract UML diagrams from textual requirements using natural language processing (NLP) and domain ontology techniques. Requirements engineers analyze requirements manually to understand the scope of the system. The time spent on the analysis and the low quality of human analysis justify the need for a tool for better understanding of the system. “Requirement analysis to Provide Instant Diagrams (RAPID)” is a desktop tool to assist requirements analysts and software engineering students in analyzing textual requirements, finding core concepts and their relationships, and extracting UML diagrams. The evaluation of the RAPID system is in progress and will be conducted through two forms of evaluation, experimental and expert evaluation.
02eff775e05d9e67e2498fe464be598be4ab84ce
Chatbot for admissions
The communication of potential students with a university department is performed manually and is a very time-consuming procedure. The opportunity to communicate on a one-to-one basis is highly valued. However, with many hundreds of applications each year, one-to-one conversations are not feasible in most cases. The communication requires a member of academic staff to expend several hours finding suitable answers and contacting each student. It would be useful to reduce these costs and this time. The project aims to reduce the burden on the head of admissions, and potentially other users, by developing a convincing chatbot. A suitable algorithm must be devised to search through the set of data and find a potential answer. The program then replies to the user and provides a relevant web link if the user is not satisfied by the answer. Furthermore, a web interface is provided for both users and an administrator. The achievements of the project can be summarised as follows. To prepare the background of the project a literature review was undertaken, together with an investigation of existing tools, and consultation with the head of admissions. The requirements of the system were established and a range of algorithms and tools were investigated, including keyword and template matching. An algorithm that combines keyword matching with string similarity has been developed. A usable system using the proposed algorithm has been implemented. The system was evaluated by keeping logs of questions and answers and by feedback received from potential students who used it. Although the admissions process works properly as it is, it is very difficult and time consuming to contact a member of staff of the university. However, the problem would be partially solved if the applicant could talk to a convincing chatbot, able to respond to their concerns with information about admissions, booking accommodation, paying fees in instalments and what pre-sessional courses are on offer. The …
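A minimal Python sketch of the kind of matcher the report describes, combining keyword overlap with string similarity (here via the standard library's difflib); the weighting is an illustrative assumption, not the project's tuned algorithm.

# Hedged sketch: rank stored Q&A pairs by keyword overlap + string similarity.
from difflib import SequenceMatcher

def score(query, question, alpha=0.5):
    q_words, c_words = set(query.lower().split()), set(question.lower().split())
    keyword = len(q_words & c_words) / max(1, len(q_words))       # overlap share
    similarity = SequenceMatcher(None, query.lower(), question.lower()).ratio()
    return alpha * keyword + (1 - alpha) * similarity             # weighted mix

def best_answer(query, faq):
    """faq: list of (question, answer) pairs; returns the best-matching answer."""
    return max(faq, key=lambda qa: score(query, qa[0]))[1]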
5cc695c35e87c91c060aa3fbf9305b4fdc960c9f
Levofloxacin implants with predefined microstructure fabricated by three-dimensional printing technique.
A novel three-dimensional (3D) printing technique was utilized in the preparation of drug implants that can be designed to have complex drug release profiles. The method we describe is based on a lactic acid polymer matrix with a predefined microstructure that is amenable to rapid prototyping and fabrication. We describe how the process parameters, especially selection of the binder, were optimized. Implants containing levofloxacin (LVFX) with predefined microstructures using an optimized binder solution of ethanol and acetone (20:80, v/v) were prepared by a 3D printing process that achieved a bi-modal profile displaying both pulsatile and steady state LVFX release from a single implant. The pulse release appeared from day 5 to 25, followed by a steady state phase of 25 days. The next pulse release phase then began at the 50th day and ended at the 80th day. To evaluate the drug implants structurally and analytically, the microscopic morphologies and the in vitro release profiles of the implants fabricated by both the 3D printing technique and the conventional lost mold technique were assessed using environmental scanning electron microscopy (ESEM) and UV absorbance spectrophotometry. The results demonstrate that the 3D printing technology can be used to fabricate drug implants with sophisticated micro- and macro-architecture in a single device that may be rapidly prototyped and fabricated. We conclude that drug implants with predefined microstructure fabricated by 3D printing techniques can have clear advantages compared to implants fabricated by conventional compressing methods.
30fa9a026e511ee1f00f57c761b62f59c0c4b7c0
A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue
We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP- and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system.
a4f649c50b328705540652cb26e0e8a1830ff676
Smart Home Automated Control System Using Android Application and Microcontroller
A Smart Home System (SHS) is a dwelling incorporating a communications network that connects the electrical appliances and services, allowing them to be remotely controlled, monitored or accessed. SHS includes different approaches to achieve multiple objectives, ranging from enhancing comfort in daily life to enabling a more independent life for elderly and handicapped people. In this paper, four main fields for SHS have been considered: home automation and remote monitoring; environmental monitoring, including humidity and temperature; fault tracking and management; and health monitoring. The system design is based on the MIKRO C microcontroller software, multiple passive and active sensors, and a wireless internet service used in different monitoring and control processes. This paper presents the hardware implementation of a multiplatform control system for house automation, combining both hardware and software technologies. The results show that the system can be classified as comfortable, secure, private, economic and safe, in addition to its great flexibility and reliability.
221d61b5719c3c66109d476f3b35b1f557a60769
Regression Shrinkage and Selection via the Elastic Net , with Applications to Microarrays
We propose the elastic net, a new regression shrinkage and selection method. Real data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors are kept in the model. The elastic net is particularly useful in the analysis of microarray data, in which the number of genes (predictors) is much bigger than the number of samples (observations). We show how the elastic net can be used to construct a classification rule and perform automatic gene selection at the same time in microarray data, where the lasso is not very satisfactory. We also propose an efficient algorithm for solving the elastic net based on the recently invented LARS algorithm. Keywords: gene selection; grouping effect; lasso; LARS algorithm; microarray classification.
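For readers who want to try the method, a minimal sketch using scikit-learn's ElasticNet on a synthetic p >> n problem; the paper itself proposes a LARS-based solver, while sklearn's coordinate-descent implementation is used here only for illustration.

# Minimal sketch: elastic net on a p >> n problem (50 samples, 1000 "genes").
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=50, n_features=1000, n_informative=10,
                       random_state=0)
model = ElasticNet(alpha=1.0, l1_ratio=0.5)  # mixes L1 (lasso) and L2 (ridge)
model.fit(X, y)
selected = (model.coef_ != 0).sum()          # automatic selection via sparsity
print(f"{selected} of 1000 predictors kept")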
5999a5f3a49a53461b02c139e16f79cf820a5774
Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape
The problem of path planning for an automaton moving in a two-dimensional scene filled with unknown obstacles is considered. The automaton is presented as a point; obstacles can be of an arbitrary shape, with continuous boundaries and of finite size; no restriction on the size of the scene is imposed. The information available to the automaton is limited to its own current coordinates and those of the target position. Also, when the automaton hits an obstacle, this fact is detected by the automaton's “tactile sensor.” This information is shown to be sufficient for reaching the target or concluding in finite time that the target cannot be reached. A worst-case lower bound on the length of paths generated by any algorithm operating within the framework of the accepted model is developed; the bound is expressed in terms of the perimeters of the obstacles met by the automaton in the scene. Algorithms that guarantee reaching the target (if the target is reachable), and tests for target reachability are presented. The efficiency of the algorithms is studied, and worst-case upper bounds on the length of generated paths are produced.
a1dcc2a3bbd58befa7ba4b9b816aabc4aa450b38
Obsessive-compulsive disorder and gut microbiota dysregulation.
Obsessive-compulsive disorder (OCD) is a debilitating disorder for which the cause is not known and treatment options are modestly beneficial. A hypothesis is presented wherein the root cause of OCD is proposed to be a dysfunction of the gut microbiome constituency resulting in a susceptibility to obsessional thinking. Both stress and antibiotics are proposed as mechanisms by which gut microbiota are altered preceding the onset of OCD symptomology. In this light, pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS) leading to episodic OCD is explained not by group A beta-hemolytic streptococcal infections, but rather by prophylactic antibiotics that are administered as treatment. Further, stressful life events known to trigger OCD, such as pregnancy, are recast to show the possibility of altering gut microbiota prior to onset of OCD symptoms. Suggested treatment for OCD would be the directed, species-specific (re)introduction of beneficial bacteria modifying the gut microbiome, thereby ameliorating OCD symptoms. Special considerations should be contemplated when considering efficacy of treatment, particularly the unhealthy coping strategies often observed in patients with chronic OCD that may need addressing in conjunction with microbiome remediation.
a67f9ecae9ccab7e13630f90cdbf826ba064eef7
Event-Based Mobile Social Networks: Services, Technologies, and Applications
Event-based mobile social networks (MSNs) are a special type of MSN with an inherently temporal common feature, which allows any smartphone user to create events to share group messaging, locations, photos, and insights among participants. The emergence of the Internet of Things and event-based social applications integrated with context-awareness ability can be helpful in planning and organizing social events like meetings, conferences, and tradeshows. This paper first provides a review of event-based social networks and the basic principles and architecture of event-based MSNs. Next, event-based MSNs with smartphone-contained technology elements, such as context-aware mobility and multimedia sharing, are presented. By combining context-aware mobility with multimedia sharing in event-based MSNs, event organizers and planners, together with service providers, optimize their capability to realize value for the multimedia services they deliver. The unique features of current event-based MSNs give rise to the major technology trends to watch for designing applications. These mobile applications and their main features are described. At the end, discussions on the evaluation of event-based mobile applications based on their main features are presented. Some open research issues and challenges in this important area of research are also outlined.
c75c82be2e98a8d66907742a89b886902c1a0162
Fully Integrated Startup at 70 mV of Boost Converters for Thermoelectric Energy Harvesting
This paper presents an inductive DC-DC boost converter for energy harvesting using a thermoelectric generator with a minimum startup voltage of 70 mV and a regulated output voltage of 1.25 V. With a typical generator resistance of 40 Ω, an output power of 17 μW can be provided, which translates to an end-to-end efficiency of 58%. The converter employs Schmitt-trigger logic startup control circuitry and an ultra-low voltage charge pump using modified Schmitt-trigger driving circuits optimized for driving capacitive loads. Together with a novel ultra-low leakage power switch and the required control scheme, to the best of the authors' knowledge, this enables the lowest minimum voltage with fully integrated startup.
9ce153635b16fed63a2ec5023533f1143c19e619
Adaptation and the Set-Point Model of Subjective Well-Being: Does Happiness Change After Major Life Events?
Hedonic adaptation refers to the process by which individuals return to baseline levels of happiness following a change in life circumstances. Dominant models of subjective well-being (SWB) suggest that people can adapt to almost any life event and that happiness levels fluctuate around a biologically determined set point that rarely changes. Recent evidence from large-scale panel studies challenges aspects of this conclusion. Although inborn factors certainly matter and some adaptation does occur, events such as divorce, death of a spouse, unemployment, and disability are associated with lasting changes in SWB. These recent studies also show that there are considerable individual differences in the extent to which people adapt. Thus, happiness levels do change, and adaptation is not inevitable. KEYWORDS—happiness; subjective well-being; adaptation; set-point theory. People’s greatest hopes and fears often center on the possible occurrence of rare but important life events. People may dread the possibility of losing a loved one or becoming disabled, and they may go to great lengths to find true love or to increase their chances of winning the lottery. In many cases, people strive to attain or avoid these outcomes because of the outcomes’ presumed effect on happiness. But do these major life events really affect long-term levels of subjective well-being (SWB)? Dominant models of SWB suggest that after experiencing major life events, people inevitably adapt. More specifically, set-point theorists posit that inborn personality factors cause an inevitable return to genetically determined happiness set points. However, recent evidence from large-scale longitudinal studies challenges some of the stronger conclusions from these models. ADAPTATION RESEARCH AND THEORY Although the thought that levels of happiness cannot change may distress some people, researchers believe that adaptation processes serve important functions (Frederick & Loewenstein, 1999). For one thing, these processes protect people from potentially dangerous psychological and physiological consequences of prolonged emotional states. In addition, because adaptation processes allow unchanging stimuli to fade into the attentional background, these processes ensure that change in the environment receives extra attention. Attention to environmental change is advantageous because threats that have existed for prolonged periods of time are likely to be less dangerous than novel threats. Similarly, because rewards that have persisted are less likely to disappear quickly than are novel rewards, it will often be advantageous to attend and react more strongly to these novel rewards. Finally, by reducing emotional reactions over time, adaptation processes allow individuals to disengage from goals that have little chance of success. Thus, adaptation can be beneficial, and some amount of adaptation to life circumstances surely occurs. Yet many questions about the strength and ubiquity of adaptation effects remain, partly because of the types of evidence that have been used to support adaptation theories. In many cases, adaptation is not directly observed. Instead, it must be inferred from indirect evidence. For instance, psychologists often cite the low correlation between happiness and life circumstances as evidence for adaptation effects. Factors such as income, age, health, marital status, and number of friends account for only a small percentage of the variance in SWB (Diener, Suh, Lucas, & Smith, 1999).
One explanation that has been offered for this counterintuitive finding is that these factors initially have an impact but that people adapt over time. However, the weak associations between life circumstances and SWB themselves provide only suggestive evidence for this explanation. Additional indirect support for the set-point model comes from research that takes a personality perspective on SWB. Three pieces of evidence are relevant (Lucas, in press-b). First, SWB exhibits moderate stability even over very long periods of time and even in the face of changing life circumstances. Recent reviews suggest that approximately 30 to 40% of the variance in life-satisfaction measures is stable over periods as long as 20 years. Second, a number of studies have shown that well-being variables are about 40 to 50% heritable. These heritability estimates appear to be even higher (about 80%) for long-term levels of happiness (Lykken & Tellegen, 1996). Finally, personality variables like extroversion and neuroticism are relatively strong predictors of happiness, at least when compared to the predictive power of external factors. The explanation for this set of findings is that events can influence short-term levels of happiness, but personality-based adaptation processes inevitably move people back to their genetically determined set point after a relatively short period of time. More direct evidence for hedonic adaptation comes from studies that examine the well-being of individuals who have experienced important life events. However, even these studies can be somewhat equivocal. For instance, one of the most famous studies is that of Brickman, Coates, and Janoff-Bulman (1978) comparing lottery winners and patients with spinal-cord injuries to people in a control group. Brickman et al. showed that lottery winners were not significantly happier than the control-group participants and that individuals with spinal-cord injuries “did not appear nearly as unhappy as might be expected” (p. 921). This study appears to show adaptation to even the most extreme events imaginable. What is often not mentioned, however, is that although the participants with spinal-cord injuries were above neutral on the happiness scale (which is what led Brickman et al. to conclude that they were happier than might be expected), they were significantly less happy than the people in the control group, and the difference between the groups was actually quite large. Individuals with spinal-cord injuries were more than three quarters of a standard deviation below the mean of the control group. This means that the average participant from the control group was happier than approximately 78% of participants with spinal-cord injuries. This result has now been replicated quite often—most existing studies show relatively large differences between individuals with spinal-cord injuries and healthy participants in control groups (Dijkers, 1997). In addition to problems that result from the interpretation of effect sizes, methodological limitations restrict the conclusions that can be drawn from many existing studies of adaptation.
Most studies are not longitudinal, and even fewer are prospective (though there are some notable exceptions; see e.g., Bonanno, 2004; Caspi et al., 2003). Because participants’ pre-event levels of SWB are not known, it is always possible that individuals who experienced an event were more or less happy than average before the event occurred. Certain people may be predisposed to experience life events, and these predisposing factors may be responsible for their happiness levels being lower than average. For instance, in a review of the literature examining the well-being of children who had lost limbs from various causes, Tyc (1992) suggested that those who lost limbs due to accidents tended to have higher levels of premorbid psychological disorders than did those who lost limbs due to disease. Thus, simply comparing the well-being of children who lost limbs to those who did not might overestimate the effect of the injury. Psychologists have demonstrated that level of happiness predicts the occurrence of a variety of events and outcomes (Lyubomirsky, King, & Diener, 2005), and therefore, studies that compare individuals who have experienced a particular event with those who have not but that do not take into account previous happiness level must be interpreted cautiously. A second methodological concern relates to what are known as demand characteristics. When researchers recruit participants specifically because they have experienced a given life event, participants may over- or underreport SWB. These reports may occur because people believe the life event should have an impact, because they want to appear well-adjusted, or simply because the context of the study makes the event more salient. For instance, Smith, Schwarz, Roberts, and Ubel (2006) showed that patients with Parkinson’s disease reported lower life satisfaction when the study instructions indicated that Parkinson’s disease was a focus than when the instructions indicated that the study focused on the general population. USING LARGE-SCALE PANEL STUDIES TO ASSESS ADAPTATION TO LIFE EVENTS Recently, my colleagues and I have turned to archival data analysis using large, nationally representative panel studies to address questions about adaptation to life events. These studies have a number of advantages over alternative designs. First, they are prospective, which means that pre-event levels of SWB are known. Second, they are longitudinal, which means that change over time can be accurately modeled. Third, very large samples are often involved, which means that even rare events are sampled. Finally, because designers of these studies often recruit nationally representative samples, and because the questionnaires often focus on a variety of issues, demand characteristics are unlikely to have much of an effect. We have used two such panel studies—the German Socioeconomic Panel Study (GSOEP) and the British Household Panel Study (BHPS)—to examine the amount of adaptation that occurs following major life events. The GSOEP includes almost 40,000 individuals living in Germany.
9312a805f90f0858ae421a8472dc794fe8f1cf03
Comparison of perioperative outcomes between robotic and laparoscopic partial nephrectomy: a systematic review and meta-analysis.
CONTEXT Use of robotic partial nephrectomy (RPN) is rapidly increasing; however, the benefit of RPN over laparoscopic partial nephrectomy (LPN) is controversial. OBJECTIVE To compare perioperative outcomes of RPN and LPN. EVIDENCE ACQUISITION We searched Ovid-Medline, Ovid-Embase, the Cochrane Library, KoreaMed, KMbase, KISS, RISS, and KisTi from their inception through August 2013. Two independent reviewers extracted data using a standardized form. Quality of the selected studies was assessed using the methodological index for nonrandomized studies. EVIDENCE SYNTHESIS A total of 23 studies and 2240 patients were included. All studies were cohort studies with no randomization, and the methodological quality varied. There was no significant difference between the two groups regarding complications of Clavien-Dindo classification grades 1-2 (p=0.62), Clavien-Dindo classification grades 3-5 (p=0.78), change of serum creatinine (p=0.65), operative time (p=0.35), estimated blood loss (p=0.76), and positive margins (p=0.75). The RPN group had a significantly lower rate of conversion to open surgery (p=0.02) and conversion to radical surgery (p=0.0006), shorter warm ischemia time (WIT; p=0.005), smaller change of estimated glomerular filtration rate (eGFR; p=0.03), and shorter length of stay (LOS; p=0.004). CONCLUSIONS This meta-analysis shows that RPN is associated with more favorable results than LPN in conversion rate to open or radical surgery, WIT, change of eGFR, and LOS. To establish the safety and effectiveness outcomes of robotic surgery, well-designed randomized clinical studies with long-term follow-up are needed. PATIENT SUMMARY Robotic partial nephrectomy (PN) is more favorable than laparoscopic PN in terms of lower conversion rate to radical nephrectomy, favorable renal function as indexed by estimated glomerular filtration rate, shorter length of hospital stay, and shorter warm ischemia time.
6cec70991312db3341b597892f79d218a5b2b047
Bonding-wire triangular spiral inductor for on-chip switching power converters
This work presents the first design and modelling of bonding-wire-based triangular spiral inductors, targeting their application to on-chip switching power converters. It is demonstrated that, compared to other polygonal shapes, the equilateral triangular shape best balances inductive density and total equivalent series resistance (ESR). A design procedure is then presented to optimize the inductor design in terms of ESR and occupied-area reduction. Finally, finite-element simulation results of an optimized design (27 nH, 1 Ω) are presented to validate the proposed expressions.
d31798506874705f900e72203515abfaa9278409
Off-line recognition of realistic Chinese handwriting using segmentation-free strategy
1bfa7d524c649bd81ef5bf0b01e4524d28c6895e
Formal Analysis of Enhanced Authorization in the TPM 2.0
The Trusted Platform Module (TPM) is a system component that provides a hardware-based approach to establishing trust in a platform through protected storage, robust platform integrity measurement, secure platform attestation, and other secure functionalities. Access to TPM commands and TPM-resident key objects is protected via an authorization mechanism. Enhanced Authorization (EA) is a new mechanism introduced in TPM 2.0 to provide a rich authorization model for specifying flexible access control policies for TPM-resident objects. In this paper, we conduct a formal verification of the EA mechanism. First, we propose a model of the TPM 2.0 EA mechanism in a variant of the applied pi calculus. Second, we identify and formalize the security properties of the EA mechanism (Prop. 1 and 2) in its design. We also point out a misuse problem that is easily neglected (Lemma 7). Third, using the SAPIC tool and the tamarin prover, we have verified both security properties. Meanwhile, we have found 3 misuse cases, one of which leads to an attack on the application in [12].
d37e6593c2b14e319d7e4a8c18c8ef9f4e3ef168
The Origin of Cultural Differences in Cognition: The Social Orientation Hypothesis
A large body of research documents cognitive differences between Westerners and East Asians. Westerners tend to be more analytic and East Asians tend to be more holistic. These findings have often been explained as being due to corresponding differences in social orientation. Westerners are more independent and Easterners are more interdependent. However, comparisons of the cognitive tendencies of Westerners and East Asians do not allow us to rule out alternative explanations for the cognitive differences, such as linguistic and genetic differences, as well as cultural differences other than social orientation. In this review we summarize recent developments that provide stronger support for the social-orientation hypothesis. Keywords: culture; cross-cultural differences; within-culture differences; reasoning; independence/interdependence; holistic/analytic cognition. Cultural psychologists have consistently found different patterns of thinking and perception in different societies, with some cultures demonstrating a more analytic pattern and others a more holistic pattern (see Table 1). Analytic cognition is characterized by taxonomic and rule-based categorization of objects, a narrow focus in visual attention, dispositional bias in causal attribution, and the use of formal logic in reasoning. In contrast, holistic cognition is characterized by thematic and family-resemblance-based categorization of objects, a focus on contextual information and relationships in visual attention, an emphasis on situational causes in attribution, and dialecticism (Nisbett, Peng, Choi, & Norenzayan, 2001). What unites the elements of the analytic style is a tendency to focus on a single dimension or aspect—whether in categorizing objects or evaluating arguments—and a tendency to disentangle phenomena from the contexts in which they are embedded—for example, focusing on the individual as a causal agent or attending to focal objects in visual scenes. What unites the elements of the holistic style is a broad attention to context and relationships in visual attention, categorizing objects, and explaining social behavior. Cultures also differ in their social orientations (independence vs. interdependence) (see Table 2). Cultures that endorse and afford independent social orientation tend to emphasize self-direction, autonomy, and self-expression. Cultures that endorse and afford interdependent social orientation tend to emphasize harmony, relatedness, and connection. Independently oriented cultures tend to view the self as bounded and separate from social others, whereas interdependently oriented cultures tend to view the self as interconnected and as encompassing important relationships (e.g. Markus & Kitayama, 1991; Triandis, 1989). In independently oriented cultural contexts, happiness is most often experienced as a socially disengaging emotion (i.e. pride), whereas in interdependently oriented cultural contexts, happiness is most often experienced as a socially engaging emotion (i.e. sense of closeness to others; Kitayama, Mesquita, & Karasawa, 2006). Finally, in cultures that have an independent social orientation, people are more motivated to symbolically enhance the self at the expense of others; this tendency is not as common in interdependently oriented cultures (Kitayama, Ishii, Imada, Takemura, & Ramaswamy, 2006; Kitayama, Mesquita, & Karasawa, 2006).
The proposition that cultures differing in their social orientation (independence vs. interdependence) also differ in their cognitive habits (analytic vs. holistic cognition) is by no means new (e.g. Markus & Kitayama, 1991; Witkin & Berry, 1975). Indeed one can trace the origin of this claim back at least to Tönnies (2002). And certainly a large body of literature has demonstrated that cultures which differ in social orientation also show corresponding differences in cognitive style; Western societies tend to be more independent and more analytic, while East Asian societies tend to be more interdependent and holistic (Nisbett et al., 2001). On the basis of such evidence, it has been proposed that differences in social orientation are the driving force behind cultural differences in cognition (Markus & Kitayama, 1991; Nisbett et al., 2001). While the link between social orientation and cognitive style has been widely accepted, the evidence presented until recently has not provided strong support for this connection. East Asia and the West are huge geographic and cultural areas differing from one another in many ways. There are fairly large genetic differences between the two populations. The linguistic differences are large. Western languages are almost all Indo-European in origin and differ in many systematic ways from the major languages of East Asia. And there are many large cultural differences between the two regions other than in social orientation along lines of independence and interdependence. East Asia was heavily influenced by Confucian values and ways of thought and European cultures were heavily influenced by ancient Greek, specifically Aristotelian, values and ways of thought (Lloyd, 1996). Just within this broad set of cultural differences it would be possible to find many hypotheses that might account for the kind of cognitive differences that have been observed between East and West. Examples of other large societal differences between East and West have to do with the length of time that the respective societies have been industrialized and the degree to which political institutions in these societies have a tradition of being democratic. Both of these latter dimensions are frequently invoked to account for a host of differences between East and West. In the present review, we focus on recent studies that narrow the plausible range of candidates for explaining the cognitive differences. These studies look at much tighter cultural comparisons than those found in previous research. These studies compare Eastern and Western Europe, Europe with the United States, northern and southern Italy, Hokkaido and mainland Japan, adjacent villages in Turkey, and middle-class and working-class Americans. All of these comparisons involve contrasting more interdependent cultures with more independent cultures. We also review research that manipulates independence vs. interdependence and finds differences in analytic vs. holistic cognition. The recent studies make it much less likely that the cognitive differences observed between East and West are due to large genetic or linguistic differences and make it more plausible that the cognitive differences are indeed due to differences in social orientation having to do with independence vs. interdependence rather than to societal differences such as Aristotelian vs. Confucian intellectual traditions or degree of industrialization.
CROSS-CULTURAL COMPARISONS Several recent studies have shown that the covariation between social orientation and cognitive style is not confined to North America and East Asia. Even within societies that are part of the European cultural tradition, one observes that cultures differing in social orientation also differ in terms of cognitive style. For example, East Europeans and Americans differ along these dimensions. Russians are more interdependent than Americans (Grossmann, 2009; Matsumoto, Takeuchi, Andayani, Kouznetsova, & Krupp, 1998) and are more holistic in terms of categorization, attribution, visual attention, and reasoning about change (Grossmann, 2009). Similarly, Croats are more interdependent than Americans (Šverko, 1995) and show more holistic patterns of cognition in terms of categorization and visual attention (Varnum, Grossmann, Katunar, Nisbett, & Kitayama, 2008). Recent evidence suggests that similar differences exist within Europe. Russians, who are more interdependent than Germans (Naumov, 1996), also show more contextual patterns of visual attention (Medzheritskaya, 2008). WITHIN-CULTURE DIFFERENCES The fact that social orientation and cognitive style covary in comparisons across and within broad cultural regions does not fully address alternative explanations for this pattern. Cross-cultural differences in cognition might conceivably be accounted for by differences in linguistics, genetics, and degree and recency of industrialization and democratization. However, studies comparing groups within the same culture tend to argue against such interpretations. In a recent study comparing Hokkaido Japanese with those from mainland Japan, Kitayama and colleagues (Kitayama, Ishii, et al., 2006) found that those from Hokkaido (settled by pioneers from the southern Japanese islands) were more independent than those from the main islands and also showed more dispositional bias in attribution. Similarly, Northern Italians, who are more independent than Southern Italians (Martella & Maass, 2000), also show more analytic cognitive habits, categorizing objects in a more taxonomic fashion (Knight & Nisbett, 2007). Even more fine-grained comparisons have found that, within a culture, groups differing in social orientation also differ in cognitive style. For example, Uskul and colleagues compared neighboring villages in the Black Sea region of Turkey that differed in terms of their primary economic activity (Uskul, Kitayama, & Nisbett, 2008). Previous research has found that more sedentary communities (such as farming communities and cooperative fishing communities) tend to be characterized by a more interdependent social orientation and holistic cognition (specifically field dependence, or the tendency to have difficulty separating objects from their contexts; Berry, 1966; Witkin & Berry, 1975). Less sedentary communities…
cd426e8e7c356d5c31ac786749ac474d8e583937
Application of Data Mining Techniques in IoT: A Short Review
The Internet of Things (IoT) has been growing rapidly due to recent advancements in communications and sensor technologies. Interfacing every object through the internet looks very difficult, but in time the Internet of Things will drastically change our lives. The enormous data captured by the IoT are considered to have high business as well as social value, and various data mining algorithms can be applied to IoT data to extract hidden information from the raw data. In this paper, we present a systematic review of various data mining models as well as their applications in the IoT field, along with their merits and demerits. Finally, we discuss challenges in IoT.
2dd2c7602d7f4a0b78494ac23ee1e28ff489be88
Large scale metric learning from equivalence constraints
In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often, rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, considering the constantly growing amount of data, it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in the form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods, we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitude faster than comparable methods. Results on a variety of challenging benchmarks of rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching of previously unseen object instances, and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.
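A NumPy sketch of the statistical-inference idea as I understand it: estimate covariance matrices of pairwise differences for similar and dissimilar pairs, then take the difference of their inverses as the Mahalanobis matrix, a single closed-form step with no iterative optimization. In practice features are usually first reduced (e.g., by PCA) so the covariances are invertible, and the matrix may additionally be projected onto the positive semidefinite cone; neither step is shown here.

# Hedged sketch: closed-form Mahalanobis metric from equivalence constraints.
import numpy as np

def learn_metric(X, pairs_similar, pairs_dissimilar):
    """X: (n, d) feature matrix; pairs_*: lists of (i, j) index pairs."""
    def diff_cov(pairs):
        D = np.array([X[i] - X[j] for i, j in pairs])
        return D.T @ D / len(pairs)          # covariance of pairwise differences
    return (np.linalg.inv(diff_cov(pairs_similar))
            - np.linalg.inv(diff_cov(pairs_dissimilar)))

def mahalanobis2(x, y, M):
    d = x - y
    return float(d @ M @ d)                  # squared distance under the metric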
6d96f946aaabc734af7fe3fc4454cf8547fcd5ed
The AR face database
db5aa767a0e8ceb09e7202f708e15a37bbc7ca01
Universal approximation using incremental constructive feedforward networks with random hidden nodes
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves with an incremental constructive method that, in order to let SLFNs work as universal approximators, one may simply randomly choose hidden nodes and then only need to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R --> R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R --> R whose integral over R is not equal to 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a new network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.
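A minimal NumPy sketch of the incremental construction the paper analyzes: hidden nodes are added one at a time with random, never-tuned parameters, and only each new node's output weight is fitted against the current residual. The node count and activation below are illustrative choices, not prescribed by the paper.

# Hedged sketch: incremental constructive SLFN with random hidden nodes.
import numpy as np

def incremental_slfn(X, y, n_nodes=50, activation=np.tanh,
                     rng=np.random.default_rng(0)):
    n, d = X.shape
    e = y.astype(float)                 # residual starts as the target
    nodes = []
    for _ in range(n_nodes):
        w, b = rng.standard_normal(d), rng.standard_normal()  # random, fixed
        h = activation(X @ w + b)       # new hidden node's output on the data
        beta = (e @ h) / (h @ h)        # only the output weight is fitted
        e = e - beta * h                # residual shrinks as nodes are added
        nodes.append((w, b, beta))
    return nodes

def predict(nodes, X, activation=np.tanh):
    return sum(beta * activation(X @ w + b) for w, b, beta in nodes)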
0e3fcfe63b7b6620e3c47e9751fe3456e85cc52f
Robust Discriminative Response Map Fitting with Constrained Local Models
We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters, and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.
451f72230e607cb59d60f996299c578623a19294
Permission Re-Delegation: Attacks and Defenses
Modern browsers and smartphone operating systems treat applications as mutually untrusting, potentially malicious principals. Applications are (1) isolated except for explicit IPC or inter-application communication channels and (2) unprivileged by default, requiring user permission for additional privileges. Although inter-application communication supports useful collaboration, it also introduces the risk of permission re-delegation. Permission re-delegation occurs when an application with permissions performs a privileged task for an application without permissions. This undermines the requirement that the user approve each application’s access to privileged devices and data. We discuss permission re-delegation and demonstrate its risk by launching real-world attacks on Android system applications; several of the vulnerabilities have been confirmed as bugs. We discuss possible ways to address permission re-delegation and present IPC Inspection, a new OS mechanism for defending against permission re-delegation. IPC Inspection prevents opportunities for permission re-delegation by reducing an application’s permissions after it receives communication from a less privileged application. We have implemented IPC Inspection for a browser and Android, and we show that it prevents the attacks we found in the Android system applications.
97aa698a422d037ed322ef093371d424244cb131
Spatio-temporal proximity and social distance: a confirmation framework for social reporting
Social reporting is based on the idea that the members of a location-based social network observe real-world events and publish reports about their observations. Application scenarios include crisis management, bird watching or even some sorts of mobile games. A major issue in social reporting is the quality of the reports. We propose an approach to the quality problem that is based on the reciprocal confirmation of reports by other reports. This contrasts with approaches that require users to verify reports, that is, to explicitly evaluate their veridicality. We propose to use spatio-temporal proximity as a first criterion for confirmation and social distance as a second one. By combining these two measures we construct a graph containing the reports as nodes connected by confirmation edges that can take positive as well as negative values. This graph builds the basis for the computation of confirmation values for individual reports by different aggregation measures. By applying our approach to two use cases, we show the importance of a weighted combination, since the meaningfulness of the constituent measures varies between different contexts.
e070de33e302b7e8270c3ef12ff5a47f5f700194
Modeling and Verification of a Six-Phase Interior Permanent Magnet Synchronous Motor
In this paper, a new mathematical model for a six-phase interior permanent magnet synchronous motor (IPMSM) is presented. The proposed model utilizes two synchronous reference frames. First, the flux model in the abcxyz frame is mapped into the stationary dq frames and then to two synchronous rotating frames. Then, by differentiating the flux models, voltage equations are derived in the rotating frames. Through this analysis, the interaction between the abc and xyz subsystems is properly described by a coupling matrix. The torque equation is also derived using the two reference current variables. The flux model was verified through FEM analysis. Experiments were done using a 100 kW six-phase IPMSM in a dynamo system. The validity of the torque equation was checked with experimental results under a shorted condition on an xyz subsystem.
f84070f5ecd2d9be81e09e5a3699a525382309e3
Autonomous exploration of motor skills by skill babbling
Autonomous exploration of motor skills is a key capability of learning robotic systems. Learning motor skills can be formulated as inverse modeling problem, which targets at finding an inverse model that maps desired outcomes in some task space, e.g., via points of a motion, to appropriate actions, e.g., motion control policy parameters. In this paper, autonomous exploration of motor skills is achieved by incrementally learning inverse models starting from an initial demonstration. The algorithm is referred to as skill babbling, features sample-efficient learning, and scales to high-dimensional action spaces. Skill babbling extends ideas of goal-directed exploration, which organizes exploration in the space of goals. The proposed approach provides a modular framework for autonomous skill exploration by separating the learning of the inverse model from the exploration mechanism and a model of achievable targets, i.e. the workspace. The effectiveness of skill babbling is demonstrated for a range of motor tasks comprising the autonomous bootstrapping of inverse kinematics and parameterized motion primitives.
f1e2d4d8c7ca6e2b2a25f935501031a4ce3e9912
NestedNet: Learning Nested Sparse Structures in Deep Neural Networks
Recently, there have been increasing demands to construct compact deep architectures to remove unnecessary redundancy and to improve the inference speed. While many recent works focus on reducing the redundancy by eliminating unneeded weight parameters, it is not possible to apply a single deep network for multiple devices with different resources. When a new device or circumstantial condition requires a new deep architecture, it is necessary to construct and train a new network from scratch. In this work, we propose a novel deep learning framework, called a nested sparse network, which exploits an n-in-1-type nested structure in a neural network. A nested sparse network consists of multiple levels of networks with a different sparsity ratio associated with each level, and higher level networks share parameters with lower level networks to enable stable nested learning. The proposed framework realizes a resource-aware versatile architecture as the same network can meet diverse resource requirements, i.e., anytime property. Moreover, the proposed nested network can learn different forms of knowledge in its internal networks at different levels, enabling multiple tasks using a single network, such as coarse-to-fine hierarchical classification. In order to train the proposed nested network, we propose efficient weight connection learning and channel and layer scheduling strategies. We evaluate our network in multiple tasks, including adaptive deep compression, knowledge distillation, and learning class hierarchy, and demonstrate that nested sparse networks perform competitively, but more efficiently, compared to existing methods.
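A toy NumPy sketch of the nesting idea: each level evaluates a growing prefix of one shared weight matrix, so every lower-level parameter is reused by all higher levels. The widths and shapes are illustrative, and real NestedNet training (weight connection learning, channel and layer scheduling) is not shown.

# Toy sketch of an n-in-1 nested layer: level l = prefix sub-network.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784))   # full (highest-level) weight matrix
widths = [64, 128, 256]               # nested sub-network widths per level

def forward(x, level):
    k = widths[level]
    return np.maximum(0.0, W[:k] @ x) # level l uses only the first k units (ReLU)

x = rng.standard_normal(784)
# lower levels are exact prefixes of higher levels (shared parameters)
assert np.allclose(forward(x, 0), forward(x, 2)[:64])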
838a8c607a993b2448636e2a89262eb3490dbdb4
Marketing actions can modulate neural representations of experienced pleasantness.
Despite the importance and pervasiveness of marketing, almost nothing is known about the neural mechanisms through which it affects decisions made by individuals. We propose that marketing actions, such as changes in the price of a product, can affect neural representations of experienced pleasantness. We tested this hypothesis by scanning human subjects using functional MRI while they tasted wines that, contrary to reality, they believed to be different and sold at different prices. Our results show that increasing the price of a wine increases subjective reports of flavor pleasantness as well as blood-oxygen-level-dependent activity in medial orbitofrontal cortex, an area that is widely thought to encode for experienced pleasantness during experiential tasks. The paper provides evidence for the ability of marketing actions to modulate neural correlates of experienced pleasantness and for the mechanisms through which the effect operates.
1c26786513a0844c3a547118167452bed17abf5d
Automatic Transliteration of Proper Nouns from Arabic to English
After providing a brief introduction to the transliteration problem, and highlighting some issues specific to Arabic-to-English transliteration, a three-phase algorithm is introduced as a computational solution to the problem. The algorithm is based on a Hidden Markov Model approach, but also leverages information available in on-line databases. The algorithm is then evaluated and shown to achieve accuracy approaching 80%.
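Since the abstract names a Hidden Markov Model approach, a generic Viterbi decoding sketch may help fix ideas; the toy states, probabilities, and character pairs below are invented for illustration and do not reproduce the paper's three-phase algorithm:

```python
import numpy as np

# Toy HMM: hidden states are candidate English letters, observations are
# Arabic characters. All probabilities here are made up for illustration.
states = ["k", "q"]
trans = np.log(np.array([[0.7, 0.3],          # P(state_t | state_{t-1})
                         [0.4, 0.6]]))
emit = {"ق": np.log(np.array([0.2, 0.8])),    # P(Arabic char | English letter)
        "ك": np.log(np.array([0.9, 0.1]))}

def viterbi(obs):
    v = emit[obs[0]].copy()                   # best log-prob ending in each state
    back = []
    for ch in obs[1:]:
        scores = v[:, None] + trans + emit[ch][None, :]
        back.append(scores.argmax(axis=0))    # backpointers per current state
        v = scores.max(axis=0)
    path = [int(v.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(["ك", "ق"]))  # -> ['k', 'q'] under these toy probabilities
```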
886bc30c4709535031a36b390bf5ad8dbca2a916
A glasses-type wearable device for monitoring the patterns of food intake and facial activity
Here we present a new method for automatic and objective monitoring of ingestive behaviors, in comparison with other facial activities, through load cells embedded in a pair of glasses named GlasSense. Mastication typically involves a cyclic movement of the temporomandibular joint, activated by subtle contraction and relaxation of the temporalis muscle. However, such muscular signals are, in general, too weak to sense without amplification or electromyographic analysis. To detect these oscillatory facial signals without any obtrusive device, we incorporated a load cell into each hinge, which serves as a lever mechanism, on both sides of the glasses. Thus, the load cells can measure the force amplified mechanically by the hinges. We demonstrated a proof-of-concept validation of the amplification by differentiating the force signals between the hinge and the temple. Pattern recognition was applied to extract statistical features and classify behavioral patterns such as natural head movement, chewing, talking, and winking. The overall results showed that the average F1 score of the classification was about 94.0% and the accuracy was above 89%. We believe this approach will be helpful for designing a non-intrusive and unobtrusive eyewear-based ingestive behavior monitoring system.
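The classification step described above can be pictured with a small, hedged sketch: windowed statistical features over a synthetic load-cell channel fed to an off-the-shelf classifier. The feature set, window length, and classifier choice are assumptions, not GlasSense's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=100):
    """Simple statistical features per fixed-length window of one channel."""
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.max(1) - windows.min(1),
                            np.abs(np.diff(windows, axis=1)).mean(1)])

rng = np.random.default_rng(0)
chew = np.sin(np.linspace(0, 200, 5000)) + rng.normal(0, 0.2, 5000)  # cyclic toy
talk = rng.normal(0, 0.5, 5000)                                      # noisy toy
X = np.vstack([window_features(chew), window_features(talk)])
y = np.array([1] * 50 + [0] * 50)  # 1 = chewing, 0 = talking
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```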
20f2f3775df4c2f93188311d8d66d6dd8a308c43
A survey of dynamic replication and replica selection strategies based on data mining techniques in data grids
Mining grid data is an interesting research field which aims at analyzing grid systems with data mining techniques in order to efficiently discover new, meaningful knowledge that enhances grid management. In this paper, we focus particularly on how extracted knowledge enables enhancing data replication and replica selection strategies, which are important data management techniques commonly used in data grids. Indeed, relevant knowledge such as file access patterns, file correlations, user or job access behavior, prediction of future behavior or network performance, and so on, can be efficiently discovered. These findings are then used to enhance both data replication and replica selection strategies. Various works in this respect are discussed along with their merits and demerits. In addition, we propose a new guideline for applying data mining in the context of data replication and replica selection strategies.
e9302c3fee03abb5dd6e134118207272c1dcf303
Neural embedding-based indices for semantic search
Traditional information retrieval techniques that primarily rely on keyword-based linking of the query and document spaces face challenges such as the vocabulary mismatch problem, where documents relevant to a given query might not be retrieved simply because they use different terminology to describe the same concepts. Semantic search techniques aim to address such limitations of keyword-based retrieval models by incorporating semantic information from standard knowledge bases such as Freebase and DBpedia. The literature has already shown that while the sole consideration of semantic information might not improve retrieval performance over keyword-based search, it enables the retrieval of relevant documents that cannot be retrieved by keyword-based methods. As such, building indices that store and provide access to semantic information during the retrieval process is important. While the process for building and querying keyword-based indices is quite well understood, the incorporation of semantic information within search indices is still an open challenge. Existing work has proposed either building one unified index encompassing both textual and semantic information or building separate yet integrated indices for each information type, but both approaches face limitations such as increased query processing time. In this paper, we propose to use neural embedding-based representations of terms, semantic entities, semantic types, and documents within the same embedding space to facilitate the development of a unified search index covering these four information types. We perform experiments on standard and widely used document collections, including Clueweb09-B and Robust04, to evaluate our proposed indexing strategy from both effectiveness and efficiency perspectives. We find that when neural embeddings are used to build inverted indices, relaxing the requirement to explicitly observe the posting-list key in the indexed document, (a) retrieval efficiency increases compared to a standard inverted index, reducing index size and query processing time, and (b) retrieval effectiveness remains competitive with the baseline, retrieving a reasonable number of relevant documents from the indexed corpus.
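One way to picture the central idea, that a posting-list key need not occur verbatim in the indexed document, is to index each document under embedding-space neighbours of its terms. The sketch below uses random stand-in vectors and an invented four-word vocabulary; it is not the paper's indexing algorithm:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "engine", "banana"]
emb = {w: rng.normal(size=8) for w in vocab}   # pretrained embeddings in practice

def neighbours(word, k=1):
    """k nearest vocabulary terms by cosine similarity."""
    sims = {w: emb[word] @ emb[w] / (np.linalg.norm(emb[word]) *
            np.linalg.norm(emb[w])) for w in vocab if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:k]

index = defaultdict(set)
docs = {0: ["car", "engine"], 1: ["banana"]}
for doc_id, terms in docs.items():
    for t in terms:
        index[t].add(doc_id)                   # standard posting
        for nb in neighbours(t):
            index[nb].add(doc_id)              # embedding-expanded posting
print(dict(index))
```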
9133753f7f1c5bddc85e5435478b10f04ae37ac3
Visualizing RFM Segmentation
Segmentation based on RFM (Recency, Frequency, and Monetary) has been used for over 50 years by direct marketers to target a subset of their customers, save mailing costs, and improve profits. RFM analysis is commonly performed using the Arthur Hughes method, which bins each of the three RFM attributes independently into five equal-frequency bins. The resulting 125 cells are depicted in a tabular format or as bar graphs and analyzed by marketers, who determine the best cells (customer segments) to target. We propose an interactive visualization of RFM that helps marketers visualize and quickly identify important customer segments. Additionally, we show an integrated filtering approach that allows marketers to interactively explore the RFM segments in relation to other customer attributes, such as behavioral or demographic ones, to identify interesting subsegments in the customer base. We demonstrate these RFM visualizations on two large real-world data sets and discuss how customers have used them in practice to glean interesting insights from their data. Given the widespread use of RFM as a critical, and often the only, segmentation tool, we believe that the proposed intuitive and interactive visualization will provide significant business value.
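The Arthur Hughes binning described above is straightforward to reproduce; a minimal sketch follows, where the column names are assumptions about the input data and ranking is used to break ties in the equal-frequency bins:

```python
import pandas as pd

# Each attribute is split independently into five equal-frequency bins,
# giving 5 x 5 x 5 = 125 cells. Column names are assumed.
df = pd.DataFrame({
    "recency_days": [5, 40, 200, 10, 90, 365, 2, 30, 150, 700],
    "frequency":    [12, 3, 1, 8, 2, 1, 20, 5, 2, 1],
    "monetary":     [500, 80, 20, 300, 60, 15, 900, 120, 40, 10],
})
# labels 5..1 for recency (more recent = better), 1..5 for the others
df["R"] = pd.qcut(df["recency_days"], 5, labels=[5, 4, 3, 2, 1])
df["F"] = pd.qcut(df["frequency"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
df["M"] = pd.qcut(df["monetary"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
df["cell"] = df["R"].astype(str) + df["F"].astype(str) + df["M"].astype(str)
print(df[["R", "F", "M", "cell"]])
```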
cbdc32f6bc16cc8271dbba13cc7d6338b2be3d38
Prognostics and Health Management of Industrial Equipment
Prognostics and health management (PHM) is a field of research and application which aims at making use of past, present, and future information on the environmental, operational, and usage conditions of equipment in order to detect its degradation, diagnose its faults, and predict and proactively manage its failures. The present paper reviews the state of knowledge on methods for PHM, placing these in context with the different information and data which may be available for performing the task, and identifying the current challenges and open issues which must be addressed for achieving reliable deployment in practice. The focus is predominantly on the prognostic part of PHM, which addresses the prediction of equipment failure occurrence and the associated residual useful life (RUL).
d48a5454562adfdef47f3ec2e6fdef3ddaf317cb
Constraint-based sequential pattern mining: the pattern-growth methods
Constraints are essential for many sequential pattern mining applications. However, there is no systematic study on constraint-based sequential pattern mining. In this paper, we investigate this issue and point out that the framework developed for constrained frequent-pattern mining does not fit our mission well. An extended framework is developed based on a sequential pattern growth methodology. Our study shows that constraints can be effectively and efficiently pushed deep into the sequential pattern mining under this new framework. Moreover, this framework can be extended to constraint-based structured pattern mining as well.
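As a toy illustration of pushing a constraint deep into pattern growth, the PrefixSpan-style sketch below prunes with an anti-monotone maximum-length constraint inside the recursion; the paper's framework covers a much broader class of constraints:

```python
# Minimal pattern-growth sketch with one constraint pushed into the recursion.
def prefix_span(db, min_sup=2, max_len=3, prefix=()):
    results = []
    if len(prefix) >= max_len:        # constraint pruning inside the growth
        return results
    counts = {}
    for seq in db:                    # support counts in the projected database
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in sorted(counts.items()):
        if sup < min_sup:
            continue
        pattern = prefix + (item,)
        results.append((pattern, sup))
        # project: suffix after the first occurrence of item in each sequence
        projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
        results += prefix_span([s for s in projected if s],
                               min_sup, max_len, pattern)
    return results

db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "d"]]
print(prefix_span(db))
```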
614a793cb5d8d05fd259bf2832d76018fb31cb35
Bad to the bone: facial structure predicts unethical behaviour.
Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour.
75f52663f803d5253690442dcd4f9995009af601
Impact of social media usage on students academic performance in Saudi Arabia
Social media is a popular method of communication amongst university students in Saudi Arabia. However, excessive social media use raises questions about whether academic performance is affected. This research explores this question through a survey of university students in Saudi Arabia regarding their social media usage and academic performance. The survey also explored which social network is the most popular amongst Saudi students, what students thought about their own social media usage, and which factors besides social media usage negatively affect academic performance. The survey received 108 responses, and descriptive statistics, including scatter plots, were used to examine the relationship between the average number of hours students spent on social media per week and their GPA scores. The results demonstrated no linear relationship between weekly social media usage and GPA score. Students highlighted that, besides social media use, time management is a factor that negatively affects their studies. The findings of the paper can be used to propose effective plans for improving students' academic performance in such a way that a balance between leisure, information exchange, and academic performance can be maintained.
9652745bcecd6f50fb2b8319862bfbf0ea4c0d7a
Patterns of Play: Play-Personas in User-Centred Game Development
In recent years, certain trends from user-centered design have been seeping into the practice of designing computer games. The balance of power between game designers and players is being renegotiated in order to find a more active role for players and provide them with control in shaping the experiences that games are meant to evoke. Growing player agency can translate into an increased sense of player immersion and potentially improve the chances of critical acclaim. This paper presents a possible solution to the challenge of involving the user in the design of interactive entertainment by adopting and adapting the "persona" framework introduced by Alan Cooper in the field of Human-Computer Interaction. The original method is improved by complementing the traditional ethnographic descriptions of personas with parametric, quantitative, data-oriented models of patterns of user behaviour in computer games.
dbc82e5b8b17faec972e1d09c34ec9f9cd1a33ea
Common Consensus : a web-based game for collecting commonsense goals
In our research on commonsense reasoning, we have found that an especially important kind of knowledge is knowledge about human goals. Especially when applying commonsense reasoning to interface agents, we need to recognize goals from user actions (plan recognition) and generate sequences of actions that implement goals (planning). We also often need to answer more general questions about the situations in which goals occur, such as when and where a particular goal might be likely, or how long it is likely to take to achieve. In past work on commonsense knowledge acquisition, users have been directly asked for such information. Recently, however, another approach has emerged: enticing users into playing games where supplying the knowledge is the means to scoring well, thus motivating the players. This approach has been pioneered by Luis von Ahn and his colleagues, who refer to it as Human Computation. Common Consensus is a fun, self-sustaining web-based game that both collects and validates commonsense knowledge about everyday goals. It is based on the structure of the TV game show Family Feud. A small user study showed that users find the game fun, knowledge quality is very good, and the rate of knowledge collection is rapid.
0d635696ef2c768095d9f6378df93241a0e78d16
Collaborative Filtering with Graph Information: Consistency and Scalable Methods
Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets.
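For orientation, a schematic form of such a graph-regularized completion objective is given below; the notation (factors U, V, Laplacians L_r, L_c) is assumed here and is not copied from the paper:

```latex
% Schematic graph-regularized matrix completion objective (notation assumed):
% Omega indexes observed entries of M; the Laplacians L_r, L_c encode the
% known pairwise relationships among rows and columns.
\min_{U,V}\ \sum_{(i,j)\in\Omega}\bigl(M_{ij}-(UV^{\top})_{ij}\bigr)^{2}
  + \lambda_{r}\,\operatorname{tr}\!\bigl(U^{\top}L_{r}U\bigr)
  + \lambda_{c}\,\operatorname{tr}\!\bigl(V^{\top}L_{c}V\bigr)
```

Each trace term penalizes factor differences across graph edges, and the objective is biconvex in U and V, which is what makes conjugate-gradient-based alternating minimization applicable.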
4672f24bf1828452dc367669ab8a29f79834ad58
Collaborative Deep Learning for Recommender Systems
Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendations. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in recommendation performance. To address this sparsity problem, auxiliary information such as item content may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach, tightly coupling two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recent advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art.
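A schematic, CDL-style joint objective may help fix the idea of coupling the two components; the notation below is assumed for illustration and simplifies the full hierarchical Bayesian model:

```latex
% Schematic joint objective (notation assumed): c_ij are confidence weights
% on ratings R, u_i / v_j latent factors, and f_e / f_d the encoder / decoder
% of the autoencoder over item content X_j.
\min\ \sum_{i,j}\frac{c_{ij}}{2}\bigl(R_{ij}-u_i^{\top}v_j\bigr)^{2}
  + \frac{\lambda_u}{2}\sum_i\lVert u_i\rVert^{2}
  + \frac{\lambda_v}{2}\sum_j\bigl\lVert v_j-f_e(X_j)\bigr\rVert^{2}
  + \frac{\lambda_n}{2}\sum_j\bigl\lVert f_d(f_e(X_j))-X_j\bigr\rVert^{2}
```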
4ef807650090b4a18910701d697d038c5ab0bcf0
Social collaborative filtering for cold-start recommendations
We examine the cold-start recommendation task in an online retail setting for users who have not yet purchased (or interacted in a meaningful way with) any available items but who have granted access to limited side information, such as basic demographic data (gender, age, location) or social network information (Facebook friends or page likes). We formalize neighborhood-based methods for cold-start collaborative filtering in a generalized matrix algebra framework that does not require purchase data for target users when their side information is available. In real-data experiments with 30,000 users who purchased 80,000+ books and had 9,000,000+ Facebook friends and 6,000,000+ page likes, we show that using Facebook page likes for cold-start recommendation yields up to a 3-fold improvement in mean average precision (mAP) and up to 6-fold improvements in Precision@k and Recall@k compared to most-popular-item, demographic, and Facebook friend cold-start recommenders. These results demonstrate the substantial predictive power of social network content, and its significant utility in a challenging problem - recommendation for cold-start users.
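A minimal sketch of neighborhood-based cold-start scoring from side information alone follows; the matrix names and the cosine-similarity choice are assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=(6, 10)).astype(float)  # users x page-like indicators
P = rng.integers(0, 2, size=(5, 4)).astype(float)   # warm users x purchased items
s_target = S[5]                                     # cold-start user: likes only

# cosine similarity between the cold user and each warm user via side info
norms = np.linalg.norm(S[:5], axis=1) * np.linalg.norm(s_target) + 1e-9
sim = S[:5] @ s_target / norms
scores = sim @ P                                    # similarity-weighted items
print("recommended item:", int(scores.argmax()))
```

No purchase data for the target user enters the computation, which is exactly the property the generalized matrix-algebra framework formalizes.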
01ba3b2c57f2a1145c219976787480102148669c
Predicting purchase behaviors from social media
In the era of social commerce, users often connect from e-commerce websites to social networking venues such as Facebook and Twitter. However, there have been few efforts to understand the correlations between users' social media profiles and their e-commerce behaviors. This paper presents a system for predicting a user's purchase behavior on e-commerce websites from the user's social media profile. We specifically aim to understand whether the user's profile information in a social network (for example, Facebook) can be leveraged to predict what categories of products the user will buy (for example, eBay Electronics). The paper provides an extensive analysis of how users' Facebook profile information correlates with purchases on eBay, and analyzes the performance of different feature sets and learning algorithms on the task of purchase behavior prediction.
78746473cbf9452cd0d35f7bbbb26b50ef9dc730
Efficient Character Skew Rectification in Scene Text Images
We present an efficient method for character skew rectification in scene text images. The method is based on novel skew estimators, which exploit intuitive glyph properties and can be computed in linear time. The estimators are evaluated on synthetically generated data (including Latin, Cyrillic, Greek, and Runic scripts) and on real scene text images, where skew rectification by the proposed method improves the accuracy of a state-of-the-art scene text recognition pipeline.
65124306996ec4ec68f7b2eb889e93728ec3629e
Why Do Those With Long-Term Substance Use Disorders Stop Abusing Substances? A Qualitative Study
Although a significant proportion of adults recover from substance use disorders (SUDs), little is known about how they reach this turning point or why they stop using. The purpose of the study was to explore the factors that influence reasoning and decision making about quitting substance use after a long-term SUD. Semistructured interviews were conducted with 18 participants, each of whom had been diagnosed with a SUD and had been abstinent for at least 5 years. A resource group of peer consultants in long-term recovery from SUDs contributed to the study's planning, preparation, and initial analyses. Participants recalled harmful consequences and significant events during their years of substance use. Pressure and concern from close family members were important in their initial efforts to abstain from substance use. Being able to imagine a different life, and the awareness of existing treatment options, promoted hope and further reinforced their motivation to quit. Greater focus on why those with SUDs want to quit may help direct treatment matching; treatment completion may be more likely if the person's reasons for seeking help are addressed.
b47812577acbb67c58b432e2f2bc0a5eb091bc61
Play Therapy: Practitioners' Perspectives on Implementation and Effectiveness
The purpose of the present research was to explore practitioners' perspectives on play therapy as an intervention when working with a child who has experienced trauma, presents PTSD symptoms, and has a co-morbid mental health diagnosis. Play therapy has been accepted as an effective intervention for children who have been exposed to trauma (Schaefer, 1994). However, there is currently limited research evaluating play therapy as an intervention with children who have been traumatized and have developed PTSD or other mental health symptoms/disorders. The current study aimed to fill this gap in the existing research. Two agencies that serve early childhood mental health clients agreed to participate by completing an online survey. Data were gathered from 22 practitioner respondents. The results indicate that practitioners believe play therapy is an effective intervention when treating children with trauma histories, PTSD symptoms, and mental health disorders. The results support findings from previous literature regarding play therapy as an intervention for treating trauma and/or mental health disorders. Furthermore, the present research confirms that creating a safe space for clients using play therapy is an important part of the intervention process. Given the gap in research on play therapy as an intervention when PTSD and a co-morbid mental health disorder occur concurrently, further research would benefit the field of social work and positively inform practitioners who work in early intervention settings.
a0650d278aa0f50e2ca59e770782b94ffcdd47ce
A Reliability Perspective of the Smart Grid
Increasing complexity of power grids, growing demand, and requirements for greater reliability, security, and efficiency, as well as environmental and energy-sustainability concerns, continue to highlight the need for a quantum leap in harnessing communication and information technologies. This leap toward a "smarter" grid is widely referred to as the "smart grid." A framework for cohesive integration of these technologies facilitates convergence of acutely needed standards and implementation of necessary analytical capabilities. This paper critically reviews the reliability impacts of major smart grid resources such as renewables, demand response, and storage. We observe that an ideal mix of these resources leads to a flatter net demand that eventually accentuates reliability challenges further. A grid-wide IT architectural framework is presented to meet these challenges while facilitating modern cybersecurity measures. This architecture supports a multitude of geographically and temporally coordinated hierarchical monitoring and control actions over time scales from milliseconds and up.
73aa92ce51fa7107f4c34b5f2e7b45b3694e19ec
An Approach to Generate Topic Similar Document by Seed Extraction-Based SeqGAN Training for Bait Document
In recent years, topic-similar document generation has drawn more and more attention in both academia and industry. In particular, bait document generation is very important for security. To generate realistic bait documents quickly, we propose a topic-similar document generation model based on the SeqGAN model (TSDG-SeqGAN). In the training phase, we use the jieba word segmentation tool on the training text, which greatly reduces training time. In the generation phase, we extract keywords and key sentences from the subject document as seeds and feed them into the trained generation network. We then obtain keyword-based documents and key-sentence-based documents from the generation network, and output the documents most similar to the subject document as the final result. Experiments show the effectiveness of our model.
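The seed-extraction phase can be sketched with jieba's real keyword extractor plus a simple keyword-coverage rule for picking a key sentence; the coverage rule and the example text are ours, and the generation network itself is not shown:

```python
import jieba
import jieba.analyse

# Keywords via jieba's TF-IDF extractor; key sentence by keyword coverage.
# The paper's exact scoring is not reproduced here.
subject_doc = "网络安全是信息时代的重要课题。诱饵文档可以用于检测攻击行为。"
keywords = jieba.analyse.extract_tags(subject_doc, topK=5)

sentences = [s for s in subject_doc.replace("。", "。\n").split("\n") if s]
def coverage(sentence):
    tokens = set(jieba.cut(sentence))
    return len(tokens & set(keywords))
key_sentence = max(sentences, key=coverage)

print("keyword seeds:", keywords)
print("sentence seed:", key_sentence)
# Both seed types would then be fed to the trained SeqGAN generator.
```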
82d7a7ab3fc4aa0bb545deb2b3ac172b39cfec26
NB-IoT Technology Overview and Experience from Cloud-RAN Implementation
The 3GPP has introduced a new narrowband radio technology called narrowband Internet of Things (NB-IoT) in Release 13. NB-IoT was designed to support very low power consumption and low-cost devices in extreme coverage conditions. NB-IoT operates in very small bandwidth and will provide connectivity to a large number of low-data-rate devices. This article highlights some of the key features introduced in NB-IoT and presents performance results from real-life experiments. The experiments were carried out using an early-standard-compliant prototype based on a software defined radio partial implementation of NB-IoT that runs on a desktop computer connected to the network. It is found that a cloud radio access network is a good candidate for NB-IoT implementation.
b1c6f513e347ed9fbf508bd67f763407fa6d5ec6
RGB-H-CbCr skin colour model for human face detection
While RGB, HSV, and YUV (YCbCr) are standard models used in various colour imaging applications, not all of their information is necessary to classify skin colour. This paper presents a novel skin colour model, RGB-H-CbCr, for the detection of human faces. Skin regions are extracted using a set of bounding rules based on the skin colour distribution obtained from a training set. The segmented face regions are further classified using a parallel combination of simple morphological operations. Experimental results on a large photo data set have demonstrated that the proposed model achieves good detection success rates for near-frontal faces of varying orientation, skin colour, and background environment. The results are also comparable to those of the AdaBoost face classifier.
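In the spirit of the bounding-rule approach, here is an illustrative RGB-only skin mask; the thresholds are commonly cited heuristics rather than the bounds trained in the paper, and the H and CbCr rules are omitted for brevity:

```python
import numpy as np

def skin_mask(img):  # img: H x W x 3 uint8 array in RGB order
    """Boolean mask of skin-like pixels via simple RGB bounding rules."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (img.max(-1).astype(int) - img.min(-1) > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)   # skin-like pixel
print(skin_mask(img))        # True only at (0, 0)
```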
51e95da85a91844ee939147c6f647f749437f42c
Multilabel SVM active learning for image classification
Image classification is an important task in computer vision. However, how to assign suitable labels to images is a subjective matter, especially when some images can be categorized into multiple classes simultaneously. Multilabel image classification addresses the problem where each image can have one or multiple labels. Manually labelling images is time-consuming and expensive, especially for multilabel images. In order to reduce this human effort, we propose a multilabel SVM active learning method with two selection strategies: a Max Loss strategy and a Mean Max Loss strategy. Experimental results on both artificial data and real-world images demonstrate the advantage of the proposed method.
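A hedged sketch of the active-learning loop follows; the selection criterion used here (smallest mean absolute SVM margin across labels) is a stand-in for uncertainty-based selection and does not reproduce the paper's Max Loss or Mean Max Loss strategies:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = (X[:, :3] > 0).astype(int)            # 3 synthetic binary labels
labeled = list(range(20))                 # small seed set
pool = [i for i in range(200) if i not in labeled]

for _ in range(5):
    clf = OneVsRestClassifier(LinearSVC()).fit(X[labeled], Y[labeled])
    # mean absolute margin over labels: small means the multilabel
    # prediction is uncertain, so the sample is informative to label
    margins = np.abs(clf.decision_function(X[pool])).mean(axis=1)
    pick = pool[int(margins.argmin())]
    labeled.append(pick)
    pool.remove(pick)
print("selected", len(labeled), "labeled samples")
```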
9785c1040a2cdb5d905f8721991c3480d73769cf
Unhealthy region of citrus leaf detection using image processing techniques
Producing agricultural products is a difficult task, as plants come under attack from various micro-organisms, pests, and bacterial diseases. The symptoms of these attacks are generally distinguished through inspection of the leaves, stems, or fruit. This paper discusses image processing techniques for early detection of plant diseases through inspection of leaf features. The objective of this work is to implement image analysis and classification techniques for the extraction and classification of leaf diseases. A leaf image is captured and then processed to determine the status of the plant. The proposed framework has four parts: image preprocessing, including conversion from RGB to other colour spaces and image enhancement; segmentation of the region of interest using K-means clustering, used to determine the defect and severity areas of the leaves; feature extraction, with texture features from a statistical GLCM and colour features from channel means; and, finally, classification using an SVM. This technique helps ensure that chemicals are applied only when plant leaves are detected to be affected by disease.
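An end-to-end toy version of the described pipeline is sketched below on synthetic data: K-means colour segmentation, simple colour/contrast statistics in place of full GLCM features, then an SVM; the image shapes and colour assumptions are ours:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(img):  # img: H x W x 3 float RGB in [0, 1]
    pixels = img.reshape(-1, 3)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    # treat the less-green cluster as the candidate lesion region
    lesion = pixels[km.labels_ == np.argmin(km.cluster_centers_[:, 1])]
    # mean colour of the candidate lesion plus a global contrast statistic
    return np.concatenate([lesion.mean(0), [pixels.std()]])

healthy = [np.clip(rng.normal([0.2, 0.7, 0.2], 0.05, (16, 16, 3)), 0, 1)
           for _ in range(10)]
diseased = [np.clip(rng.normal([0.5, 0.4, 0.1], 0.15, (16, 16, 3)), 0, 1)
            for _ in range(10)]
X = np.array([features(im) for im in healthy + diseased])
y = np.array([0] * 10 + [1] * 10)
print("training accuracy:", SVC().fit(X, y).score(X, y))
```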
bcf6433c1a37328c554063447262c574bf3d0f27
New step-up DC-DC converters for PV power generation systems
This paper proposes new step-up DC-DC converters with high-ratio capability. The proposed converters are derived from combinations of boost converters and buck-boost converters. The high ratio is achieved by a parallel-input, series-output combination, so the efficiency is better than that achieved with a conventional boost converter. A method to reduce the input and output ripple is also proposed. Simulated and experimental results are included to show the validity of the proposed converters.
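For reference, the ideal continuous-conduction-mode gains of the building blocks, and of a parallel-input/series-output combination of one boost and one buck-boost stage at a common duty cycle D, are as follows; this is a generic textbook result, not necessarily the exact topology proposed here:

```latex
% Ideal (lossless, CCM) voltage gains; D is the shared duty cycle.
\frac{V_o^{\text{boost}}}{V_{in}}=\frac{1}{1-D},\qquad
\frac{V_o^{\text{buck-boost}}}{V_{in}}=\frac{D}{1-D},\qquad
\frac{V_o^{\text{series}}}{V_{in}}
  =\frac{1}{1-D}+\frac{D}{1-D}=\frac{1+D}{1-D}
```

The series-output sum shows where the higher conversion ratio comes from relative to a single boost stage at the same duty cycle.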