Columns: _id (string, length 40), title (string, 8–300 characters), text (string, 0–10k characters)
4d40a715a51bcca554915ecc5d88005fd56dc1e5
The future of seawater desalination: energy, technology, and the environment.
In recent years, numerous large-scale seawater desalination plants have been built in water-stressed countries to augment available water resources, and construction of new desalination plants is expected to increase in the near future. Despite major advancements in desalination technologies, seawater desalination is still more energy intensive than conventional technologies for the treatment of fresh water. There are also concerns about the potential environmental impacts of large-scale seawater desalination plants. Here, we review the possible reductions in energy demand by state-of-the-art seawater desalination technologies, the potential role of advanced materials and innovative technologies in improving performance, and the sustainability of desalination as a technological solution to global water shortages.
6180482e02eb79eca6fd2e9b1ee9111d749d5ca2
A bidirectional soft pneumatic fabric-based actuator for grasping applications
This paper presents the development of a bidirectional fabric-based soft pneumatic actuator requiring low fluid pressurization for actuation, which is incorporated into a soft robotic gripper to demonstrate its utility. The bidirectional soft fabric-based actuator is able to provide both flexion and extension. Fabrication of the fabric actuators is simple compared to the steps involved in the traditional silicone-based approach. In addition, the fabric actuators are able to generate comparably larger vertical grip resistive force at lower operating pressure than elastomeric actuators and 3D-printed actuators, producing a resistive grip force of up to 20 N at 120 kPa. Five of the bidirectional soft fabric-based actuators are deployed within a five-fingered soft robotic gripper, complete with five casings and a base. It is capable of grasping a variety of objects with a maximum width or diameter close to its bending curvature. A cutting task involving bimanual manipulation was demonstrated successfully with the gripper. To incorporate intelligent control for such a task, a soft force sensor made completely of compliant material was attached to the gripper, which allows determination of whether the cutting task is completed. To the authors' knowledge, this work is the first study that incorporates two soft robotic grippers for bimanual manipulation, with one of the grippers sensorized to provide closed-loop control.
3f0924241a7deba2b40b0c1ea57a2e3d10c57ae0
Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition [Book review]
This second edition of Dr. Groves's book (the original was published in 2008) could arguably be considered a new work. At just under 1,000 pages (including the 11 appendices on the DVD), the second edition is 80% longer than the original. Frankly, the word "book" hardly seems adequate, considering the wide range of topics covered. "Mini-encyclopedia" seems more appropriate. The hardcover portion of the book comprises 18 chapters, and the DVD includes the aforementioned appendices plus 20 fully worked examples, 125 problems or exercises (with answers), and MATLAB routines for the simulation of many of the algorithms discussed in the main text. Here is a brief overview of the contents: ▸ Chapters 1–3: an overview of the diversity of positioning techniques and navigation systems; fundamentals of coordinate frames, kinematics and earth models; introduction to Kalman filtering ▸ Chapters 4–6: inertial sensors, inertial navigation, and lower-cost dead reckoning systems ▸ Chapters 7–12: principles of radio positioning, short-, medium-, and long-range radio navigation, as well as extensive coverage of global navigation satellite systems (GNSS) ▸ Chapter 13: environmental feature matching ▸ Chapters 14–16: various integration topics, including inertial navigation system (INS)/GNSS integration, alignment, zero-velocity updates, and multisensor integration ▸ Chapter 17: fault detection ▸ Chapter 18: applications and trends. In summary, this book is an excellent reference (with numerous nuggets of wisdom) that should be readily at hand on the shelf of every practicing navigation engineer. In the hands of an experienced instructor, the book will also serve students as a great textbook. However, the lack of examples integrated in the main text makes it difficult for the book to serve as a self-study guide for those who are new to the field.
b0e7d36c94935fadf3c514903e4340eaa415e4ee
True self-configuration for the IoT
For the Internet of Things to finally become a reality, obstacles on different levels need to be overcome. This is especially true for the upcoming challenge of leaving the domain of technical experts and scientists. Devices need to connect to the Internet and be able to offer services. They have to announce and describe these services in machine-understandable ways so that user-facing systems are able to find and utilize them. They have to learn about their physical surroundings, so that they can serve sensing or acting purposes without explicit configuration or programming. Finally, it must be possible to include IoT devices in complex systems that combine local and remote data, from different sources, in novel and surprising ways. We show how all of that is possible today. Our solution uses open standards and state-of-the-art protocols to achieve this. It is based on 6LoWPAN and CoAP for the communications part, semantic web technologies for meaningful data exchange, autonomous sensor correlation to learn about the environment, and software built around the Linked Data principles to be open for novel and unforeseen applications.
a8e656fe16825c47a41df9b28e0c97d4bc8fa58f
From turtles to Tangible Programming Bricks: explorations in physical language design
This article provides a historical overview of educational computing research at MIT from the mid-1960s to the present day, focusing on physical interfaces. It discusses some of the results of this research: electronic toys that help children develop advanced modes of thinking through free-form play. In this historical context, the article then describes and discusses the author’s own research into tangible programming, culminating in the development of the Tangible Programming Bricks system—a platform for creating microworlds for children to explore computation and scientific thinking.
f83a207712fd4cf41aded79e9e6c4345ba879128
Ray: A Distributed Framework for Emerging AI Applications
The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray—a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system’s control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications.
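To make the task/actor duality concrete, here is a minimal sketch using Ray's public Python API (ray.remote applied to a function for task-parallel work and to a class for actor-based state, with ray.get fetching results). It only illustrates the programming model; the paper's benchmark workloads are far more involved.

```python
# Minimal illustration of Ray's unified interface: a stateless remote task and a
# stateful actor, both created with @ray.remote and consumed through ray.get.
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    # stateless task, placed by Ray's distributed scheduler
    return x * x

@ray.remote
class Counter:
    # stateful actor; its method calls run serially on one worker process
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

futures = [square.remote(i) for i in range(4)]   # task-parallel computation
counter = Counter.remote()                       # actor-based computation
print(ray.get(futures))                          # [0, 1, 4, 9]
print(ray.get(counter.increment.remote()))       # 1
```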
aa2213a9f39736f80ccc54b9096e414682afa082
Wave-front Transformation with Gradient Metasurfaces
Relying on abrupt phase discontinuities, metasurfaces characterized by a transversely inhomogeneous surface impedance profile have been recently explored as an ultrathin platform to generate arbitrary wave fronts over subwavelength thicknesses. Here, we outline fundamental limitations of passive gradient metasurfaces in molding the impinging wave and show that local phase compensation is essentially insufficient to realize arbitrary wave manipulation, but full-wave designs should be considered. These findings represent a critical step towards realistic and highly efficient conformal wave manipulation beyond the scope of ray optics, enabling unprecedented nanoscale light molding.
8641be8daff5b24e98a0d68138a61456853aef82
Adaptation impact and environment models for architecture-based self-adaptive systems
Self-adaptive systems have the ability to adapt their behavior to dynamic operating conditions. In reaction to changes in the environment, these systems determine the appropriate corrective actions based in part on information about which action will have the best impact on the system. Existing models used to describe the impact of adaptations are either unable to capture the underlying uncertainty and variability of such dynamic environments, or are not compositional and described at a level of abstraction too low to scale in terms of specification effort required for non-trivial systems. In this paper, we address these shortcomings by describing an approach to the specification of impact models based on architectural system descriptions, which at the same time allows us to represent both variability and uncertainty in the outcome of adaptations, hence improving the selection of the best corrective action. The core of our approach is a language equipped with a formal semantics defined in terms of Discrete Time Markov Chains that enables us to describe both the impact of adaptation tactics, as well as the assumptions about the environment. To validate our approach, we show how employing our language can improve the accuracy of predictions used for decision-making in the Rainbow framework for architecture-based self-adaptation.
a65e815895bed510c0549957ce6baa129c909813
Induction of Root and Pattern Lexicon for Unsupervised Morphological Analysis of Arabic
We propose an unsupervised approach to learning non-concatenative morphology, which we apply to induce a lexicon of Arabic roots and pattern templates. The approach is based on the idea that roots and patterns may be revealed through mutually recursive scoring based on hypothesized pattern and root frequencies. After a further iterative refinement stage, morphological analysis with the induced lexicon achieves a root identification accuracy of over 94%. Our approach differs from previous work on unsupervised learning of Arabic morphology in that it is applicable to naturally-written, unvowelled text.
5da41b7d7b1963cd1e86d99b4d9b86ad6d7a227a
An Unequal Wilkinson Power Divider for a Frequency and Its First Harmonic
This letter presents a Wilkinson power divider operating at a frequency and its first harmonic with an unequal power division ratio. To obtain the unequal property, four groups of 1/6-wavelength transmission lines with different characteristic impedances are needed to match all ports. Closed-form design equations are derived theoretically based on transmission line theory. Experimental results indicate that all the features of this novel power divider can be fulfilled at f0 and 2f0 simultaneously.
6cd700af0b7953345d831c129a5a4e0d927bfa19
Adaptive Haptic Feedback Steering Wheel for Driving Simulators
Controlling a virtual vehicle is a sensory-motor activity with a specific rendering methodology that depends on the hardware technology and the software in use. We propose a method that computes haptic feedback for the steering wheel. It is best suited for low-cost, fixed-base driving simulators but can be ported to any driving simulator platform. The goal of our method is twofold. 1) It provides an efficient yet simple algorithm to model the steering mechanism using a quadri-polar representation. 2) This model is used to compute the haptic feedback on top of which a tunable haptic augmentation is adjusted to overcome the lack of presence and the unavoidable simulation loop latencies. This algorithm helps the driver to laterally control the virtual vehicle. We also discuss the experimental results that demonstrate the usefulness of our haptic feedback method.
3f4e71d715fce70c89e4503d747aad11fcac8a43
Competing Values in the Era of Digitalization
This case study examines three different digital innovation projects within Auto Inc -- a large European automaker. By using the competing values framework as a theoretical lens we explore how dynamic capabilities occur in a firm trying to meet increasing demands in originating and innovating from digitalization. In this digitalization process, our study indicates that established socio-technical congruences are being challenged. More so, we pinpoint the need for organizations to find ways to embrace new experimental learning processes in the era of digitalization. While such a change requires long-term commitment and vision, this study presents three informal enablers for such experimental processes: timing, persistence, and contacts.
215b4c25ad34557644b1a177bd5aeac8b2e66bc6
Why Your Encrypted Database Is Not Secure
Encrypted databases, a popular approach to protecting data from compromised database management systems (DBMS's), use abstract threat models that capture neither realistic databases, nor realistic attack scenarios. In particular, the "snapshot attacker" model used to support the security claims for many encrypted databases does not reflect the information about past queries available in any snapshot attack on an actual DBMS. We demonstrate how this gap between theory and reality causes encrypted databases to fail to achieve their "provable security" guarantees.
84cf1178a7526355f323ce0442458de3b3744358
A high performance parallel algorithm for 1-D FFT
In this paper we propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. We use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. We show that the multidimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. We implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
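The kernel described above (forward FFT, pointwise multiplication by a coefficient array, inverse FFT) can be stated in a few lines of single-node NumPy; this reference sketch shows the computation only and says nothing about the paper's multidimensional, distributed-memory SP1 implementation.

```python
# Single-node reference for the FFT kernel: forward FFT of the input sequence,
# pointwise multiplication by a coefficient array, then an inverse FFT.
import numpy as np

def fft_kernel(x, coeff):
    X = np.fft.fft(x)        # forward FFT of the input sequence
    Y = X * coeff            # multiply the transformed data by the coefficient array
    return np.fft.ifft(Y)    # inverse FFT of the resultant data

n = 1 << 10
x = np.random.rand(n) + 1j * np.random.rand(n)
coeff = np.random.rand(n) + 1j * np.random.rand(n)
y = fft_kernel(x, coeff)     # the distributed version decomposes this multidimensionally
```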
1f7594d3be7f5c32e117bc669ed898dd0af88aa3
Dual-Band Textile MIMO Antenna Based on Substrate-Integrated Waveguide (SIW) Technology
A dual-band textile antenna for multiple-input-multiple-output (MIMO) applications, based on substrate-integrated waveguide (SIW) technology, is designed. The fundamental SIW cavity mode is designed to resonate at 2.4 GHz. Meanwhile, the second and third modes are modified and combined by careful placement of a via within the cavity to enable wideband coverage in the 5-GHz WLAN band. The simple antenna topology can be fabricated fully using textiles in a planar form, ensuring reliability and comfort. Numerical and experimental results indicate satisfactory antenna performance when worn on the body in terms of impedance bandwidth, radiation efficiency, and specific absorption rate (SAR). In order to validate its potential for MIMO applications, two elements of the proposed SIW antenna are arranged in six configurations to study the performance in terms of mutual coupling and envelope correlation. It is observed that the placement of the shorted edges of the two elements adjacent to each other produces the lowest mutual coupling and consequently the best envelope correlation.
a2204b1ae6109db076a2b3c8d0db8cf390008812
Low self-esteem during adolescence predicts poor health, criminal behavior, and limited economic prospects during adulthood.
Using prospective data from the Dunedin Multidisciplinary Health and Development Study birth cohort, the authors found that adolescents with low self-esteem had poorer mental and physical health, worse economic prospects, and higher levels of criminal behavior during adulthood, compared with adolescents with high self-esteem. The long-term consequences of self-esteem could not be explained by adolescent depression, gender, or socioeconomic status. Moreover, the findings held when the outcome variables were assessed using objective measures and informant reports; therefore, the findings cannot be explained by shared method variance in self-report data. The findings suggest that low self-esteem during adolescence predicts negative real-world consequences during adulthood.
02bb762c3bd1b3d1ad788340d8e9cdc3d85f33e1
Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web
We describe a family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/IP, and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes minimally as the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems.
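A minimal sketch of a consistent hash ring in Python, illustrating the property described above (keys move only when nearby nodes change); the hash function, virtual-node count, and method names are illustrative choices, not the paper's construction.

```python
# Minimal consistent-hash ring: each node is mapped (via several virtual points)
# onto a circle of hash values; a key is assigned to the first node clockwise
# from its own hash, so adding or removing a node only remaps nearby keys.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas      # virtual points per node, smooths the load
        self._ring = []               # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get("/some/url"))   # the cache responsible for this key
```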
155ca30ef360d66af571eee47c7f60f300e154db
In Search of an Understandable Consensus Algorithm
Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems. In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos. Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety.
2a0d27ae5c82d81b4553ea44e81eb986be5fd126
Paxos Made Simple
The Paxos algorithm, when presented in plain English, is very simple.
3593269a4bf87a7d0f7aba639a50bc74cb288fb1
Space/Time Trade-offs in Hash Coding with Allowable Errors
In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
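The hash-coding scheme analyzed here is what is now known as a Bloom filter. The following Python sketch (sizes, hash construction, and method names are illustrative choices, not the paper's) shows the essential trade-off: a small bit array and k hash positions per message give fast rejection of non-members at the price of occasional false positives.

```python
# Minimal sketch of the hash-coding scheme with allowable errors: k hash functions
# set k bits per stored message; a lookup reports "member" only if all k bits are
# set, so false positives are possible but false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("message-42")
print("message-42" in bf)   # True
print("message-99" in bf)   # False with high probability (small chance of a false positive)
```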
691564e0f19d5f62597adc0720d0e51ddbce9b89
Web Caching with Consistent Hashing
A key performance measure for the World Wide Web is the speed with which content is served to users. As traffic on the Web increases, users are faced with increasing delays and failures in data delivery. Web caching is one of the key strategies that has been explored to improve performance. An important issue in many caching systems is how to decide what is cached where at any given time. Solutions have included multicast queries and directory schemes. In this paper, we offer a new Web caching strategy based on consistent hashing. Consistent hashing provides an alternative to multicast and directory schemes, and has several other advantages in load balancing and fault tolerance. Its performance was analyzed theoretically in previous work; in this paper we describe the implementation of a consistent-hashing-based system and experiments that support our thesis that it can provide performance improvements.
215ac9b23a9a89ad7c8f22b5f9a9ad737204d820
An Empirical Investigation into Programming Language Syntax
Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming language syntax as part of a larger analysis into the, so called, programming language wars. We first present two surveys conducted with students on the intuitiveness of syntax, which we used to garner formative clues on what words and symbols might be easy for novices to understand. We followed up with two studies on the accuracy rates of novices using a total of six programming languages: Ruby, Java, Perl, Python, Randomo, and Quorum. Randomo was designed by randomly choosing some keywords from the ASCII table (a metaphorical placebo). To our surprise, we found that languages using a more traditional C-style syntax (both Perl and Java) did not afford accuracy rates significantly higher than a language with randomly generated keywords, but that languages which deviate (Quorum, Python, and Ruby) did. These results, including the specifics of syntax that are particularly problematic for novices, may help teachers of introductory programming courses in choosing appropriate first languages and in helping students to overcome the challenges they face with syntax.
e4edc414773e709e8eb3eddd77b519637f26f9a5
Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train
For the past 5 years, the ILSVRC competition and the ImageNet dataset have attracted a lot of interest from the Computer Vision community, allowing for state-of-the-art accuracy to grow tremendously. This should be credited to the use of deep artificial neural network designs. As these became more complex, the storage, bandwidth, and compute requirements increased. This means that with a non-distributed approach, even when using the most high-density server available, the training process may take weeks, making it prohibitive. Furthermore, as datasets grow, the representation learning potential of deep networks grows as well by using more complex models. This synchronicity triggers a sharp increase in the computational requirements and motivates us to explore the scaling behaviour on petaflop scale supercomputers. In this paper we will describe the challenges and novel solutions needed in order to train ResNet-50 in this large scale environment. We demonstrate above 90% scaling efficiency and a training time of 28 minutes using up to 104K x86 cores. This is supported by software tools from Intel's ecosystem. Moreover, we show that with regular 90-120 epoch training runs we can achieve a top-1 accuracy as high as 77% for the unmodified ResNet-50 topology. We also introduce the novel Collapsed Ensemble (CE) technique that allows us to obtain a 77.5% top-1 accuracy, similar to that of a ResNet-152, while training an unmodified ResNet-50 topology for the same fixed training budget. All ResNet-50 models as well as the scripts needed to replicate them will be posted shortly. Keywords—deep learning, scaling, convergence, large minibatch, ensembles.
154d62d97d43243d73352b969b2335caaa6c2b37
Ensemble learning for free with evolutionary algorithms?
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning of the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-EEL) or incrementally along evolution (On-EEL). Experiments on a set of benchmark problems show that Off-EEL outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting and generates smaller classifier ensembles.
3146fabd5631a7d1387327918b184103d06c2211
Person-Independent 3D Gaze Estimation Using Face Frontalization
Person-independent and pose-invariant estimation of eye-gaze is important for situation analysis and for automated video annotation. We propose a fast cascade regression based method that first estimates the location of a dense set of markers and their visibility, then reconstructs face shape by fitting a part-based 3D model. Next, the reconstructed 3D shape is used to estimate a canonical view of the eyes for 3D gaze estimation. The model operates in a feature space that naturally encodes local ordinal properties of pixel intensities leading to photometric invariant estimation of gaze. To evaluate the algorithm in comparison with alternative approaches, three publicly-available databases were used, Boston University Head Tracking, Multi-View Gaze and CAVE Gaze datasets. Precision for head pose and gaze averaged 4 degrees or less for pitch, yaw, and roll. The algorithm outperformed alternative methods in both datasets.
39773ed3c249a731224b77783a1c1e5f353d5429
End-to-End Radio Traffic Sequence Recognition with Deep Recurrent Neural Networks
We investigate sequence machine learning techniques on raw radio signal time-series data. By applying deep recurrent neural networks we learn to discriminate between several application layer traffic types on top of a constant envelope modulation without using an expert demodulation algorithm. We show that complex protocol sequences can be learned and used for both classification and generation tasks using this approach. Keywords—Machine Learning, Software Radio, Protocol Recognition, Recurrent Neural Networks, LSTM, Protocol Learning, Traffic Classification, Cognitive Radio, Deep Learning
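A minimal PyTorch sketch of the kind of recurrent classifier the abstract describes, operating on windows of raw I/Q samples; layer sizes, the number of traffic classes, and the two-channel input format are illustrative assumptions, not the authors' architecture.

```python
# Minimal LSTM classifier over raw radio time series (illustrative sizes, not the
# authors' exact architecture): input is a window of I/Q samples, output is a
# distribution over application-layer traffic types.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_classes=5, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, 2) raw I/Q samples
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final hidden state

model = TrafficLSTM()
window = torch.randn(8, 1024, 2)      # batch of 8 windows, 1024 complex samples each
logits = model(window)                # (8, n_classes)
```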
fb7f39d7d24b30df7b177bca2732ff8c3ade0bc0
Homography estimation using one ellipse correspondence and minimal additional information
In sport scenarios like football or basketball, we often deal with central views where only the central circle and some additional primitives, like the central line and the central point or a touch line, are visible. In this paper we first characterize, from a mathematical point of view, the set of homographies that project a given ellipse onto the unit circle; next, using minimal additional information, such as the position in the image of the central line and central point or of a touch line, we show a method to fully determine the plane homography. We present some experiments in sport scenarios to show the ability of the proposed method to properly recover the plane homography.
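The constraint used in the first step can be written compactly with conic matrices; the display below uses standard projective-geometry notation (not necessarily the paper's parameterization), and the counting of remaining degrees of freedom is an editorial gloss explaining why the extra primitives are needed.

```latex
% A conic with matrix C maps under a homography H as C' \propto H^{-\mathsf{T}} C H^{-1}.
% Requiring the image ellipse C_e to be sent to the unit circle gives
\[
  H^{-\mathsf{T}} C_e \, H^{-1} \;\propto\;
  \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix},
\]
% i.e. 5 independent constraints on the 8 degrees of freedom of H, leaving a
% 3-parameter family of homographies; the additional primitives (central line and
% point, or a touch line) are what pin down the remaining parameters.
```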
591b52d24eb95f5ec3622b814bc91ac872acda9e
Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.
Multilayer connectionist models of memory based on the encoder model using the backpropagation learning rule are evaluated. The models are applied to standard recognition memory procedures in which items are studied sequentially and then tested for retention. Sequential learning in these models leads to 2 major problems. First, well-learned information is forgotten rapidly as new information is learned. Second, discrimination between studied items and new items either decreases or is nonmonotonic as a function of learning. To address these problems, manipulations of the network within the multilayer model and several variants of the multilayer model were examined, including a model with prelearned memory and a context model, but none solved the problems. The problems discussed provide limitations on connectionist models applied to human memory and in tasks where information to be learned is not all available during learning.
639f02d25eab3794e35b757ef64c6815a8929f84
A self-boost charge pump topology for a gate drive high-side power supply
A self-boost charge pump topology is presented for a floating high-side gate drive power supply that features high voltage and current capabilities for use in integrated power electronic modules (IPEMs). The transformerless topology uses a small capacitor to transfer energy to the high-side switch from a single power supply referred to the negative rail. Unlike conventional bootstrap power supplies, no switching of the main phase-leg switches is required to provide power continuously to the high-side gate drive, even if the high-side switch is permanently on. Additional advantages include low parts-count and simple control requirements. A piecewise linear model of the self-boost charge pump is derived and the circuit's operating characteristics are analyzed. Simulation and experimental results are provided to verify the desired operation of the new charge pump circuit. Guidelines are provided to assist with circuit component selection in new applications.
a4bf5c295f0bf4f7f8d5c1e702b62018cca9bc58
The long-term sequelae of child and adolescent abuse: a longitudinal community study.
The purpose of the present study was to examine the relationship between childhood and adolescent physical and sexual abuse before the age of 18 and psychosocial functioning in mid-adolescence (age 15) and early adulthood (age 21) in a representative community sample of young adults. Subjects were 375 participants in an ongoing 17-year longitudinal study. At age 21, nearly 11% reported physical or sexual abuse before age 18. Psychiatric disorders based on DSM-III-R criteria were assessed utilizing the NIMH Diagnostic Interview Schedule, Revised Version (DIS-III-R). Approximately 80% of the abused young adults met DSM-III-R criteria for at least one psychiatric disorder at age 21. Compared to their nonabused counterparts, abused subjects demonstrated significant impairments in functioning both at ages 15 and at 21, including more depressive symptomatology, anxiety, psychiatric disorders, emotional-behavioral problems, suicidal ideation, and suicide attempts. While abused individuals were functioning significantly more poorly overall at ages 15 and 21 than their nonabused peers, gender differences and distinct patterns of impaired functioning emerged. These deficits underscore the need for early intervention and prevention strategies to forestall or minimize the serious consequences of child abuse.
16e39000918a58e0755dc42abed368b2215c2aed
A radio resource management framework for TVWS exploitation under an auction-based approach
This paper elaborates on the design, implementation and performance evaluation of a prototype Radio Resource Management (RRM) framework for TV white spaces (TVWS) exploitation, under an auction-based approach. The proposed RRM framework is applied in a centralised Cognitive Radio (CR) network architecture, where exploitation of the available TVWS by Secondary Systems is orchestrated via a Spectrum Broker. Efficient RRM framework performance, in terms of maximizing resource utilization and the Spectrum Broker's benefit, is achieved by proposing and evaluating an auction-based algorithm. This algorithm considers both the frequency and time domains during the TVWS allocation process, which is defined as an optimization problem whose goal is to maximize the Spectrum Broker's payoff. Experimental tests carried out in a controlled-conditions environment verified the validity of the proposed framework, besides identifying fields for further research.
d6619b3c0523f0a12168fbce750edeee7b6b8a53
High power and high efficiency GaN-HEMT for microwave communication applications
Microwaves have been widely used in modern communication systems, which have advantages in high bit rate transmission and the ease of compact circuit and antenna design. Gallium Nitride (GaN), featuring high breakdown and high saturation velocity, is one of the promising materials for high power and high frequency devices, and a kW-class output power has already been achieved [1]. We have developed high power and high efficiency GaN HEMTs [2–5], targeting amplifiers for the base transceiver station (BTS). This presentation summarizes our recent work, focusing on developments for efficiency boosting and robustness in high power RF operation.
d2f210e3f34d65e3ae66b60e98d9c3a740b3c52a
Coloring-based coalescing for graph coloring register allocation
Graph coloring register allocation tries to minimize the total cost of spilled live ranges of variables. Live-range splitting and coalescing are often performed before the coloring to further reduce the total cost. Coalescing of split live ranges, called sub-ranges, can decrease the total cost by lowering the interference degrees of their common interference neighbors. However, it can also increase the total cost because the coalesced sub-ranges can become uncolorable. In this paper, we propose coloring-based coalescing, which first performs trial coloring and next coalesces all copy-related sub-ranges that were assigned the same color. The coalesced graph is then colored again with the graph coloring register allocation. The rationale is that coalescing of differently colored sub-ranges could result in spilling because there are some interference neighbors that prevent them from being assigned the same color. Experiments on Java programs show that the combination of live-range splitting and coloring-based coalescing reduces the static spill cost by more than 6% on average, compared to the baseline coloring without splitting. In contrast, well-known iterated and optimistic coalescing algorithms, when combined with splitting, increase the cost by more than 20%. Coloring-based coalescing improves the execution time by up to 15% and 3% on average, while the existing algorithms improve by up to 12% and 1% on average.
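A self-contained toy sketch of the loop described above, with a greedy coloring standing in for a real register allocator; the node and edge data are illustrative and none of this reproduces the paper's implementation.

```python
# Toy sketch of coloring-based coalescing: trial coloring, coalescing of copy-related
# sub-ranges that received the same color, then a final coloring of the merged graph.
def greedy_color(nodes, edges, k):
    """Greedy trial coloring; returns {node: color or None-if-spilled}."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    colors = {}
    for n in nodes:
        used = {colors[m] for m in adj[n] if m in colors}
        free = [c for c in range(k) if c not in used]
        colors[n] = free[0] if free else None   # None models a spill
    return colors

def coloring_based_coalescing(nodes, edges, copy_pairs, k):
    trial = greedy_color(nodes, edges, k)        # step 1: trial coloring
    merged = {n: n for n in nodes}
    for a, b in copy_pairs:                      # step 2: coalesce only same-colored
        if trial[a] is not None and trial[a] == trial[b]:
            merged[b] = merged[a]
    new_nodes = sorted(set(merged.values()))
    new_edges = {(merged[a], merged[b]) for a, b in edges if merged[a] != merged[b]}
    return greedy_color(new_nodes, new_edges, k) # step 3: final coloring

nodes = ["x1", "x2", "y"]
edges = [("x1", "y"), ("x2", "y")]
print(coloring_based_coalescing(nodes, edges, copy_pairs=[("x1", "x2")], k=2))
```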
4bbd31803e900aebcdb984523ef3770de3641981
Mathematics Learning through Computational Thinking Activities: A Systematic Literature Review
Computational Thinking represents a terminology that embraces the complex set of reasoning processes that are held for problem stating and solving through a computational tool. The ability to systematize problems and solve them by these means is currently being considered a skill to be developed by all students, together with Language, Mathematics and Sciences. Considering that Computer Science has many of its roots in Mathematics, it is reasonable to ponder if and how Mathematics learning can be influenced by offering activities related to Computational Thinking to students. In this sense, this article presents a Systematic Literature Review on reported evidence of Mathematics learning in activities aimed at developing Computational Thinking skills. Forty-two articles, published from 2006 to 2017, which presented didactic activities together with an experimental design to evaluate learning outcomes, were analyzed. The majority of the identified activities used a software tool or hardware device for their development. In these papers, a wide variety of mathematical topics has been covered, with some emphasis on Planar Geometry and Algebra. Conversion of models and solutions between different semiotic representations is a high-level cognitive skill that is most frequently associated with educational outcomes. This review indicated that more recent articles present a higher level of rigor in methodological procedures to assess learning effects. However, joint analysis of evidence from more than one data source is still not frequently used as a validation procedure.
e9b87d8ba83281d5ea01e9b9fab14c73b0ae75eb
Partially overlapping neural networks for real and imagined hand movements.
Neuroimagery findings have shown similar cerebral networks associated with imagination and execution of a movement. On the other hand, neuropsychological studies of parietal-lesioned patients suggest that these networks may be at least partly distinct. In the present study, normal subjects were asked to either imagine or execute auditory-cued hand movements. Compared with rest, imagination and execution showed overlapping networks, including bilateral premotor and parietal areas, basal ganglia and cerebellum. However, direct comparison between the two experimental conditions showed that specific cortico-subcortical areas were more engaged in mental simulation, including bilateral premotor, prefrontal, supplementary motor and left posterior parietal areas, and the caudate nuclei. These results suggest that a specific neuronal substrate is involved in the processing of hand motor representations.
8a718fccc947750580851f10698de1f41f5991f4
Disconnected aging: Cerebral white matter integrity and age-related differences in cognition
Cognition arises as a result of coordinated processing among distributed brain regions and disruptions to communication within these neural networks can result in cognitive dysfunction. Cortical disconnection may thus contribute to the declines in some aspects of cognitive functioning observed in healthy aging. Diffusion tensor imaging (DTI) is ideally suited for the study of cortical disconnection as it provides indices of structural integrity within interconnected neural networks. The current review summarizes results of previous DTI aging research with the aim of identifying consistent patterns of age-related differences in white matter integrity, and of relationships between measures of white matter integrity and behavioral performance as a function of adult age. We outline a number of future directions that will broaden our current understanding of these brain-behavior relationships in aging. Specifically, future research should aim to (1) investigate multiple models of age-brain-behavior relationships; (2) determine the tract-specificity versus global effect of aging on white matter integrity; (3) assess the relative contribution of normal variation in white matter integrity versus white matter lesions to age-related differences in cognition; (4) improve the definition of specific aspects of cognitive functioning related to age-related differences in white matter integrity using information processing tasks; and (5) combine multiple imaging modalities (e.g., resting-state and task-related functional magnetic resonance imaging; fMRI) with DTI to clarify the role of cerebral white matter integrity in cognitive aging.
d53432934fa78151e7b75c95093c9b0be94b4b9a
Evolving computational intelligence systems
A new paradigm of the evolving computational intelligence systems (ECIS) is introduced in a generic framework of the knowledge and data integration (KDI). This generalization of the recent advances in the development of evolving fuzzy and neuro-fuzzy models, and the more analytical angle of consideration through the prism of knowledge evolution as opposed to the usually used data-centred approach, marks the novelty of the present paper. ECIS constitutes a suitable paradigm for adaptive modeling of continuous dynamic processes and tracing the evolution of knowledge. The elements of evolution, such as inheritance and structure development, are related to the knowledge and data pattern dynamics and are considered in the context of an individual system/model. Another novelty of this paper consists of the comparison, at a conceptual level, between the evolution of models and of the knowledge captured by these models, and the well known paradigm of evolutionary computation. Although ECIS differs from the concept of evolutionary (genetic) computing, both paradigms heavily borrow from the same source – nature and human evolution. As the origin of knowledge, humans are the best model of an evolving intelligent system. Instead of considering the evolution of a population of species or genes, as evolutionary computation algorithms do, ECIS concentrates on the evolution of a single intelligent system. The aim is to develop the intelligence/knowledge of this system through an evolution using inheritance and modification, upgrade and reduction. This approach is also suitable for the integration of new data and existing models into new models that can be incrementally adapted to future incoming data. This powerful new concept has been recently introduced by the authors in a series of parallel works and is still under intensive development. It forms the conceptual basis for the development of truly intelligent systems. Another specific feature of this paper is bringing together the two working examples of ECIS, namely ECOS and EFS. The ideas are supported by illustrative examples (a synthetic non-linear function for the ECOS case and a benchmark problem of house price modelling from the UCI repository for the case of EFS).
7bdec3d91d8b649f892a779da78428986d8c5e3b
CCVis : Visual Analytics of Student Online Learning Behaviors Using Course Clickstream Data
As more and more college classrooms utilize online platforms to facilitate teaching and learning activities, analyzing student online behaviors becomes increasingly important for instructors to effectively monitor and manage student progress and performance. In this paper, we present CCVis, a visual analytics tool for analyzing the course clickstream data and exploring student online learning behaviors. Targeting a large college introductory course with over two thousand student enrollments, our goal is to investigate student behavior patterns and discover the possible relationships between student clickstream behaviors and their course performance. We employ higher-order network and structural identity classification to enable visual analytics of behavior patterns from the massive clickstream data. CCVis includes four coordinated views (the behavior pattern, behavior breakdown, clickstream comparative, and grade distribution views) for user interaction and exploration. We demonstrate the effectiveness of CCVis through case studies along with an ad-hoc expert evaluation. Finally, we discuss the limitation and extension of this work.
104829c56a7f1236a887a6993959dd52aebd86f5
Modeling the global freight transportation system: A multi-level modeling perspective
The interconnectedness of different actors in the global freight transportation industry has rendered it a large complex system where different sub-systems are interrelated. For such a system, policy-related exploratory analyses which have predictive capacity are difficult to perform. Although there are many global simulation models for various large complex systems, there is unfortunately very little research aimed at developing a global freight transportation model. In this paper, we present a multi-level framework to develop an integrated model of the global freight transportation system. We employ a system view to incorporate different relevant sub-systems and categorize them in different levels. The four-step model of freight transport is used as the basic foundation of the proposed framework. In addition, we also present the computational framework, which adheres to the high-level modeling framework, to provide a conceptualization of the discrete-event simulation model that will be developed.
c22366074e3b243f2caaeb2f78a2c8d56072905e
A broadband slotted ridge waveguide antenna array
A longitudinally-slotted ridge waveguide antenna array with a compact transverse dimension is presented. To broaden the bandwidth of the array, it is separated into two subarrays fed by a novel compact convex waveguide divider. A 16-element uniform linear array at X-band was fabricated and measured to verify the validity of the design. The measured bandwidth for S11 ≤ -15 dB is 14.9% and the measured cross-polarization level is less than -36 dB over the entire bandwidth. This array can be combined with the edge-slotted waveguide array to build a two-dimensional dual-polarization antenna array for synthetic aperture radar (SAR) applications.
09c5b100f289a3993d91a66116e35ee95e99acc0
Segmenting cardiac MRI tagging lines using Gabor filter banks
This paper describes a new method for the segmentation and extraction of cardiac MRI tagging lines. Our method is based on the novel use of a 2D Gabor filter bank. By convolving the tagged input image with these filters, the tagging lines are automatically enhanced and extracted. We design the Gabor filter bank based on the image's spatial and frequency characteristics. The output is a combination of each filter's response in the bank. We demonstrate that, compared to bandpass methods such as HARP, this method results in robust and accurate segmentation of the tagging lines.
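A minimal NumPy/SciPy sketch of the idea: build a small bank of 2D Gabor kernels at several orientations tuned to an assumed tag spacing, convolve the tagged image with each kernel, and combine the responses. Kernel parameters and the max-combination rule here are illustrative choices, not the paper's design.

```python
# Tag-line enhancement with a 2D Gabor filter bank: kernels at several orientations,
# tuned to an assumed tag spacing, are convolved with the image and the per-pixel
# maximum response is kept.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=4.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # Gaussian envelope
    return envelope * np.cos(2 * np.pi * frequency * x_t)

def enhance_tags(image, tag_spacing_px=8, n_orientations=8):
    bank = [gabor_kernel(1.0 / tag_spacing_px, theta)
            for theta in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    responses = [fftconvolve(image, k, mode="same") for k in bank]
    return np.max(np.abs(responses), axis=0)             # combine the bank's responses

image = np.random.rand(128, 128)    # stand-in for a tagged cardiac MR slice
enhanced = enhance_tags(image)
```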
41e4eb8fbb335ae70026f4216069f33f8f9bbe53
Stepfather Involvement and Stepfather-Child Relationship Quality: Race and Parental Marital Status as Moderators.
Stepparent-child relationship quality is linked to stepfamily stability and children's well-being. Yet, the literature offers an incomplete understanding of factors that promote high-quality stepparent-child relationships, especially among socio-demographically diverse stepfamilies. In this study, we explore the association between stepfather involvement and stepfather-child relationship quality among a racially diverse and predominately low-income sample of stepfamilies with preadolescent children. Using a subsample of 467 mother-stepfather families from year 9 of the Fragile Families and Child Wellbeing Study, results indicate that stepfather involvement is positively associated with stepfather-child relationship quality. This association is statistically indistinguishable across racial groups, although the association is stronger among children in cohabiting stepfamilies compared to children in married stepfamilies.
45063cf2e0116e700da5ca2863c8bb82ad4d64c2
Conceptual and Database Modelling of Graph Databases
Compared with traditional (e.g., relational) databases, graph databases often lack some important database features. In particular, a graph database schema, including integrity constraints, is not explicitly defined, and conceptual modelling is not used at all. It is hard to check the consistency of a graph database, because almost no integrity constraints are defined. In this paper, we discuss these issues and present current possibilities and challenges in graph database modelling. A conceptual level of graph database design is also considered. We propose a sufficient conceptual model and show its relationship to a graph database model. We also focus on modelling integrity constraints as functional dependencies between entity types, which is reminiscent of the functional dependencies known from relational databases, and extend them to conditional functional dependencies.
6733017c5a01b698cc07b57fa9c9b9207b85cfbc
Accurate reconstruction of image stimuli from human fMRI based on the decoding model with capsule network architecture
In neuroscience, many kinds of computational models have been designed to answer the open question of how sensory stimuli are encoded by neurons and, conversely, how sensory stimuli can be decoded from neuronal activities. In particular, functional Magnetic Resonance Imaging (fMRI) studies have made great achievements with the rapid development of deep network computation. However, compared with the goal of decoding orientation, position and object category from activities in visual cortex, accurate reconstruction of image stimuli from human fMRI is still a challenging task. In this paper, the capsule network (CapsNet) architecture based visual reconstruction (CNAVR) method is developed to reconstruct image stimuli. A capsule contains a group of neurons to perform better organization of feature structure and representation, inspired by the structure of the cortical minicolumn, which includes several hundred neurons in primates. The high-level capsule features in the CapsNet include diverse features of image stimuli such as semantic class, orientation, location and so on. We used these features to bridge between human fMRI and image stimuli. We first employed the CapsNet to train the nonlinear mapping from image stimuli to high-level capsule features, and from high-level capsule features back to image stimuli, in an end-to-end manner. After estimating the serviceability of each voxel by its encoding performance in order to select voxels, we then trained the nonlinear mapping from dimension-reduced fMRI data to high-level capsule features. Finally, we can predict the high-level capsule features from fMRI data and reconstruct image stimuli with the CapsNet. We evaluated the proposed CNAVR method on a dataset of handwritten digit images, and exceeded the accuracy of all existing state-of-the-art methods by about 10% on the structural similarity index (SSIM).
f8be08195b1a7e9e45028eee4844ea2482170a3e
Gut microbiota functions: metabolism of nutrients and other food components
The diverse microbial community that inhabits the human gut has an extensive metabolic repertoire that is distinct from, but complements the activity of mammalian enzymes in the liver and gut mucosa and includes functions essential for host digestion. As such, the gut microbiota is a key factor in shaping the biochemical profile of the diet and, therefore, its impact on host health and disease. The important role that the gut microbiota appears to play in human metabolism and health has stimulated research into the identification of specific microorganisms involved in different processes, and the elucidation of metabolic pathways, particularly those associated with metabolism of dietary components and some host-generated substances. In the first part of the review, we discuss the main gut microorganisms, particularly bacteria, and microbial pathways associated with the metabolism of dietary carbohydrates (to short chain fatty acids and gases), proteins, plant polyphenols, bile acids, and vitamins. The second part of the review focuses on the methodologies, existing and novel, that can be employed to explore gut microbial pathways of metabolism. These include mathematical models, omics techniques, isolated microbes, and enzyme assays.
7ec5f9694bc3d061b376256320eacb8ec3566b77
The CN2 Induction Algorithm
Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where problems of poor description language and/or noise may be present. Implementations of the CN2, ID3, and AQ algorithms are compared on three medical classification tasks.
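A greatly simplified, self-contained sketch of the separate-and-conquer loop that CN2-style rule induction follows: pick the best condition, turn the majority class of the covered examples into a rule, remove those examples, and repeat. Real CN2 beam-searches conjunctions of conditions and applies entropy and significance tests, which are omitted here.

```python
# Simplified separate-and-conquer rule induction in the spirit of CN2: repeatedly
# pick the best single attribute=value test (real CN2 beam-searches conjunctions),
# predict the majority class of the covered examples, remove them, and repeat.
from collections import Counter

def induce_rules(examples):
    """examples: list of (dict_of_attributes, class_label)."""
    rules, remaining = [], list(examples)
    while remaining:
        candidates = {(a, v) for attrs, _ in remaining for a, v in attrs.items()}
        best = None   # (score=(purity, coverage), condition, majority_class)
        for a, v in candidates:
            covered = [c for attrs, c in remaining if attrs.get(a) == v]
            cls, n = Counter(covered).most_common(1)[0]
            score = (n / len(covered), len(covered))
            if best is None or score > best[0]:
                best = (score, (a, v), cls)
        _, (a, v), cls = best
        rules.append(f"IF {a} = {v} THEN {cls}")
        remaining = [(attrs, c) for attrs, c in remaining if attrs.get(a) != v]
    return rules

data = [({"fever": "yes", "cough": "yes"}, "flu"),
        ({"fever": "no",  "cough": "yes"}, "cold"),
        ({"fever": "yes", "cough": "no"},  "flu")]
for rule in induce_rules(data):
    print(rule)
```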
0d57ba12a6d958e178d83be4c84513f7e42b24e5
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ∼90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency.
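The two techniques named above are easy to state concretely. The framework-agnostic sketch below shows a linear scaling rule and a gradual warmup schedule; the 256-image reference batch, 0.1 base rate, and 30/60/80 milestones mirror a common ResNet-50 recipe but should be treated as illustrative defaults.

```python
# Sketch of the linear scaling rule (learning rate proportional to minibatch size)
# and a gradual warmup that ramps the rate up over the first few epochs.
def scaled_lr(batch_size, base_lr=0.1, base_batch=256):
    """Linear scaling rule: lr = base_lr * (batch_size / base_batch)."""
    return base_lr * batch_size / base_batch

def lr_at_epoch(epoch, batch_size, base_lr=0.1, warmup_epochs=5, milestones=(30, 60, 80)):
    target = scaled_lr(batch_size, base_lr)
    if epoch < warmup_epochs:
        # gradual warmup: interpolate from the small-batch rate up to the scaled rate
        return base_lr + (target - base_lr) * (epoch + 1) / warmup_epochs
    # afterwards, a conventional step schedule (divide by 10 at each milestone)
    return target * (0.1 ** sum(epoch >= m for m in milestones))

for e in (0, 4, 5, 30, 80):
    print(e, lr_at_epoch(e, batch_size=8192))
```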
22ba26e56fc3e68f2e6a96c60d27d5f721ea00e9
RMSProp and equilibrated adaptive learning rates for non-convex optimization
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
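For reference, the RMSProp baseline discussed above fits in a few lines of NumPy: each parameter is rescaled by a running estimate of its squared gradient. ESGD replaces that estimate with an equilibration estimate built from Hessian-vector products, which is not reproduced here; the hyperparameters below are the usual illustrative defaults.

```python
# RMSProp update: scale each parameter's step by a running mean of squared gradients.
import numpy as np

def rmsprop_update(theta, grad, state, lr=1e-3, rho=0.9, eps=1e-8):
    state["ms"] = rho * state["ms"] + (1 - rho) * grad**2     # running mean of squared grads
    theta -= lr * grad / (np.sqrt(state["ms"]) + eps)         # per-parameter rescaled step
    return theta

# toy quadratic with badly scaled curvature, the kind of ill-conditioning the paper targets
theta = np.array([5.0, 5.0])
curvature = np.array([100.0, 1.0])
state = {"ms": np.zeros_like(theta)}
for _ in range(200):
    grad = curvature * theta          # gradient of 0.5 * sum(curvature * theta**2)
    theta = rmsprop_update(theta, grad, state, lr=0.05)
print(theta)   # both coordinates shrink toward 0 despite the 100x curvature gap
```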
27da8d31b23f15a8d4feefe0f309dfaad745f8b0
Understanding deep learning requires rethinking generalization
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points, as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
8e0eacf11a22b9705a262e908f17b1704fd21fa7
Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech—two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system [26]. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
bcdce6325b61255c545b100ef51ec7efa4cced68
An overview of gradient descent optimization algorithms
Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
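As a concrete companion to the overview, the sketch below writes out three of the update rules such an article typically covers (momentum SGD, RMSProp and Adam) on a toy quadratic objective; the hyperparameter values are common illustrative defaults, not recommendations drawn from this article.

```python
import numpy as np

def grad(w):                      # gradient of the toy objective 0.5 * ||w||^2
    return w

def momentum_sgd(w, v, lr=0.01, mu=0.9):
    v = mu * v - lr * grad(w)                  # velocity accumulates past gradients
    return w + v, v

def rmsprop(w, s, lr=0.001, rho=0.9, eps=1e-8):
    g = grad(w)
    s = rho * s + (1 - rho) * g**2             # running average of squared gradients
    return w - lr * g / (np.sqrt(s) + eps), s

def adam(w, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    g = grad(w)
    m = b1 * m + (1 - b1) * g                  # first-moment estimate
    v = b2 * v + (1 - b2) * g**2               # second-moment estimate
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)   # bias correction
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage example: momentum SGD converging toward the minimum at the origin.
w, v = np.ones(3), np.zeros(3)
for _ in range(200):
    w, v = momentum_sgd(w, v)
print(w)
```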
907149ace088dad97fe6a6cadfd0c9260bb75795
Expressing emotion through posture and gesture Introduction
Introduction Emotion and its physical expression are an integral part of social interaction, informing others about how we are feeling and affecting social outcomes (Vosk, Forehand, and Figueroa 1983). Studies on the physical expression of emotion can be traced back to the 19th century with Darwin’s seminal book “The Expression of the Emotions in Man and Animals” that reveals the key role of facial expressions and body movement in communicating status and emotion (Darwin 1872).
eaa6537b640e744216c8ec1272f6db5bbc53e0fe
Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras
An important aspect of collision avoidance and driver assistance systems, as well as autonomous vehicles, is the tracking of vehicle taillights and the detection of alert signals (turns and brakes). In this paper, we present the design and implementation of a robust and computationally lightweight algorithm for a real-time vision system, capable of detecting and tracking vehicle taillights, recognizing common alert signals using a vehicle-mounted embedded smart camera, and counting the cars passing on both sides of the vehicle. The system is low-power and processes scenes entirely on the microprocessor of an embedded smart camera. In contrast to most existing work that addresses either daytime or nighttime detection, the presented system provides the ability to track vehicle taillights and detect alert signals regardless of lighting conditions. The mobile vision system has been tested in actual traffic scenes and the results obtained demonstrate the performance and the lightweight nature of the algorithm.
dd18d4a30cb1f516b62950db44f73589f8083c3e
Role of the Immune system in chronic pain
During the past two decades, an important focus of pain research has been the study of chronic pain mechanisms, particularly the processes that lead to the abnormal sensitivity — spontaneous pain and hyperalgesia — that is associated with these states. For some time it has been recognized that inflammatory mediators released from immune cells can contribute to these persistent pain states. However, it has only recently become clear that immune cell products might have a crucial role not just in inflammatory pain, but also in neuropathic pain caused by damage to peripheral nerves or to the CNS.
5592c7e0225c956419a9a315718a87190b33f4c2
An Energy-Efficient Architecture for Binary Weight Convolutional Neural Networks
Binary weight convolutional neural networks (BCNNs) can achieve near state-of-the-art classification accuracy and have far less computation complexity compared with traditional CNNs using high-precision weights. Due to their binary weights, BCNNs are well suited for vision-based Internet-of-Things systems that are sensitive to power consumption. BCNNs make it possible to achieve very high throughput with moderate power dissipation. In this paper, an energy-efficient architecture for BCNNs is proposed. It fully exploits the binary weights and other hardware-friendly characteristics of BCNNs. A judicious processing schedule is proposed so that off-chip I/O access is minimized and activations are maximally reused. To significantly reduce the critical path delay, we introduce optimized compressor trees and approximate binary multipliers with two novel compensation schemes. The latter saves significant hardware resources while compromising almost no computation accuracy. Taking advantage of the error resiliency of BCNNs, an innovative approximate adder is developed, which significantly reduces the silicon area and data path delay. Thorough error analysis and extensive experimental results on several data sets show that the approximate adders in the data path cause negligible accuracy loss. Moreover, algorithmic transformations for certain layers of BCNNs and a memory-efficient quantization scheme are incorporated to further reduce the energy cost and on-chip storage requirement. Finally, the proposed BCNN hardware architecture is implemented with the SMIC 130-nm technology. The post-layout results demonstrate that our design can achieve an energy efficiency over 2.0 TOp/s/W when scaled to 65 nm, which is more than two times better than the prior art.
55ea7bb4e75608115b50b78f2fea6443d36d60cc
Application of ordinal logistic regression analysis in determining risk factors of child malnutrition in Bangladesh
BACKGROUND The study attempts to develop an ordinal logistic regression (OLR) model to identify the determinants of child malnutrition, instead of developing a traditional binary logistic regression (BLR) model, using the data of the Bangladesh Demographic and Health Survey 2004. METHODS Based on the weight-for-age anthropometric index (Z-score), child nutrition status is categorized into three groups: severely undernourished (< -3.0), moderately undernourished (-3.0 to -2.01) and nourished (≥ -2.0). Since nutrition status is ordinal, an OLR model, the proportional odds model (POM), can be developed instead of two separate BLR models to find predictors of both malnutrition and severe malnutrition if the proportional odds assumption is satisfied. The assumption is satisfied, although with a low p-value (0.144) owing to violation of the assumption for one covariate. Therefore, a partial proportional odds model (PPOM) and two BLR models have also been developed to check the applicability of the OLR model. A graphical test has also been adopted for checking the proportional odds assumption. RESULTS All the models determine that age of child, birth interval, mothers' education, maternal nutrition, household wealth status, child feeding index, and incidence of fever, ARI & diarrhoea were the significant predictors of child malnutrition; however, results of the PPOM were more precise than those of the other models. CONCLUSION These findings clearly justify that OLR models (POM and PPOM) are appropriate to find predictors of malnutrition instead of BLR models.
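For readers who want to reproduce the modelling strategy on their own data, a proportional odds model can be fitted with standard libraries. The sketch below uses statsmodels' OrderedModel on synthetic data; the column names, the three-level outcome coding and the data-generating process are assumptions for illustration, not the survey's actual variables or the paper's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "child_age_months": rng.integers(0, 60, n),
    "mother_education_years": rng.integers(0, 12, n),
})
# Ordered outcome: 0 = severely undernourished, 1 = moderately undernourished, 2 = nourished.
latent = 0.02 * X["child_age_months"] + 0.1 * X["mother_education_years"] + rng.normal(0, 1, n)
y = pd.cut(latent, bins=[-np.inf, 0.5, 1.5, np.inf], labels=[0, 1, 2]).astype(int)

# Proportional odds (cumulative logit) model: one set of slopes, ordered thresholds.
pom = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(pom.summary())
```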
32f6c0b6f801da365ed39f50a4966cf241bb905e
Why Sleep Matters-The Economic Costs of Insufficient Sleep: A Cross-Country Comparative Analysis.
The Centers for Disease Control and Prevention (CDC) in the United States has declared insufficient sleep a "public health problem." Indeed, according to a recent CDC study, more than a third of American adults are not getting enough sleep on a regular basis. However, insufficient sleep is not exclusively a US problem; it equally concerns other industrialised countries such as the United Kingdom, Japan, Germany, or Canada. According to some evidence, the proportion of people sleeping less than the recommended hours of sleep is rising and associated with lifestyle factors related to a modern 24/7 society, such as psychosocial stress, alcohol consumption, smoking, lack of physical activity and excessive electronic media use, among others. This is alarming, as insufficient sleep has been found to be associated with a range of negative health and social outcomes, including success at school and in the labour market. Over the last few decades, for example, there has been growing evidence suggesting a strong association between short sleep duration and elevated mortality risks. Given the potential adverse effects of insufficient sleep on health, well-being and productivity, sleep deprivation also has far-reaching economic consequences. Hence, in order to raise awareness of the scale of insufficient sleep as a public-health issue, comparative quantitative figures need to be provided for policy- and decision-makers, as well as recommendations and potential solutions that can help tackle the problem.
506277ae84149b82d215f76bc4f7135400f65b1d
User-defined Interface Gestures: Dataset and Analysis
We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform an additional descriptive analysis and quantitative modeling exercise that provide additional insights into the results of the original study. To facilitate the use of the presented methodology by other researchers we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool.
aa6da71c3099cd394b9af663cfadce1ef77cb37b
Decision Support for Handling Mismatches between COTS Products and System Requirements
In the process of selecting commercial off-the-shelf (COTS) products, it is inevitable to encounter mismatches between COTS products and system requirements. Mismatches occur when COTS attributes do not exactly match our requirements. Many of these mismatches are resolved after selecting a COTS product in order to improve its fitness with the requirements. This paper proposes a decision support approach that aims at addressing COTS mismatches during and after the selection process. Our approach can be integrated with existing COTS selection methods at two stages: (1) When evaluating COTS candidates: our approach is used to estimate the anticipated fitness of the candidates if their mismatches are resolved. This helps to base our COTS selection decisions on the fitness that the COTS candidates will eventually have if selected. (2) After selecting a COTS product: the approach suggests alternative plans for resolving the most appropriate mismatches using suitable actions, such that the most important risk, technical, and resource constraints are met. A case study from the e-services domain is used to illustrate the method and to discuss its added value.
58bd0411bce7df96c44aa3579136eff873b56ac5
Multimodal Classification of Remote Sensing Images: A Review and Future Directions
Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even if this type of systems is generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future.
9b69889c7d762c04a2d13b112d0b37e4f719ca34
Interface engineering of highly efficient perovskite solar cells
Advancing perovskite solar cell technologies toward their theoretical power conversion efficiency (PCE) requires delicate control over the carrier dynamics throughout the entire device. By controlling the formation of the perovskite layer and careful choices of other materials, we suppressed carrier recombination in the absorber, facilitated carrier injection into the carrier transport layers, and maintained good carrier extraction at the electrodes. When measured via reverse bias scan, cell PCE is typically boosted to 16.6% on average, with the highest efficiency of ~19.3% in a planar geometry without antireflective coating. The fabrication of our perovskite solar cells was conducted in air and from solution at low temperatures, which should simplify manufacturing of large-area perovskite devices that are inexpensive and perform at high levels.
159f32e0d91ef919e94d9b6f1ef13ce9be62155c
Concatenate text embeddings for text classification
Text embedding has gained a lot of interests in text classification area. This paper investigates the popular neural document embedding method Paragraph Vector as a source of evidence in document ranking. We focus on the effects of combining knowledge-based with knowledge-free document embeddings for text classification task. We concatenate these two representations so that the classification can be done more accurately. The results of our experiments show that this approach achieves better performances on a popular dataset.
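The core idea, concatenating a knowledge-based and a knowledge-free document representation before classification, reduces to a few lines. The sketch below is a generic illustration with random placeholder vectors standing in for the two embedding sources; it is not the paper's actual Paragraph Vector or knowledge-base pipeline, and the dimensions and classifier choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_docs = 1000
emb_free = rng.normal(size=(n_docs, 100))   # stand-in for knowledge-free (e.g. Paragraph Vector) embeddings
emb_kb = rng.normal(size=(n_docs, 50))      # stand-in for knowledge-based embeddings
y = rng.integers(0, 2, n_docs)              # placeholder class labels

# Concatenate the two representations feature-wise.
X = np.concatenate([emb_free, emb_kb], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```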
8db81373f22957d430dddcbdaebcbc559842f0d8
Limits of predictability in human mobility.
A range of applications, from predicting the spread of human and electronic viruses to city planning and resource management in mobile communications, depend on our ability to foresee the whereabouts and mobility of individuals, raising a fundamental question: To what degree is human behavior predictable? Here we explore the limits of predictability in human dynamics by studying the mobility patterns of anonymized mobile phone users. By measuring the entropy of each individual's trajectory, we find a 93% potential predictability in user mobility across the whole user base. Despite the significant differences in the travel patterns, we find a remarkable lack of variability in predictability, which is largely independent of the distance users cover on a regular basis.
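The headline predictability figure comes from turning an entropy estimate into an upper bound on predictability. Assuming the Fano-type bound used in this line of work, S = H(Π_max) + (1 − Π_max) log2(N − 1), where S is the trajectory entropy, N the number of distinct locations visited, and H the binary entropy function, the bound can be solved numerically as sketched below; the entropy value and N are made-up inputs for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

def max_predictability(S, N):
    """Solve S = H(p) + (1 - p) * log2(N - 1) for p in (1/N, 1)."""
    def binary_entropy(p):
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    f = lambda p: binary_entropy(p) + (1 - p) * np.log2(N - 1) - S
    return brentq(f, 1.0 / N + 1e-9, 1 - 1e-9)

# Illustrative numbers only: an entropy of 0.8 bits over 60 visited locations
# yields a maximum predictability of roughly 0.93.
print(round(max_predictability(0.8, 60), 3))
```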
2bbe9735b81e0978125dad005656503fca567902
Reusing Hardware Performance Counters to Detect and Identify Kernel Control-Flow Modifying Rootkits
Kernel rootkits are formidable threats to computer systems. They are stealthy and can have unrestricted access to system resources. This paper presents NumChecker, a new virtual machine (VM) monitor based framework to detect and identify control-flow modifying kernel rootkits in a guest VM. NumChecker detects and identifies malicious modifications to a system call in the guest VM by measuring the number of certain hardware events that occur during the system call's execution. To automatically count these events, NumChecker leverages the hardware performance counters (HPCs), which exist in modern processors. By using HPCs, the checking cost is significantly reduced and the tamper-resistance is enhanced. We implement a prototype of NumChecker on Linux with the kernel-based VM. An HPC-based two-phase kernel rootkit detection and identification technique is presented and evaluated on a number of real-world kernel rootkits. The results demonstrate its practicality and effectiveness.
e7317fd7bd4f31e70351ca801f41d0040558ad83
Development and investigation of efficient artificial bee colony algorithm for numerical function optimization
Artificial bee colony algorithm (ABC), which is inspired by the foraging behavior of honey bee swarms, is a biologically-inspired optimization algorithm. It has been shown to be more effective than genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To address these insufficiencies, we propose an improved ABC algorithm called I-ABC. In I-ABC, the best-so-far solution, inertia weight and acceleration coefficients are introduced to modify the search process. Inertia weight and acceleration coefficients are defined as functions of the fitness. In addition, to further balance the search process, the modification forms of the employed bees and the onlooker ones differ in the second acceleration coefficient. Experiments show that, for most functions, I-ABC has a faster convergence speed and better performance than both ABC and the gbest-guided ABC (GABC). However, I-ABC could not always achieve the best solution for all optimization problems; in a few cases, it could not find better results than ABC or GABC. In order to inherit the strengths of ABC, GABC and I-ABC, a high-efficiency hybrid ABC algorithm, called PS-ABC, is proposed. PS-ABC owns the abilities of prediction and selection. Results show that PS-ABC has a faster convergence speed like I-ABC and better search ability than other relevant methods.
7401611a24f86dffb5b0cd39cf11ee55a4edb32b
Comparative Evaluation of Anomaly Detection Techniques for Sequence Data
We present a comparative evaluation of a large number of anomaly detection techniques on a variety of publicly available as well as artificially generated data sets. Many of these are existing techniques while some are slight variants and/or adaptations of traditional anomaly detection techniques to sequence data.
d7988bb266bc6653efa4b83dda102e1fc464c1f8
Flexible and Stretchable Electronics Paving the Way for Soft Robotics
Planar and rigid wafer-based electronics are intrinsically incompatible with curvilinear and deformable organisms. Recent development of organic and inorganic flexible and stretchable electronics enabled sensing, stimulation, and actuation of/for soft biological and artificial entities. This review summarizes the enabling technologies of soft sensors and actuators, as well as power sources based on flexible and stretchable electronics. Examples include artificial electronic skins, wearable biosensors and stimulators, electronics-enabled programmable soft actuators, and mechanically compliant power sources. Their potential applications in soft robotics are illustrated in the framework of a five-step human–robot interaction loop. Outlooks of future directions and challenges are provided at the end.
a3d638ab304d3ef3862d37987c3a258a24339e05
CycleGAN, a Master of Steganography
CycleGAN [Zhu et al., 2017] is one recent successful approach to learn a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to “hide” information about a source image into the images it generates in a nearly imperceptible, highfrequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN’s training procedure as training a generator of adversarial examples and demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks.
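The cyclic consistency requirement discussed above is a reconstruction penalty applied in both directions. The sketch below writes it out with NumPy, using placeholder functions G and F for the two generators; it illustrates the loss term itself, not CycleGAN's full adversarial training, and the loss weight is an assumed default.

```python
import numpy as np

def cycle_consistency_loss(x_batch, y_batch, G, F, lam=10.0):
    """L1 cycle loss: F(G(x)) should recover x and G(F(y)) should recover y.
    G maps domain X -> Y, F maps Y -> X; lam is the loss weight."""
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))
    return lam * (forward + backward)

# Toy placeholder generators (identity maps) just to show the call pattern.
G = lambda x: x
F = lambda y: y
x = np.random.rand(4, 64, 64, 3)
y = np.random.rand(4, 64, 64, 3)
print(cycle_consistency_loss(x, y, G, F))   # 0.0 for identity generators
```

The "hiding" behaviour described in the abstract arises because a generator can drive this loss to zero by encoding the source image in low-amplitude, high-frequency structure rather than by producing a faithful semantic mapping.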
5b54b6aa8288a1e9713293cec0178e8f3db3de2d
A Novel Variable Reluctance Resolver for HEV/EV Applications
In order to simplify the manufacturing process of variable reluctance (VR) resolvers for hybrid electric vehicle/electric vehicle (HEV/EV) applications, a novel VR resolver with nonoverlapping tooth-coil windings is proposed in this paper. A comparison of the winding configurations is first carried out between the existing and the proposed designs, followed by the description of the operating principle. Furthermore, the influence of actual application conditions is investigated by finite-element (FE) analyses, including operating speed and assembling eccentricity. In addition, identical stator and windings of the novel design can be employed in three resolvers of different rotor saliencies. The voltage difference among the three rotor combinations, as well as the detecting accuracy, is further investigated. Finally, prototypes are fabricated and tested to verify the analyses.
355f9782e9667c19144e137761a7d44977c7a5c2
A content analysis of depression-related tweets
This study examines depression-related chatter on Twitter to glean insight into social networking about mental health. We assessed themes of a random sample (n=2,000) of depression-related tweets (sent 4-11 to 5-4-14). Tweets were coded for expression of DSM-5 symptoms for Major Depressive Disorder (MDD). Supportive or helpful tweets about depression were the most common theme (n=787, 40%), closely followed by disclosing feelings of depression (n=625, 32%). Two-thirds of tweets revealed one or more symptoms for the diagnosis of MDD and/or communicated thoughts or ideas that were consistent with struggles with depression, after accounting for tweets that mentioned depression trivially. Health professionals can use our findings to tailor and target prevention and awareness messages to those Twitter users in need.
69393d1fe9d68b7aeb5dd57741be392d18385e13
A Meta-Analysis of Methodologies for Research in Knowledge Management, Organizational Learning and Organizational Memory: Five Years at HICSS
The Task Force on Organizational Memory presented a report at the Hawaii International Conference for System Sciences in January 1998. The report included perspectives on knowledge-oriented research, conceptual models for organizational memory, and research methodologies for researchers considering work in organizational memory. This paper builds on the ideas originally presented in the 1998 report by examining research presented at HICSS in the general areas of knowledge management, organizational memory and organizational learning in the five years since the original task force report.
c171faac12e0cf24e615a902e584a3444fcd8857
The Satisfaction With Life Scale.
5a14949bcc06c0ae9eecd29b381ffce22e1e75b2
Organizational Learning and Management Information Systems
The articles in this issue of DATA BASE were chosen by Anthony G. Hopwood, who is a professor of accounting and financial reporting at the London Graduate School of Business Studies. The articles contain important ideas, Professor Hopwood wrote, of significance to all interested in information systems, be they practitioners or academics. The authors, with their professional affiliations at the time, were Chris Argyris, Graduate School of Education, Harvard University; Bo Hedberg and Sten Jonsson, Department of Business Administration, University of Gothenburg; J. Frisco den Hertog, N.V. Philips' Gloeilampenfabrieken, The Netherlands; and Michael J. Earl, Oxford Centre for Management Studies. The articles appeared originally in Accounting, Organizations and Society, a publication of which Professor Hopwood is editor-in-chief. AOS exists to monitor emerging developments and to actively encourage new approaches and perspectives.
ae4bb38eaa8fecfddbc9afefa33188ba3cc2282b
Missing Data Estimation in High-Dimensional Datasets: A Swarm Intelligence-Deep Neural Network Approach
In this paper, we examine the problem of missing data in high-dimensional datasets by taking into consideration the Missing Completely at Random and Missing at Random mechanisms, as well as the Arbitrary missing pattern. Additionally, this paper employs a methodology based on Deep Learning and Swarm Intelligence algorithms in order to provide reliable estimates for missing data. The deep learning technique is used to extract features from the input data via an unsupervised learning approach by modeling the data distribution based on the input. This deep learning technique is then used as part of the objective function for the swarm intelligence technique in order to estimate the missing data after a supervised fine-tuning phase, by minimizing an error function based on the interrelationship and correlation between features in the dataset. The investigated methodology therefore has longer running times; however, the promising potential outcomes justify the trade-off. Also, basic knowledge of statistics is presumed.
349119a443223a45dabcda844ac41e37bd1abc77
Spatio-Temporal Join on Apache Spark
Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Apache Spark is one such open-source framework that is enjoying widespread adoption. Within this data space, it is important to note that most of the observational data (i.e., data collected by sensors, either moving or stationary) has a temporal component, or timestamp. In order to perform advanced analytics and gain insights, the temporal component becomes equally important as the spatial and attribute components. In this paper, we detail several variants of a spatial join operation that address spatial, temporal, and attribute-based joins. Our spatial join technique differs from other approaches in that it combines spatial, temporal, and attribute predicates in the join operator. In addition, our spatio-temporal join algorithm and implementation differ from others in that they run in a commercial off-the-shelf (COTS) application. The users of this functionality are assumed to be GIS analysts with little, if any, knowledge of the implementation details of spatio-temporal joins or distributed processing. They are comfortable using simple tools that do not provide the ability to tweak the configuration of the
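A minimal version of the kind of combined predicate described here can be expressed directly in Spark's DataFrame API. The sketch below joins two point datasets on a distance threshold, a time window, and an attribute match; the column names, thresholds, and the naive join-with-filter strategy are illustrative assumptions, not the paper's optimized implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spatio-temporal-join-sketch").getOrCreate()

left = spark.createDataFrame(
    [(1, 10.0, 20.0, 100, "ship"), (2, 11.0, 21.0, 200, "ship")],
    ["id", "x", "y", "ts", "kind"])
right = spark.createDataFrame(
    [(7, 10.1, 20.1, 105, "ship"), (8, 50.0, 50.0, 500, "buoy")],
    ["id", "x", "y", "ts", "kind"])

# Combined spatial + temporal + attribute predicate in a single join condition.
cond = (
    (F.pow(left.x - right.x, 2) + F.pow(left.y - right.y, 2) <= 1.0) &  # spatial: within distance 1
    (F.abs(left.ts - right.ts) <= 10) &                                  # temporal: within 10 time units
    (left.kind == right.kind)                                            # attribute match
)
joined = left.join(right, cond)
joined.show()
```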
0161e4348a7079e9c37434c5af47f6372d4b412d
Class segmentation and object localization with superpixel neighborhoods
We propose a method to identify and localize object classes in images. Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge.
02227c94dd41fe0b439e050d377b0beb5d427cda
Reading Digits in Natural Images with Unsupervised Feature Learning
Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks.
081651b38ff7533550a3adfc1c00da333a8fe86c
How transferable are features in deep neural networks?
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
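The transfer setting studied here, copying the first n layers and retraining the rest, corresponds to the standard freeze-and-fine-tune recipe. The sketch below shows that recipe on a torchvision ResNet; the choice of ResNet-18, the 10-class head, and freezing everything except the final block and classifier are assumptions for illustration, not the paper's experimental protocol.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a large base task (downloads ImageNet weights on first use).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the "general" early features; only layer4 and the classifier stay trainable.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

# Replace the classifier for a new target task with 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a fake batch.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```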
17facd6efab9d3be8b1681bb2c1c677b2cb02628
Transfer Feature Learning with Joint Distribution Adaptation
Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
1c734a14c2325cb76783ca0431862c7f04a69268
Deep Domain Confusion: Maximizing for Domain Invariance
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.
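The domain confusion idea is commonly instantiated as a maximum mean discrepancy (MMD) penalty between source and target activations at the adaptation layer. The short NumPy sketch below computes the simple linear-kernel form of MMD as an illustration; the actual loss, layer selection and training procedure in the paper are more involved, and the array shapes here are assumptions.

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared distance between the mean source and mean target activations.
    Both inputs are (n_samples, n_features) arrays from the adaptation layer."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, size=(128, 256))   # source-domain activations
tgt = rng.normal(loc=0.5, size=(128, 256))   # shifted target-domain activations
print("domain confusion (MMD) penalty:", round(linear_mmd(src, tgt), 3))

# In training, this penalty is added to the classification loss so the network
# learns features whose source and target statistics match.
```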
1e21b925b65303ef0299af65e018ec1e1b9b8d60
Unsupervised Cross-Domain Image Generation
We study the ecological use of analogies in AI. Specifically, we address the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given representation function f, which accepts inputs in either domains, would remain unchanged. Other than f, the training data is unsupervised and consist of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f preserving component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
6918fcbf5c5a86a7ffaf5650080505b95cd6d424
Hierarchical organization versus self-organization
In this paper the difference between hierarchical organization and self-organization is investigated. Organization is defined as a structure with a function. But how does the structure affect the function? I will start to examine this by running two simulations. The idea is to have a given network of agents which influence their neighbors. How the result differs across three different types of networks is then explored. In the first simulation, agents try to align with their neighbors. The second simulation is inspired by the ecosystem: agents take certain products from their neighbors and transform them into products their neighbors can use.
891d443dc003ed5f8762373395aacfa9ff895fd4
Moving object detection, tracking and classification for smart video surveillance
Video surveillance has long been in use to monitor security-sensitive areas such as banks, department stores, highways, crowded public places and borders. The advance in computing power, availability of large-capacity storage devices and high-speed network infrastructure paved the way for cheaper, multi-sensor video surveillance systems. Traditionally, the video outputs are processed online by human operators and are usually saved to tapes for later use only after a forensic event. The increase in the number of cameras in ordinary surveillance systems overloaded both the human operators and the storage devices with high volumes of data and made it infeasible to ensure proper monitoring of sensitive areas for long times. In order to filter out redundant information generated by an array of cameras, and increase the response time to forensic events, assisting the human operators with identification of important events in video by the use of "smart" video surveillance systems has become a critical requirement. The making of video surveillance systems "smart" requires fast, reliable and robust algorithms for moving object detection, classification, tracking and activity analysis. In this thesis, a smart visual surveillance system with real-time moving object detection, classification and tracking capabilities is presented. The system operates on both color and gray scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. The classification algorithm makes use of the shape of the detected objects and temporal tracking results to successfully categorize objects into pre-defined classes like human, human group and vehicle. The system is also able to detect the natural phenomenon fire in various scenes reliably. The proposed tracking algorithm successfully tracks video objects even in full occlusion cases.
38a08fbe5eabbd68db495fa38f4ee506d82095d4
IMGPU: GPU-Accelerated Influence Maximization in Large-Scale Social Networks
Influence Maximization aims to find the top-K influential individuals to maximize the influence spread within a social network, which remains an important yet challenging problem. Proven to be NP-hard, the influence maximization problem has attracted tremendous study. Though there exist basic greedy algorithms which may provide good approximation to the optimal result, they mainly suffer from low computational efficiency and excessively long execution time, limiting their application to large-scale social networks. In this paper, we present IMGPU, a novel framework to accelerate influence maximization by leveraging the parallel processing capability of the graphics processing unit (GPU). We first improve the existing greedy algorithms and design a bottom-up traversal algorithm with GPU implementation, which contains inherent parallelism. To best fit the proposed influence maximization algorithm with the GPU architecture, we further develop an adaptive K-level combination method to maximize the parallelism and reorganize the influence graph to minimize the potential divergence. We carry out comprehensive experiments with both real-world and synthetic social network traces and demonstrate that with the IMGPU framework we are able to outperform the state-of-the-art influence maximization algorithm by up to a factor of 60, and show potential to scale up to extraordinarily large-scale networks.
1459a6fc833e60ce0f43fe0fc9a48f8f74db77cc
Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization
We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on the recent variance reduction techniques for convex optimization but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works.
19229afbce15d62bcf8d3afe84a2d47a0b6f1939
Participatory design and "democratizing innovation"
Participatory design has become increasingly engaged in public spheres and everyday life and is no longer solely concerned with the workplace. This is not only a shift from work oriented productive activities to leisure and pleasurable engagements, but also a new milieu for production and innovation and entails a reorientation from "democracy at work" to "democratic innovation". What democratic innovation entails is currently defined by management and innovation research, which claims that innovation has been democratized through easy access to production tools and lead-users as the new experts driving innovation. We sketch an alternative "democratizing innovation" practice more in line with the original visions of participatory design based on our experience of running Malmö Living Labs - an open innovation milieu where new constellations, issues and ideas evolve from bottom-up long-term collaborations amongst diverse stakeholders. Two cases and controversial matters of concern are discussed. The fruitfulness of the concepts "Things" (as opposed to objects), "infrastructuring" (as opposed to projects) and "agonistic public spaces" (as opposed to consensual decision-making) are explored in relation to participatory innovation practices and democracy.
853331d5c2e4a5c29ff578c012bff7fec7ebd7bc
Study on Virtual Control of a Robotic Arm via a Myo Armband for the Self-Manipulation of a Hand Amputee
This paper proposes the use of a Myo armband device, which has electromyography (EMG) sensors for detecting electrical activity from different parts of the forearm muscles, as well as a gyroscope and an accelerometer. EMG sensors detect and provide very clear and important data from muscles compared with other types of sensors. The Myo armband sends data from the EMG, gyroscope, and accelerometer sensors to a computer via Bluetooth, and these data are used to control a virtual robotic arm built in Unity 3D. Virtual robotic arms based on EMG, gyroscope, and accelerometer sensors have different features. A robotic arm based on EMG is controlled by using the tension and relaxation of muscles. Consequently, a virtual robotic arm based on EMG is preferred for a hand amputee to a virtual robotic arm based on a gyroscope and an accelerometer.
21786e6ca30849f750656277573ee11fa4d469c5
Physical Demands of Different Positions in FA Premier League Soccer.
The purpose of this study was to evaluate the physical demands of English Football Association (FA) Premier League soccer for three different positional classifications (defender, midfielder and striker). Computerised time-motion video-analysis using the Bloomfield Movement Classification was undertaken on the purposeful movement (PM) performed by 55 players. Recognition of PM had a good inter-tester reliability strength of agreement (κ = 0.7277). Players spent 40.6 ± 10.0% of the match performing PM. Position had a significant influence on %PM time spent sprinting, running, shuffling, skipping and standing still (p < 0.05). However, position had no significant influence on the %PM time spent performing movement at low, medium, high or very high intensities (p > 0.05). Players spent 48.7 ± 9.2% of PM time moving in a directly forward direction, 20.6 ± 6.8% not moving in any direction and the remainder of PM time moving in backward, lateral, diagonal and arced directions. The players performed the equivalent of 726 ± 203 turns during the match, 609 ± 193 of these being of 0° to 90° to the left or right. Players were involved in the equivalent of 111 ± 77 on-the-ball movement activities per match, with no significant differences between the positions for total involvement in on-the-ball activity (p > 0.05). This study has provided an indication of the different physical demands of different playing positions in FA Premier League match-play through assessment of movements performed by players. Key points: Players spent ~40% of the match performing purposeful movement (PM). Position had a significant influence on %PM time spent performing each motion class except walking and jogging. Players performed >700 turns in PM, most of these being of 0°-90°. Strikers performed the most high to very high intensity activity and the most contact situations. Defenders also spent a significantly greater %PM time moving backwards than the other two positions. Different positions could benefit from more specific conditioning programs.
76737d93659b31d5a6ce07a4e9e5107bc0c39adf
A CNS-permeable Hsp90 inhibitor rescues synaptic dysfunction and memory loss in APP-overexpressing Alzheimer’s mouse model via an HSF1-mediated mechanism
Induction of neuroprotective heat-shock proteins via pharmacological Hsp90 inhibitors is currently being investigated as a potential treatment for neurodegenerative diseases. Two major hurdles for therapeutic use of Hsp90 inhibitors are systemic toxicity and limited central nervous system permeability. We demonstrate here that chronic treatment with a proprietary Hsp90 inhibitor compound (OS47720) not only elicits a heat-shock-like response but also offers synaptic protection in symptomatic Tg2576 mice, a model of Alzheimer’s disease, without noticeable systemic toxicity. Despite a short half-life of OS47720 in mouse brain, a single intraperitoneal injection induces rapid and long-lasting (>3 days) nuclear activation of the heat-shock factor, HSF1. Mechanistic study indicates that the remedial effects of OS47720 depend upon HSF1 activation and the subsequent HSF1-mediated transcriptional events on synaptic genes. Taken together, this work reveals a novel role of HSF1 in synaptic function and memory, which likely occurs through modulation of the synaptic transcriptome.
d65c2cbc0980d0840b88b569516ae9c277d9d200
Credit card fraud detection using machine learning techniques: A comparative analysis
Financial fraud is an ever-growing menace with far-reaching consequences in the financial industry. Data mining has played an imperative role in the detection of credit card fraud in online transactions. Credit card fraud detection, which is a data mining problem, becomes challenging due to two major reasons: first, the profiles of normal and fraudulent behaviours change constantly, and secondly, credit card fraud data sets are highly skewed. The performance of fraud detection in credit card transactions is greatly affected by the sampling approach on the dataset, the selection of variables and the detection technique(s) used. This paper investigates the performance of naïve Bayes, k-nearest neighbor and logistic regression on highly skewed credit card fraud data. The dataset of credit card transactions is sourced from European cardholders and contains 284,807 transactions. A hybrid technique of under-sampling and over-sampling is carried out on the skewed data. The three techniques are applied on the raw and preprocessed data. The work is implemented in Python. The performance of the techniques is evaluated based on accuracy, sensitivity, specificity, precision, Matthews correlation coefficient and balanced classification rate. The results show that the optimal accuracies for the naïve Bayes, k-nearest neighbor and logistic regression classifiers are 97.92%, 97.69% and 54.86%, respectively. The comparative results show that k-nearest neighbor performs better than the naïve Bayes and logistic regression techniques.
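The evaluation pipeline described, rebalance a highly skewed dataset and compare naïve Bayes, k-nearest neighbor and logistic regression, can be sketched with scikit-learn as below. The synthetic data, the 1%-fraud class ratio and the simple random under/over-sampling are illustrative stand-ins for the European cardholder dataset and the paper's exact hybrid scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

# Skewed two-class data standing in for the credit card transactions.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Hybrid resampling of the training set: under-sample the majority class,
# over-sample the minority class, then recombine.
maj, mino = X_tr[y_tr == 0], X_tr[y_tr == 1]
maj_down = resample(maj, n_samples=len(mino) * 5, replace=False, random_state=0)   # under-sampling
mino_up = resample(mino, n_samples=len(mino) * 5, replace=True, random_state=0)    # over-sampling
X_bal = np.vstack([maj_down, mino_up])
y_bal = np.array([0] * len(maj_down) + [1] * len(mino_up))

for name, clf in [("naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_bal, y_bal)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 4))
```

In practice one would report sensitivity, specificity and Matthews correlation alongside accuracy, since plain accuracy is misleading on skewed data.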
fc4bd8f4db91bbb4053b8174544f79bf67b96b3b
Bangladeshi Number Plate Detection: Cascade Learning vs. Deep Learning
This work investigated two different machine learning techniques, Cascade Learning and Deep Learning, to find out which algorithm performs better at detecting the number plates of vehicles registered in Bangladesh. To do this, we created a dataset of about 1000 images collected from a security camera of Independent University, Bangladesh. Each image in the dataset was then labelled manually by selecting the Region of Interest (ROI). In the Cascade Learning approach, a sliding window technique was used to detect objects, and a cascade classifier was then employed to determine whether the window contained an object of interest or not. In the Deep Learning approach, the CIFAR-10 dataset was used to pre-train a 15-layer Convolutional Neural Network (CNN). Using this pretrained CNN, a Regions with CNN (R-CNN) detector was then trained using our dataset. We found that the Deep Learning approach (maximum accuracy 99.60% using 566 training images) outperforms the detector constructed using Cascade classifiers (maximum accuracy 59.52% using 566 positive and 1022 negative training images) for 252 test images.
049c15a106015b287fec6fc3e8178d4c3f4adf67
Combining Poisson singular integral and total variation prior models in image restoration
In this paper, a novel Bayesian image restoration method based on a combination of priors is presented. It is well known that the Total Variation (TV) image prior preserves edge structures while imposing smoothness on the solutions. However, it tends to oversmooth textured areas. To alleviate this problem we propose to combine the TV and the Poisson Singular Integral (PSI) models, which, as we will show, preserves the image textures. The PSI prior depends on a parameter that controls the shape of the filter. A study on the behavior of the filter as a function of this parameter is presented. Our restoration model utilizes a bound for the TV image model based on the majorization–minimization principle, and performs maximum a posteriori Bayesian inference. In order to assess the performance of the proposed approach, in the experimental section we compare it with other restoration methods.
ebf35073e122782f685a0d6c231622412f28a53b
A High-Quality Denoising Dataset for Smartphone Cameras
The last decade has seen an astronomical shift from imaging with DSLR and point-and-shoot cameras to imaging with smartphone cameras. Due to the small aperture and sensor size, smartphone images have notably more noise than their DSLR counterparts. While denoising for smartphone images is an active research area, the research community currently lacks a denoising image dataset representative of real noisy images from smartphone cameras with high-quality ground truth. We address this issue in this paper with the following contributions. We propose a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras. Using this procedure, we have captured a dataset - the Smartphone Image Denoising Dataset (SIDD) - of ~30,000 noisy images from 10 scenes under different lighting conditions using five representative smartphone cameras and generated their ground truth images. We used this dataset to benchmark a number of denoising algorithms. We show that CNN-based methods perform better when trained on our high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data.
156e7730b8ba8a08ec97eb6c2eaaf2124ed0ce6e
THE CONTROL OF THE FALSE DISCOVERY RATE IN MULTIPLE TESTING UNDER DEPENDENCY By
Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased.
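For reference, the step-up procedure being discussed and its conservative modification for arbitrary dependence can be implemented in a few lines. This is a generic NumPy sketch of the Benjamini–Hochberg rule with an optional c(m) = sum_{i=1}^m 1/i correction factor, written from the standard description rather than taken from the paper; the example p-values are made up.

```python
import numpy as np

def fdr_stepup(pvals, q=0.05, dependence_correction=False):
    """Return a boolean array marking the hypotheses rejected at FDR level q.
    With dependence_correction=True the thresholds are divided by
    c(m) = sum_{i=1}^m 1/i, the conservative modification for arbitrary dependency."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    c_m = np.sum(1.0 / np.arange(1, m + 1)) if dependence_correction else 1.0
    thresholds = q * np.arange(1, m + 1) / (m * c_m)
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest index meeting its threshold
        reject[order[:k + 1]] = True              # reject that p-value and all smaller ones
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(fdr_stepup(pvals, q=0.05))
print(fdr_stepup(pvals, q=0.05, dependence_correction=True))
```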
7f47767d338eb39664844c94833b52ae73d964ef
Gesture Recognition with a Convolutional Long Short-Term Memory Recurrent Neural Network
Inspired by the adequacy of convolutional neural networks in implicit extraction of visual features and the efficiency of Long Short-Term Memory Recurrent Neural Networks in dealing with long-range temporal dependencies, we propose a Convolutional Long Short-Term Memory Recurrent Neural Network (CNN-LSTM) for the problem of dynamic gesture recognition. The model is able to successfully learn gestures varying in duration and complexity and proves to be a significant base for further development. Finally, the new TsironiGR dataset of gesture commands for human-robot interaction is presented for the evaluation of the CNN-LSTM.
a9b533329845d5d1a31c3ff2821ce9865c440158
Mirroring others' emotions relates to empathy and interpersonal competence in children
The mirror neuron system (MNS) has been proposed to play an important role in social cognition by providing a neural mechanism by which others' actions, intentions, and emotions can be understood. Here functional magnetic resonance imaging was used to directly examine the relationship between MNS activity and two distinct indicators of social functioning in typically-developing children (aged 10.1 years ± 7 months): empathy and interpersonal competence. Reliable activity in pars opercularis, the frontal component of the MNS, was elicited by observation and imitation of emotional expressions. Importantly, activity in this region (as well as in the anterior insula and amygdala) was significantly and positively correlated with established behavioral measures indexing children's empathic behavior (during both imitation and observation) and interpersonal skills (during imitation only). These findings suggest that simulation mechanisms and the MNS may indeed be relevant to social functioning in everyday life during typical human development.
64887b38c382e331cd2b045f7a7edf05f17586a8
Genetic and environmental influences on sexual orientation and its correlates in an Australian twin sample.
We recruited twins systematically from the Australian Twin Registry and assessed their sexual orientation and 2 related traits: childhood gender nonconformity and continuous gender identity. Men and women differed in their distributions of sexual orientation, with women more likely to have slight-to-moderate degrees of homosexual attraction, and men more likely to have high degrees of homosexual attraction. Twin concordances for nonheterosexual orientation were lower than in prior studies. Univariate analyses showed that familial factors were important for all traits, but were less successful in distinguishing genetic from shared environmental influences. Only childhood gender nonconformity was significantly heritable for both men and women. Multivariate analyses suggested that the causal architecture differed between men and women, and, for women, provided significant evidence for the importance of genetic factors to the traits' covariation.
4f2d62eaf7559b91b97bab3076fcd5f306da57f2
A texture-based method for modeling the background and detecting moving objects
This paper presents a novel and efficient texture-based method for modeling the background and detecting moving objects from a video sequence. Each pixel is modeled as a group of adaptive local binary pattern histograms that are calculated over a circular region around the pixel. The approach provides us with many advantages compared to the state-of-the-art. Experimental results clearly justify our model.
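To make the pixel model concrete: the local binary pattern histograms the method builds per pixel region can be computed with scikit-image, as in the rough sketch below. The region size, the LBP parameters and the single-histogram-per-block simplification are assumptions for illustration, not the adaptive multi-histogram scheme of the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histogram(gray_frame, cy, cx, radius=9, P=8, R=2):
    """LBP histogram over a square region centred at (cy, cx) of a grayscale frame."""
    patch = gray_frame[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def histogram_intersection(h1, h2):
    """Similarity used to decide whether the current region matches the background model."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
background = rng.integers(0, 256, (120, 160)).astype(np.uint8)
current = background.copy()
current[40:80, 60:100] = 255                      # simulate a moving object

h_bg = block_lbp_histogram(background, 60, 80)
h_cur = block_lbp_histogram(current, 60, 80)
print("similarity:", round(histogram_intersection(h_bg, h_cur), 3))  # low value -> foreground
```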
23ddae93514a47b56dcbeed80e67fab62e8b5ec9
Retro: Targeted Resource Management in Multi-tenant Distributed Systems
In distributed systems shared by multiple tenants, effective resource management is an important pre-requisite to providing quality of service guarantees. Many systems deployed today lack performance isolation and experience contention, slowdown, and even outages caused by aggressive workloads or by improperly throttled maintenance tasks such as data replication. In this work we present Retro, a resource management framework for shared distributed systems. Retro monitors per-tenant resource usage both within and across distributed systems, and exposes this information to centralized resource management policies through a high-level API. A policy can shape the resources consumed by a tenant using Retro’s control points, which enforce sharing and ratelimiting decisions. We demonstrate Retro through three policies providing bottleneck resource fairness, dominant resource fairness, and latency guarantees to high-priority tenants, and evaluate the system across five distributed systems: HBase, Yarn, MapReduce, HDFS, and Zookeeper. Our evaluation shows that Retro has low overhead, and achieves the policies’ goals, accurately detecting contended resources, throttling tenants responsible for slowdown and overload, and fairly distributing the remaining cluster capacity.
8f81d1854da5f6254780f00966d0c00d174b9881
Significant Change Spotting for Periodic Human Motion Segmentation of Cleaning Tasks Using Wearable Sensors
The proportion of the aging population is rapidly increasing around the world, which will cause stress on society and healthcare systems. In recent years, advances in technology have created new opportunities for automatic activities of daily living (ADL) monitoring to improve the quality of life and provide adequate medical service for the elderly. Such automatic ADL monitoring requires reliable ADL information on a fine-grained level, especially for the status of interaction between body gestures and the environment in the real-world. In this work, we propose a significant change spotting mechanism for periodic human motion segmentation during cleaning task performance. A novel approach is proposed based on the search for a significant change of gestures, which can manage critical technical issues in activity recognition, such as continuous data segmentation, individual variance, and category ambiguity. Three typical machine learning classification algorithms are utilized for the identification of the significant change candidate, including a Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Naive Bayesian (NB) algorithm. Overall, the proposed approach achieves 96.41% in the F1-score by using the SVM classifier. The results show that the proposed approach can fulfill the requirement of fine-grained human motion segmentation for automatic ADL monitoring.
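The classification stage described, deciding for each window of wearable-sensor features whether it contains a significant-change candidate, can be prototyped with the same three classifier families. The sketch below uses synthetic windowed features and scikit-learn's SVM, k-NN and naive Bayes implementations; the features, window length and labels are placeholders rather than the study's real accelerometer data or segmentation logic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(signal, win=50):
    """Mean, std, min, max per non-overlapping window of a 1-D sensor signal."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win, win)]
    return np.array([[w.mean(), w.std(), w.min(), w.max()] for w in windows])

# Synthetic accelerometer-like stream with placeholder per-window labels
# (1 = window contains a significant change candidate, 0 = otherwise).
signal = rng.normal(0, 1, 10000)
X = window_features(signal)
labels = rng.integers(0, 2, size=len(X))

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("NB", GaussianNB())]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(name, "mean accuracy:", round(scores.mean(), 3))
```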