Multi-document Summarization Using Support Vector Regression
Most multi-document summarization systems follow the extractive framework based on various features. As more and more sophisticated features are designed, the reasonable combination of features becomes a challenge. Usually the features are combined by a linear function whose weights are tuned manually. In this work, a Support Vector Regression (SVR) model is used to combine the features automatically and score the sentences. Two important problems are inevitably involved. The first is how to acquire the training data: several automatic generation methods are introduced based on standard reference summaries generated by humans. The other indispensable problem in applying SVR is feature selection, where various features are picked out and combined into different feature sets to be tested. Using the DUC 2005 and 2006 data sets, comprehensive experiments are conducted over various SVR kernels and feature sets. The trained SVR model is then used in the main task of DUC 2007 to produce the extractive summaries.
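The two problems the abstract raises, labeling training sentences and combining features, can be sketched minimally. The snippet below is a hypothetical illustration, not the paper's system: training targets come from unigram overlap with a human reference (a crude stand-in for ROUGE-based labels), and sentences are scored by a linear combination of toy features whose weights an SVR would learn from those (features, target) pairs.

```python
def overlap_score(sentence, reference):
    """Fraction of sentence unigrams found in the reference summary.
    A crude stand-in for the ROUGE-style targets used to label training data."""
    s = set(sentence.lower().split())
    r = set(reference.lower().split())
    return len(s & r) / len(s) if s else 0.0

def sentence_features(sentence, position, doc_len):
    """Toy feature vector: relative position and capped sentence length."""
    words = sentence.split()
    return [1.0 - position / doc_len, min(len(words) / 25.0, 1.0)]

def score(features, weights):
    """Linear feature combination; an SVR trained on
    (features, overlap_score) pairs would supply the weights."""
    return sum(f * w for f, w in zip(features, weights))
```

With learned weights in hand, summary extraction reduces to ranking sentences by `score` and taking the top ones under a length budget.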
A smooth-walled spline-profile horn as an alternative to the corrugated horn for wide band millimeter-wave applications
At millimeter-wave frequencies, corrugated horns can be difficult and expensive to manufacture. As an alternative we present here the results of a theoretical and measurement study of a smooth-walled spline-profile horn for specific application in the 80-120 GHz band. While about 50% longer than its corrugated counterpart, the smooth-walled horn is shown to give improved performance across the band as well as being much easier to manufacture.
An Analysis and Comparison of CDN-P2P-hybrid Content Delivery System and Model
In order to fully utilize the stable edge transmission capability of CDN and the scalable last-mile transmission capability of P2P, while avoiding ISP-unfriendly policies and unrestricted use of P2P delivery, recent research has begun to focus on CDN-P2P-hybrid architectures and ISP-friendly P2P content delivery technology. In this paper, we first survey CDN-P2P-hybrid architecture technology, including current industry and academic efforts in this field. Second, we compare CDN and P2P. We then explore and analyze the main issues, including hybrid overlay routing and hybrid playback buffering. After that, we focus on the analysis and design of CDN-P2P-hybrid models: we compare the tightly-coupled hybrid model with the loosely-coupled hybrid model, and propose several common models that need further study. Finally, we analyze prospective research directions and outline our future work. Keywords: CDN, P2P, P2P Streaming, CDN-P2P-hybrid Architecture, Live Streaming, VoD Streaming. Note: This work is supported by the 2009 National Science Foundation of China (60903164): Research on Model and Algorithm of New-Generation Controllable, Trustworthy, Network-friendly CDN-P2P hybrid Content Delivery.
Optimality principles in sensorimotor control
The sensorimotor system is a product of evolution, development, learning and adaptation—which work on different time scales to improve behavioral performance. Consequently, many theories of motor function are based on 'optimal performance': they quantify task goals as cost functions, and apply the sophisticated tools of optimal control theory to obtain detailed behavioral predictions. The resulting models, although not without limitations, have explained more empirical phenomena than any other class. Traditional emphasis has been on optimizing desired movement trajectories while ignoring sensory feedback. Recent work has redefined optimality in terms of feedback control laws, and focused on the mechanisms that generate behavior online. This approach has allowed researchers to fit previously unrelated concepts and observations into what may become a unified theoretical framework for interpreting motor function. At the heart of the framework is the relationship between high-level goals, and the real-time sensorimotor control strategies most suitable for accomplishing those goals.
New Developments in Space Syntax Software
The Spatial Positioning tool (SPOT) is isovist-based spatial analysis software, written in Java as a stand-alone program. SPOT differs from regular Space Syntax software in that it can produce integration graphs and intervisibility graphs from a selection of positions. The concept of the software originates from a series of field studies on building interiors strongly influenced by organizations and social groups. We have developed SPOT as a prototype. Basic SPOT operations use selections of positions and the creation of isovist sets. The sets are color-coded and layered; layers can be turned on or off to control their visibility. At this point, SPOT produces two graphs: the isovist overlap graph, which shows intervisibility between overlapping isovist fields, and the network integration analysis built on visibility relations. The program aims to serve as a fast and interactive sketch tool as well as a precise analysis tool. Data, images, and diagrams can be exported for use with other CAD or illustration programs. The first stage of development is to have a functioning prototype implementing all the basic algorithms and minimal basic functionality with respect to user interaction.
A collaborative filtering approach to ad recommendation using the query-ad click graph
Search engine logs contain a large amount of click-through data that can be leveraged as soft indicators of relevance. In this paper we address the sponsored search retrieval problem which is to find and rank relevant ads to a search query. We propose a new technique to determine the relevance of an ad document for a search query using click-through data. The method builds on a collaborative filtering approach to discover new ads related to a query using a click graph. It is implemented on a graph with several million edges and scales to larger sizes easily. The proposed method is compared to three different baselines that are state-of-the-art for a commercial search engine. Evaluations on editorial data indicate that the model discovers many new ads not retrieved by the baseline methods. The ads from the new approach are on average of better quality than the baselines.
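The click-graph propagation idea can be sketched as a two-step walk on a toy bipartite graph: ads clicked for the query lead to co-clicking queries, whose other ads become candidates. This is an illustrative simplification, not the paper's algorithm, and the graph data in the example are invented.

```python
from collections import defaultdict

def related_ads(click_graph, query):
    """Two-step walk on a bipartite query-ad click graph.
    `click_graph` maps each query to the list of ads clicked for it.
    Candidate ads are scored by the number of connecting paths."""
    ad_to_queries = defaultdict(set)
    for q, ads in click_graph.items():
        for ad in ads:
            ad_to_queries[ad].add(q)
    seen = set(click_graph.get(query, []))
    scores = defaultdict(int)
    for ad in seen:                      # step 1: ads already clicked
        for q2 in ad_to_queries[ad]:     # step 2: co-clicking queries
            if q2 == query:
                continue
            for ad2 in click_graph[q2]:  # their other ads are candidates
                if ad2 not in seen:
                    scores[ad2] += 1
    return sorted(scores, key=scores.get, reverse=True)
```

On a graph with millions of edges, the same walk is implemented with sparse adjacency structures, but the scoring logic is the same shape.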
Example-Based Methods for Estimating 3D Human Pose from Silhouette Image using Approximate Chamfer Distance and Kernel Subspace
Toward More Efficient NoC Arbitration: A Deep Reinforcement Learning Approach
The network-on-chip (NoC) is a critical resource shared by various on-chip components. An efficient NoC arbitration policy is crucial for providing global fairness and improving system performance. In this preliminary work, we demonstrate the idea of utilizing deep reinforcement learning to guide the design of more efficient NoC arbitration policies. We cast arbitration as a self-learning decision-making process. Results show that the deep reinforcement learning approach can effectively reduce packet latency and has potential for identifying interesting features that could be utilized in more practical hardware designs.
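As a rough, hypothetical sketch of the arbitration-as-learning idea: the paper uses a deep network, but a tabular Q-learner on a toy two-port arbiter makes the decision process concrete. States, arrival rates, and rewards below are invented for illustration.

```python
import random

def q_update(q, state, action, reward, next_state, actions=(0, 1),
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning update; `q` maps (state, action) to a value.
    The paper's agent replaces this table with a deep network."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def train_arbiter(steps=500, seed=0):
    """Toy 2-port arbiter: packets arrive randomly, the agent grants one
    port per cycle, and the reward penalizes total queue occupancy."""
    rng = random.Random(seed)
    q, queues = {}, [0, 0]
    for _ in range(steps):
        state = (min(queues[0], 3), min(queues[1], 3))
        if rng.random() < 0.1:                       # epsilon-greedy exploration
            action = rng.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
        if queues[action]:
            queues[action] -= 1                      # the grant serves one packet
        for p in (0, 1):
            queues[p] += rng.random() < 0.4          # Bernoulli arrivals
        next_state = (min(queues[0], 3), min(queues[1], 3))
        q_update(q, state, action, -sum(queues), next_state)
    return q
```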
TCA: An Efficient Two-Mode Meta-Heuristic Algorithm for Combinatorial Test Generation (T)
Covering arrays (CAs) are often used as test suites for combinatorial interaction testing to discover interaction faults in real-world systems. Most real-world systems involve constraints, so improving algorithms for covering array generation (CAG) with constraints is beneficial. Two popular methods for constrained CAG are greedy construction and meta-heuristic search. Recently, a meta-heuristic framework called two-mode local search has shown great success in solving classic NP-hard problems. We are interested in whether this method is also powerful for the constrained CAG problem. This work proposes an efficient two-mode meta-heuristic framework for constrained CAG and presents a new meta-heuristic algorithm called TCA. Experiments show that TCA significantly outperforms state-of-the-art solvers on 3-way constrained CAG. Further experiments demonstrate that TCA also performs much better than its competitors on 2-way constrained CAG.
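The cost function that meta-heuristic CAG search typically minimizes is the number of still-uncovered value tuples. It can be sketched for the 2-way unconstrained case; this is a simplification, since TCA additionally handles constraints by restricting the count to valid tuples.

```python
from itertools import combinations

def uncovered_pairs(suite, num_values):
    """Count 2-way parameter-value pairs not covered by the test suite.
    `suite` is a list of tests; each test assigns one value per parameter.
    `num_values[i]` is the domain size of parameter i. Local search moves
    are accepted or rejected based on how they change this count."""
    k = len(num_values)
    covered = {((i, t[i]), (j, t[j]))
               for t in suite for i, j in combinations(range(k), 2)}
    total = 0
    for i, j in combinations(range(k), 2):
        for vi in range(num_values[i]):
            for vj in range(num_values[j]):
                if ((i, vi), (j, vj)) not in covered:
                    total += 1
    return total
```

A covering array is exactly a suite for which this count reaches zero.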
Methods and protocols of modern solid-phase peptide synthesis.
The purpose of this article is to delineate strategic considerations and provide practical procedures to enable non-experts to synthesize peptides with a reasonable chance of success. This article is not encyclopedic but rather devoted to the Fmoc/tBu approach of solid phase peptide synthesis (SPPS), which is now the most commonly used methodology for the production of peptides. The principles of SPPS with a review of linkers and supports currently employed are presented. Basic concepts for the different steps of SPPS such as anchoring, deprotection, coupling reaction and cleavage are all discussed along with the possible problem of aggregation and side-reactions. Essential protocols for the synthesis of fully deprotected peptides are presented including resin handling, coupling, capping, Fmoc-deprotection, final cleavage and disulfide bridge formation.
Understanding the Intention of Information Contribution to Online Feedback Systems from Social Exchange and Motivation Crowding Perspectives
The online feedback system (OFS) has been touted as an effective artifact for electronic word-of-mouth (EWOM). Accumulating sufficiently detailed consumption information in the OFS is essential to its success. Yet past research has focused on the effects of OFS on building trust and promoting sales, and little knowledge about information provision to OFS has been developed. This study attempts to fill this gap by developing and testing a theoretical model to identify the possible antecedents of consumers' intention to contribute information to OFS. The model employs social exchange theory to identify benefit and cost factors influencing consumer intention, and motivation crowding theory to explore the moderating effects of environmental interventions that are embodied in OFS. Our preliminary results in general provide empirical support for the model. Practical implications are offered to OFS designers for system customization.
Paragon: QoS-aware scheduling for heterogeneous datacenters
Large-scale datacenters (DCs) host tens of thousands of diverse applications each day. However, interference between co-located workloads and the difficulty of matching applications to one of the many available hardware platforms can degrade performance, violating the quality-of-service (QoS) guarantees that many cloud workloads require. While previous work has identified the impact of heterogeneity and interference, existing solutions are computationally intensive, cannot be applied online, and do not scale beyond a few applications. We present Paragon, an online and scalable DC scheduler that is heterogeneity- and interference-aware. Paragon is derived from robust analytical methods: instead of profiling each application in detail, it leverages information the system already has about applications it has previously seen. It uses collaborative filtering techniques to quickly and accurately classify an unknown, incoming workload with respect to heterogeneity and interference in multiple shared resources, by identifying similarities to previously scheduled applications. The classification allows Paragon to greedily schedule applications in a manner that minimizes interference and maximizes server utilization. Paragon scales to tens of thousands of servers with marginal scheduling overheads in terms of time or state. We evaluate Paragon with a wide range of workload scenarios, on both small and large-scale systems, including 1,000 servers on EC2. For a 2,500-workload scenario, Paragon enforces performance guarantees for 91% of applications, while significantly improving utilization. In comparison, heterogeneity-oblivious, interference-oblivious, and least-loaded schedulers provide similar guarantees for only 14%, 11%, and 3% of workloads, respectively. The differences are more striking in oversubscribed scenarios where resource efficiency is more critical.
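The classify-by-similarity step can be caricatured with nearest-neighbour cosine matching. Note that Paragon actually uses SVD-based collaborative filtering, so this is only a minimal stand-in, and the workload signatures and profile labels are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two signature vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv) if nu and nv else 0.0

def classify(incoming, known):
    """Label an incoming workload with the profile of its most similar
    previously seen workload. `known` maps workload names to
    (signature_vector, profile_label) pairs -- hypothetical data, standing
    in for Paragon's interference/heterogeneity classification."""
    best = max(known, key=lambda name: cosine(incoming, known[name][0]))
    return known[best][1]
```

The appeal of this family of methods is that a short profiling run of the new workload suffices, because the heavy characterization was already paid for on previously scheduled applications.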
Bubble-up: Increasing utilization in modern warehouse scale computers via sensible co-locations
As much of the world's computing continues to move into the cloud, the overprovisioning of computing resources to ensure the performance isolation of latency-sensitive tasks, such as web search, in modern datacenters is a major contributor to low machine utilization. The inability to accurately predict performance degradation due to contention for shared resources on multicore systems has led to the heavy-handed approach of simply disallowing the co-location of high-priority, latency-sensitive tasks with other tasks. Making this precise prediction has been a challenging and unsolved problem. In this paper, we present Bubble-Up, a characterization methodology that enables accurate prediction of the performance degradation that results from contention for shared resources in the memory subsystem. By using a bubble to apply a tunable amount of "pressure" to the memory subsystem on processors in production datacenters, our methodology can predict the performance interference between co-located applications to within 1% to 2% of the actual performance degradation. Using this methodology to arrive at "sensible" co-locations in Google's production datacenters with real-world large-scale applications, we can improve the utilization of a 500-machine cluster by 50% to 90% while guaranteeing a high quality of service for latency-sensitive applications.
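The core prediction step, reading a prospective co-runner's pressure score off a previously measured sensitivity curve, can be sketched with simple linear interpolation. The curve values below are invented, not measurements from the paper.

```python
def predict_degradation(sensitivity_curve, pressure):
    """Interpolate an application's sensitivity curve
    (bubble size -> % slowdown) at a co-runner's pressure score,
    in the spirit of the Bubble-Up methodology."""
    pts = sorted(sensitivity_curve)
    if pressure <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if pressure <= x1:
            return y0 + (y1 - y0) * (pressure - x0) / (x1 - x0)
    return pts[-1][1]          # beyond the largest measured bubble
```

A scheduler can then permit a co-location only when the predicted slowdown stays inside the task's QoS budget.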
Factored recurrent neural network language model in TED lecture transcription
In this study, we extend recurrent neural network-based language models (RNNLMs) by explicitly integrating morphological and syntactic factors (or features). Our proposed RNNLM, called a factored RNNLM, is expected to enhance standard RNNLMs. A number of experiments carried out on top of a state-of-the-art LVCSR system show that the factored RNNLM improves performance as measured by perplexity and word error rate. On the IWSLT TED test data sets, absolute word error rate reductions over the RNNLM and n-gram LM are 0.4–0.8 points.
Discriminative Multi-View Interactive Image Re-Ranking
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions with multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in a multi-view learning scheme to exploit their complementarity. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding via the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.
PTBI: An efficient privacy-preserving biometric identification based on perturbed term in the cloud
Biometric identification plays an important role in user authentication. For efficiency and economic savings, biometric data owners are motivated to outsource biometric data and identification tasks to a third party, which however introduces potential threats to users' privacy. In this paper, we propose a new privacy-preserving biometric identification scheme which relieves the database owner of the heavy computation burden. In the proposed scheme, we design concrete biometric data encryption and matching algorithms, and introduce perturbation terms into each biometric record. A thorough analysis indicates that our schemes are secure, and the ultimate scheme offers a high level of privacy protection. In addition, performance evaluations via extensive simulations demonstrate our schemes' efficiency.
An Energy-Efficient Architecture for the Internet of Things (IoT)
The Internet of things (IoT) is a smart technology that connects anything anywhere at any time. This ubiquitous nature of IoT is responsible for draining energy from its resources, so the energy efficiency of IoT resources has emerged as a major research issue. In this paper, an energy-efficient architecture for IoT is proposed, consisting of three layers: sensing and control, information processing, and presentation. The architectural design allows the system to predict the sleep interval of sensors based on their remaining battery level, their previous usage history, and the quality of information required for a particular application. The predicted value can be used to boost the utilization of cloud resources by reprovisioning the allocated resources when the corresponding sensor nodes are in sleep mode. This mechanism allows the energy-efficient utilization of all IoT resources. The experimental results show a significant amount of energy saving for sensor nodes and improved utilization of cloud resources.
Tight bounds for rumor spreading in graphs of a given conductance
We study the connection between the rate at which a rumor spreads throughout a graph and the conductance of the graph, a standard measure of a graph's expansion properties. We show that for any n-node graph with conductance φ, the classical PUSH-PULL algorithm distributes a rumor to all nodes of the graph in O(φ⁻¹ log n) rounds with high probability (w.h.p.). This bound improves a recent result of Chierichetti, Lattanzi, and Panconesi [6], and it is tight in the sense that there exist graphs where Ω(φ⁻¹ log n) rounds of the PUSH-PULL algorithm are required to distribute a rumor w.h.p. We also explore the PUSH and the PULL algorithms, and derive conditions that are both necessary and sufficient for the above upper bound to hold for those algorithms as well. An interesting finding is that every graph contains a node such that the PULL algorithm takes O(φ⁻¹ log n) rounds w.h.p. to distribute a rumor started at that node. In contrast, there are graphs where the PUSH algorithm requires significantly more rounds for any start node.
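A direct simulation of the PUSH-PULL process makes the round count concrete. This toy sketch follows the standard protocol description (each round, every node contacts one uniformly random neighbour), not any code from the paper.

```python
import random

def push_pull_rounds(adj, start, seed=0):
    """Simulate classical PUSH-PULL rumor spreading.
    `adj` maps each node to its neighbour list. Each round, every node
    contacts a random neighbour: informed nodes push the rumor to the
    contact, uninformed nodes pull it if the contact is informed.
    Returns the number of rounds until all nodes are informed."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        new = set(informed)
        for u in adj:
            v = rng.choice(adj[u])
            if u in informed:
                new.add(v)       # push
            elif v in informed:
                new.add(u)       # pull
        informed = new
        rounds += 1
    return rounds
```

On well-connected (high-conductance) graphs the informed set roughly doubles per round, matching the logarithmic behaviour the bound describes.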
Coccydynia: an overview of the anatomy, etiology, and treatment of coccyx pain.
BACKGROUND Despite its small size, the coccyx has several important functions. Along with being the insertion site for multiple muscles, ligaments, and tendons, it also serves as one leg of the tripod (along with the ischial tuberosities) that provides weight-bearing support to a person in the seated position. The incidence of coccydynia (pain in the region of the coccyx) has not been reported, but factors associated with increased risk of developing coccydynia include obesity and female gender. METHODS This article provides an overview of the anatomy, physiology, and treatment of coccydynia. RESULTS Conservative treatment is successful in 90% of cases, and many cases resolve without medical treatment. Treatments for refractory cases include pelvic floor rehabilitation, manual manipulation and massage, transcutaneous electrical nerve stimulation, psychotherapy, steroid injections, nerve blocks, spinal cord stimulation, and surgical procedures. CONCLUSION A multidisciplinary approach employing physical therapy, ergonomic adaptations, medications, injections, and, possibly, psychotherapy leads to the greatest chance of success in patients with refractory coccyx pain. Although new surgical techniques are emerging, more research is needed before their efficacy can be established.
Passive ultrasonic irrigation of the root canal: a review of the literature.
Ultrasonic irrigation of the root canal can be performed with or without simultaneous ultrasonic instrumentation. When canal shaping is not undertaken the term passive ultrasonic irrigation (PUI) can be used to describe the technique. In this paper the relevant literature on PUI is reviewed from a MEDLINE database search. Passive ultrasonic irrigation can be performed with a small file or smooth wire (size 10-20) oscillating freely in the root canal to induce powerful acoustic microstreaming. PUI can be an important supplement for cleaning the root canal system and, compared with traditional syringe irrigation, it removes more organic tissue, planktonic bacteria and dentine debris from the root canal. PUI is more efficient in cleaning canals than ultrasonic irrigation with simultaneous ultrasonic instrumentation. PUI can be effective in curved canals and a smooth wire can be as effective as a cutting K-file. The taper and the diameter of the root canal were found to be important parameters in determining the efficacies of dentine debris removal. Irrigation with sodium hypochlorite is more effective than with water and ultrasonic irrigation is more effective than sonic irrigation in the removal of dentine debris from the root canal. The role of cavitation during PUI remains inconclusive. No detailed information is available on the influence of the irrigation time, the volume of the irrigant, the penetration depth of the instrument and the shape and material properties of the instrument. The influence of irrigation frequency and intensity on the streaming pattern as well as the complicated interaction of acoustic streaming with the adherent biofilm needs to be clarified to reveal the underlying physical mechanisms of PUI.
Web-based expert systems: benefits and challenges
Convergence of Internet technologies and the field of expert systems has offered new ways of sharing and distributing knowledge. However, there has been a general lack of research in the area of web-based expert systems (ES). This paper addresses the issues associated with the design, development, and use of web-based ES from the standpoint of the benefits and challenges of developing and using them. The original theory and concepts of conventional ES are reviewed, and a knowledge engineering framework for developing them is revisited. The study considered three web-based ES: WITS-Advisor, for e-business strategy development; Fish-Expert, for fish disease diagnosis; and IMIS, to promote intelligent interviews. The benefits and challenges of developing and using web-based ES are discussed by comparing them with traditional standalone systems from development and application perspectives.
Identification, characterization, and grounding of gradable terms in clinical text
Gradable adjectives are inherently vague and are used by clinicians to document medical interpretations (e.g., severe reaction, mild symptoms). We present a comprehensive study of gradable adjectives used in the clinical domain. We automatically identify gradable adjectives and demonstrate that they have a substantial presence in clinical text. Further, we show that there is a specific pattern associated with their usage, where certain medical concepts are more likely to be described using these adjectives than others. Interpretation of statements using such adjectives is a barrier in medical decision making. Therefore, we use a simple probabilistic model to ground their meaning based on their usage in context.
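The "simple probabilistic model" for grounding can be illustrated as conditional relative frequencies estimated from (adjective, concept, grade) triples. The data and the numeric grade scale below are invented for illustration, not taken from the paper's corpus.

```python
from collections import Counter

def grounding_model(observations):
    """Estimate P(grade | adjective, concept) from corpus counts.
    `observations` is a list of (adjective, concept, grade) triples;
    the result maps each observed triple to its conditional probability,
    grounding a vague adjective to a point on a severity scale."""
    counts = Counter(observations)
    pair_totals = Counter((a, c) for a, c, _ in observations)
    return {(a, c, g): n / pair_totals[(a, c)]
            for (a, c, g), n in counts.items()}
```

Conditioning on the concept captures the pattern the abstract notes: the same adjective can ground to different grades for different medical concepts.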
A compact printed wide-slot UWB antenna with band-notched characteristics
In this paper, we present an offset microstrip-fed ultra-wideband antenna with band-notched characteristics. The antenna consists of a rectangular radiating patch and a ground plane with a rectangular slot, which increases the impedance bandwidth up to 123.52% (2.6–11 GHz). A modified U-slot is etched in the radiating patch to create band-notched behavior in the WiMAX (3.3–3.7 GHz) and C-band satellite communication (3.7–4.15 GHz) bands. Furthermore, parametric studies have been conducted using the EM simulation software CADFEKO suite (7.0). A prototype of the antenna was fabricated on a 1.6 mm thick FR-4 substrate with a dielectric constant of 4.4 and a loss tangent of 0.02. The proposed antenna exhibits directional and omnidirectional radiation patterns in the E- and H-planes, with stable efficiency over the frequency band from 2.6 GHz to 11 GHz and VSWR less than 2, except in the 3.3–4.15 GHz notched band. The proposed antenna also shows good time-domain performance.
Causes and consequences of microRNA dysregulation in cancer
Over the past several years it has become clear that alterations in the expression of microRNA (miRNA) genes contribute to the pathogenesis of most — if not all — human malignancies. These alterations can be caused by various mechanisms, including deletions, amplifications or mutations involving miRNA loci, epigenetic silencing or the dysregulation of transcription factors that target specific miRNAs. Because malignant cells show dependence on the dysregulated expression of miRNA genes, which in turn control or are controlled by the dysregulation of multiple protein-coding oncogenes or tumour suppressor genes, these small RNAs provide important opportunities for the development of future miRNA-based therapies.
A review on memristor applications
This article presents a review of the main applications of the fourth fundamental circuit element, the "memristor", which was first proposed by Leon Chua and has recently been realized by a team at HP Laboratories led by Stanley Williams. After a brief analysis of memristor theory with a description of the first memristor, manufactured at HP Laboratories, we present its main applications in circuit design and computer technology, together with future developments.
A Customer Loyalty Model for E-Service Context
While the importance of customer loyalty has been recognized in the marketing literature for at least three decades, the conceptualization and empirical validation of a customer loyalty model for the e-service context has not been addressed. This paper describes a theoretical model for investigating the three main antecedent influences on loyalty (attitudinal commitment and behavioral loyalty) in the e-service context: trust, customer satisfaction, and perceived value. Based on the theoretical model, a comprehensive set of hypotheses was formulated and a methodology for testing them was outlined. These hypotheses were tested empirically to demonstrate the applicability of the theoretical model. The results indicate that trust, customer satisfaction, perceived value, and commitment are separate constructs that combine to determine loyalty, with commitment exerting a stronger influence than trust, customer satisfaction, and perceived value. Customer satisfaction and perceived value were also indirectly related to loyalty through commitment. Finally, the authors discuss the managerial and theoretical implications of these results.
A multi-scale target detection method for optical remote sensing images
Faster R-CNN is a region-proposal-based object detection approach. It integrates the region proposal stage and the classification stage into a single pipeline, achieving both high speed and high detection accuracy. However, when the model is applied to target detection in remote sensing imagery, its performance degrades on multi-scale targets. We analyze the influence of the pooling operation and target size on region proposals, and introduce a modified region proposal scheme to improve the recall rate for multi-scale targets. To speed up the convergence of the region proposal networks, an improved generation strategy for foreground samples is proposed, which suppresses the generation of ineffective foreground samples. Extensive evaluations on a remote sensing image dataset show that the proposed model clearly improves detection accuracy for multi-scale targets, and the model trains rapidly and efficiently.
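The anchor/IoU machinery underlying region-proposal recall can be sketched as follows. The anchor convention and the idea of adding smaller scales for small targets are generic Faster R-CNN practice, not necessarily the paper's exact modification.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def anchors(cx, cy, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Generate anchor boxes at one feature-map location. Small remote
    sensing targets tend to need smaller entries in `scales` so that some
    anchor reaches the IoU threshold for a positive (foreground) sample."""
    out = []
    for s in scales:
        for r in ratios:
            w, h = s * r ** 0.5, s / r ** 0.5
            out.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return out
```

Proposals are labeled foreground or background by thresholding `iou` against ground-truth boxes, which is exactly where ineffective foreground samples can arise.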
A Linearly Relaxed Approximate Linear Program for Markov Decision Processes
Approximate linear programming (ALP) and its variants have been widely applied to Markov decision processes (MDPs) with a large number of states. A serious limitation of ALP is its intractable number of constraints, as a result of which constraint approximations are of interest. In this paper, we define a linearly relaxed approximate linear program (LRALP) that has a tractable number of constraints, obtained as positive linear combinations of the original constraints of the ALP. The main contribution is a novel performance bound for LRALP.
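The constraint relaxation can be sketched directly: each LRALP constraint is a nonnegative combination of rows of the ALP constraint system Ax ≥ b, so any point feasible for the original constraints remains feasible for the relaxed ones. A minimal illustration with toy matrices, not the paper's MDP formulation:

```python
def relax_constraints(A, b, W):
    """Aggregate the constraints Ax >= b into the smaller system
    (W A) x >= W b, where every entry of W is nonnegative (one row of W
    per relaxed constraint). Matrices are lists of rows; values are toy."""
    assert all(w >= 0 for row in W for w in row), "weights must be nonnegative"
    A_rel = [[sum(w * A[i][j] for i, w in enumerate(row))
              for j in range(len(A[0]))] for row in W]
    b_rel = [sum(w * b[i] for i, w in enumerate(row)) for row in W]
    return A_rel, b_rel
```

The relaxation enlarges the feasible region, which is why a separate performance bound is needed to control the quality of the LRALP solution.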
Data science is science's second chance to get causal inference right: A classification of data science tasks
1. Departments of Epidemiology and Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA 2. Harvard-MIT Division of Health Sciences and Technology, Boston, MA 3. Mongan Institute, Massachusetts General Hospital, Boston, MA 4. Department of Health Care Policy, Harvard Medical School, Boston, MA 5. Department of Neurology, Harvard Medical School, Partners MS Center, Brigham and Women’s Hospital, Boston, MA 6. Biostatistics Center, Massachusetts General Hospital, Boston, MA
Attention Alignment Multimodal LSTM for Fine-Grained Common Space Learning
We address the common space learning problem for multimodal data: mapping all related multimodal information into a common space. To establish a fine-grained common space, the aligned relevant local information of different modalities is used to learn a common subspace in which the projected fragmented information is further integrated according to intra-modal semantic relationships. Specifically, we propose a novel multimodal LSTM with an attention alignment mechanism, namely the attention alignment multimodal LSTM (AAM-LSTM), which mainly comprises an attentional alignment recurrent network (AA-R) and a hierarchical multimodal LSTM (HM-LSTM). Unlike traditional methods that operate on the full modal data directly, the proposed model exploits the inter-modal and intra-modal semantic relationships of local information to jointly establish a uniform representation of multimodal data. AA-R automatically captures semantically aligned local information to learn the common subspace without the need for supervised labels, and HM-LSTM leverages the potential relationships among these pieces of local information to learn a fine-grained common space. Experimental results on Flickr30K, Flickr8K, and Flickr30K Entities verify the performance and effectiveness of our model, which compares favorably with state-of-the-art methods. In particular, the phrase localization experiment on AA-R with Flickr30K Entities shows the expected accurate attention alignment. Moreover, the results of image-sentence retrieval tasks show that the proposed AAM-LSTM outperforms benchmark algorithms by a large margin.
Chip-level and board-level CDM ESD tests on IC products
The electrostatic discharge (ESD) transient currents and failure analysis (FA) of chip-level versus board-level charged-device-model (CDM) ESD tests are investigated in this work. The discharging current waveforms of three different printed circuit boards (PCBs) are characterized first. Then, chip-level and board-level CDM ESD tests are performed on an ESD-protected dummy NMOS and a high-speed receiver front-end circuit, respectively. Scanning electron microscope (SEM) failure pictures show that the board-level CDM ESD test causes much more severe failure than the chip-level CDM ESD test.
Classifying Objectionable Websites Based on Image Content
This paper describes IBCOW (Image-based Classification of Objectionable Websites), a system capable of classifying a website as objectionable or benign based on image content. The system uses WIPE (Wavelet Image Pornography Elimination) and statistics to provide robust classification of on-line objectionable World Wide Web sites. Semantically-meaningful feature vector matching is carried out so that comparisons between a given on-line image and images marked as "objectionable" and "benign" in a training set can be performed efficiently and effectively in the WIPE module. If more than a certain number of images sampled from a site are found to be objectionable, then the site is considered objectionable. The statistical analysis for determining the size of the image sample and the threshold number of objectionable images is given in this paper. The system is practical for real-world applications, classifying a Web site in less than 2 minutes per site, including the time to compute the feature vectors for the images downloaded from the site, on a Pentium Pro PC. Besides its exceptional speed, it has demonstrated 97% sensitivity and 97% specificity in classifying a Web site based solely on images. Both the sensitivity and the specificity in real-world applications are expected to be higher because our performance evaluation is relatively conservative and surrounding text can be used to assist the classification process.
Knowledge-Based Distant Regularization in Learning Probabilistic Models
Exploiting the appropriate inductive bias based on the knowledge of data is essential for achieving good performance in statistical machine learning. In practice, however, the domain knowledge of interest often provides information on the relationship of data attributes only distantly, which hinders direct utilization of such domain knowledge in popular regularization methods. In this paper, we propose the knowledge-based distant regularization framework, in which we utilize the distant information encoded in a knowledge graph for regularization of probabilistic model estimation. In particular, we propose to impose prior distributions on model parameters specified by knowledge graph embeddings. As an instance of the proposed framework, we present the factor analysis model with the knowledge-based distant regularization. We show the results of preliminary experiments on the improvement of the generalization capability of such model.
Representation Learning for Grounded Spatial Reasoning
The interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment. We consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. The proposed model learns a representation of the world steered by instruction text. This design allows for precise alignment of local neighborhoods with corresponding verbalizations, while also handling global references in the instructions. We train our model with reinforcement learning using a variant of generalized value iteration. The model outperforms state-of-the-art approaches on several metrics, yielding a 45% reduction in goal localization error.
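The model is trained with a variant of generalized value iteration. As a rough illustration of the underlying planning idea only (not the paper's text-conditioned model), a tabular value iteration sketch on a hypothetical 4-connected grid, with `rewards` and `goal` as stand-ins for the environment, might look like this:

```python
import numpy as np

def value_iteration(rewards, goal, gamma=0.9, iters=100):
    """Tabular value iteration on a 4-connected grid.

    rewards: 2-D array of per-cell step rewards; goal: (row, col) absorbing cell.
    Returns the converged value function V(s).
    """
    h, w = rewards.shape
    V = np.zeros((h, w))
    for _ in range(iters):
        V_new = np.full((h, w), -np.inf)
        for r in range(h):
            for c in range(w):
                if (r, c) == goal:
                    V_new[r, c] = rewards[r, c]  # absorbing goal state
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        V_new[r, c] = max(V_new[r, c],
                                          rewards[r, c] + gamma * V[nr, nc])
        V = V_new
    return V
```

In the paper's setting the value computation is additionally steered by the instruction-conditioned world representation; this sketch shows only the bare dynamic-programming core.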
Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art
In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While many challenges associated with exoskeleton development remain unsolved, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome.
Brain–machine interfaces: past, present and future
Since the original demonstration that electrical activity generated by ensembles of cortical neurons can be employed directly to control a robotic manipulator, research on brain-machine interfaces (BMIs) has experienced an impressive growth. Today BMIs designed for both experimental and clinical studies can translate raw neuronal signals into motor commands that reproduce arm reaching and hand grasping movements in artificial actuators. Clearly, these developments hold promise for the restoration of limb mobility in paralyzed subjects. However, as we review here, before this goal can be reached several bottlenecks have to be passed. These include designing a fully implantable biocompatible recording device, further developing real-time computational algorithms, introducing a method for providing the brain with sensory feedback from the actuators, and designing and building artificial prostheses that can be controlled directly by brain-derived signals. By reaching these milestones, future BMIs will be able to drive and control revolutionary prostheses that feel and act like the human arm.
A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects.
Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.
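Because the logarithm of the conditional intensity is modeled as a linear combination of covariate functions, fitting reduces to a Poisson GLM. A minimal sketch of that reduction, using plain gradient ascent on the discrete-time likelihood (the paper's models, covariates, and fitting machinery are considerably richer than this):

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.1, iters=2000):
    """Fit log lambda_t = X_t . beta by maximizing the discrete-time
    point process (Poisson) log likelihood with plain gradient ascent.

    X: (T, d) design matrix of covariates (spike-history, ensemble, and
       extrinsic terms would each contribute columns); y: (T,) spike counts.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)      # conditional intensity per time bin
        grad = X.T @ (y - lam)      # score function of the Poisson likelihood
        beta += lr * grad / len(y)
    return beta
```

The concavity of the Poisson GLM log likelihood is what makes maximum likelihood estimation, goodness-of-fit assessment, and model comparison computationally straightforward in this framework.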
Power assist method for HAL-3 using EMG-based feedback controller
We have developed the exoskeletal robot suit HAL (Hybrid Assistive Leg), which is integrated with the human body and provides suitable power assistance to the lower limbs of people with gait disorders. This study proposes methods of assist motion and assist torque to realize power assist corresponding to the operator's intention. In the method of assist motion, we adopted Phase Sequence control, which generates a series of assist motions by transiting through some simple basic motions called Phases. We used a feedback controller to adjust the assist torque so as to maintain the myoelectric signals generated while performing power-assisted walking. The experimental results showed effective power assist according to the operator's intention using these control methods.
Adaptive control of a variable-impedance ankle-foot orthosis to assist drop-foot gait
An active ankle-foot orthosis (AAFO) is presented in which the impedance of the orthotic joint is modulated throughout the walking cycle to treat drop-foot gait. During controlled plantar flexion, a biomimetic torsional spring control is applied, where orthotic joint stiffness is actively adjusted to minimize forefoot collisions with the ground. Throughout late stance, joint impedance is minimized so as not to impede powered plantar flexion movements, and during the swing phase, a torsional spring-damper control lifts the foot to provide toe clearance. To assess the clinical effects of variable-impedance control, kinetic and kinematic gait data were collected on two drop-foot participants wearing the AAFO. For each participant, zero, constant, and variable impedance control strategies were evaluated, and the results were compared to the mechanics of three age-, weight-, and height-matched normal subjects. We find that actively adjusting joint impedance reduces the occurrence of slap foot, allows greater powered plantar flexion, and provides for less kinematic difference during swing when compared to normals. These results indicate that a variable-impedance orthosis may have certain clinical benefits for the treatment of drop-foot gait compared to conventional ankle-foot orthoses having zero or constant stiffness joint behaviors.
Comparison of Parametric and Nonparametric Techniques for Non-peak Traffic Forecasting
Accurately predicting non-peak traffic is crucial to all daily traffic forecasting models. In this paper, least squares support vector machines (LS-SVMs) are investigated to solve this practical problem. This is the first time the approach has been applied, and its forecast performance analyzed, in this domain. For comparison purposes, two parametric and two non-parametric techniques are selected because of their effectiveness proved in past research. Having good generalization ability and guaranteeing global minima, LS-SVMs perform better than the others. The substantial improvement in stability and robustness reveals that the approach is practically promising. Keywords—Parametric and Nonparametric Techniques, Non-peak Traffic Forecasting
Ultrasound-Assisted Evaluation of the Airway in Clinical Anesthesia Practice: Past, Present and Future
Introduction: The incidence of difficulties encountered in perioperative airway management has been reported to range from 1% to 4%. In patients with head and neck cancers, the incidence can be dramatically higher. Because of its high imaging quality, non-invasiveness, and relatively low cost, ultrasonography has been utilized as a valuable adjunct to the clinical assessment of the airway. A review of the literature was conducted with the objective of summarizing the available evidence concerning the use of ultrasound (US) for assessment of the airway, with special emphasis on head and neck cancers. Methods and Materials: A systematic search of the literature in the MEDLINE database was performed. A total of 42 manuscripts from 329 searched articles were included in this review. Results: Ultrasonography was found to give high-resolution images of the anatomic structures of the upper airway comparable to computed tomography and magnetic resonance imaging. Several ultrasonographic parameters (soft tissue thickness at the level of the hyoid bone, epiglottis and vocal cords, visibility of the hyoid bone in sublingual ultrasound, hyomental distance in the head-extended position, and hyomental distance ratio) were found to be independent predictors of difficult laryngoscopy in obese and non-obese patients. In conjunction with elastosonography, it also provided valuable information regarding tumors, infiltration, and edema as well as fibrosis of the head and neck. Conclusion: Ultrasound-assisted evaluation of the difficult airway offers many important advantages. The ready availability of US machines in anesthesiology departments, the familiarity of anesthesia providers with US-guided procedures, and the portability of US machines allow real-time, point-of-care assessment. It will undoubtedly become more popular and will greatly contribute to improving perioperative patient safety.
Time series forecasting using Artificial Neural Networks vs. evolving models
Time series forecasting plays an important role in many fields such as economics, finance, business intelligence, the natural sciences, and the social sciences. This forecasting task can be achieved by using different techniques such as statistical methods or Artificial Neural Networks (ANN). In this paper, we present two different approaches to time series forecasting: the evolving Takagi-Sugeno (eTS) fuzzy model and ANN. These two methods will be compared, taking into account the different characteristics of each approach.
Practice Makes Perfect? When Does Massed Learning Improve Product Usage Proficiency?
Previous research has shown that spacing of information (over time) leads to better learning of product information. We develop a theoretical framework to describe how massed or spaced learning schedules interact with different learning styles to influence product usage proficiency. The core finding is that with experiential learning, proficiency in a product usage task is better under massed conditions, whereas with verbal learning, spacing works better. This effect is demonstrated for usage proficiency assessed via speed as well as quality of use. Further, massed learning also results in better usage proficiency on transfer tasks, for both experiential and verbal learning. We also find that massed learning in experiential learning conditions leads not only to better usage proficiency but also to positive perceptions of the product. Overall, the pattern of results is consistent with a conceptual mapping account, with massed experiences leading to a superior mental model of usage and thus to better usage proficiency.
Multi-Modal Fashion Product Retrieval
Finding a product in the fashion world can be a daunting task. Every day, e-commerce sites are updated with thousands of images and their associated metadata (textual information), deepening the problem. In this paper, we leverage both the images and textual metadata and propose a joint multi-modal embedding that maps both the text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to effectively perform retrieval in this latent space. We compare against existing approaches and show significant improvements in retrieval tasks on a large-scale e-commerce dataset.
A 3.4–6.2 GHz continuously tunable electrostatic MEMS resonator with quality factor of 460–530
In this paper we present the first MEMS electrostatically-tunable loaded-cavity resonator that simultaneously achieves a very wide continuous tuning range of 3.4–6.2 GHz (1.8:1) and a quality factor of 460–530 in a volume of 18×30×4 mm³ including the actuation scheme and biasing lines. The operating principle relies on tuning the capacitance of the loaded cavity by controlling the gap between an electrostatically-actuated membrane and the cavity post underneath it. Particular attention is paid to the fabrication of the tuning mechanism in order to avoid a) quality factor degradation due to the biasing lines and b) hysteresis and creep issues. A single-crystal silicon membrane coated with a thin gold layer is the key to the success of the design.
Recommendations for the Assessment of Blend and Content Uniformity: Modifications to Withdrawn FDA Draft Stratified Sampling Guidance
The following paper describes the International Society for Pharmaceutical Engineering (ISPE)-sponsored Blend Uniformity and Content Uniformity Group’s proposed modifications to the withdrawn FDA draft guidance document for industry “Powder Blends and Finished Dosage Units—Stratified In-Process Dosage Unit Sampling and Assessment.” The modifications targeted FDA’s primary concerns that led to the withdrawal of the draft guidance document, which were insufficient blend uniformity testing and that a one-time passing of the criteria stated in USP General Chapter <905> Uniformity of Dosage Units testing lacks confidence to ensure the content uniformity of a batch. The Group’s approach discusses when triplicate blend samples should be analyzed and the importance of performing variance component analysis on the data to identify root causes of non-uniformity. The Group recommends the use of statistically based approaches, acceptance criteria, and sampling plans for assessing content uniformity for batch release that provide increased confidence that future samples drawn from the batch will comply with USP <905>. Alternative statistical approaches, sampling plans, and acceptance criteria, including modern analytical method (e.g., process analytical technology (PAT)) sampling plans, may be substituted for those mentioned in this paper, with justification. This approach also links blend and content uniformity testing to the three stages of the life cycle process validation approach. A framework for the assessment of blend and content uniformity that provides greater assurance of passing USP <905> is presented.
Symmetric Nonnegative Matrix Factorization: Algorithms and Applications to Probabilistic Clustering
Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: α-SNMF and β -SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
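For intuition, symmetric NMF seeks a nonnegative H with A ≈ HHᵀ. A damped multiplicative update is one commonly used convergent variant; the paper's α-SNMF and β-SNMF algorithms refine the damping and parallelize the computation, so this is only a minimal serial sketch:

```python
import numpy as np

def snmf(A, k, iters=500, eps=1e-9, seed=0):
    """Symmetric NMF: find H >= 0 with A ≈ H Hᵀ by a damped
    multiplicative update (each step mixes the old H with the
    plain multiplicative ratio, which keeps the iteration stable)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        H = H * (0.5 + 0.5 * AH / (HHtH + eps))  # elementwise, stays nonneg
    return H
```

Because every operation is a dense matrix product, the update maps directly onto level 3 BLAS routines, which is the property the paper's parallel algorithms exploit.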
Child maltreatment and the developing brain: A review of neuroscience perspectives
Keywords: child maltreatment; neuroscience; brain plasticity; stress system dysregulation; brain development. In this article we review neuroscience perspectives on child maltreatment to facilitate understanding of the rapid integration of neuroscience knowledge into the academic, clinical, and lay literature on this topic. Seminal articles from developmental psychology and psychiatry, a discussion of brain plasticity, and a summary of recent reviews of research on stress system dysregulation are presented, with some attention to methodological issues. A common theme is that maltreatment during childhood is an experience that may affect the course of brain development, potentially leading to differences in brain anatomy and functioning with lifelong consequences for mental health. The design of prevention and intervention strategies for child maltreatment may benefit from considering neuroscience perspectives along with those of other disciplines.
Null hypothesis testing of correlational predictions from weak substantive theories in soft psychology is subject to the influence of ten obfuscating factors whose effects are usually (1) sizeable, (2) opposed, (3) variable, and (4) unknown. The net epistemic effect of these ten obfuscating influences is that the usual research literature review is well-nigh uninterpretable. Major changes in graduate education, conduct of research, and editorial policy are proposed.
An Improved Variable On-Time Control Strategy for a CRM Flyback PFC Converter
The traditional critical conduction mode (CRM) flyback PFC converter with a constant on-time control strategy usually suffers from a low power factor (PF) and high total harmonic distortion (THD) due to its non-sinusoidal input current waveform. In order to solve this problem, an improved variable on-time control strategy for the CRM flyback PFC converter is proposed in this letter. A simple analog divider circuit consisting of an operational amplifier, two signal switches, and an RC filter is proposed to modulate the turn-on time of the primary switch, and the PF and THD of the CRM flyback PFC converter can be evidently improved. The theoretical analysis is presented and the experimental results verify the advantages of the proposed control scheme.
Range-Based Localization in Wireless Networks Using Density-Based Outlier Detection
Node localization is commonly employed in wireless networks. For example, it is used to improve routing and enhance security. Localization algorithms can be classified as range-free or range-based. Range-based algorithms use location metrics such as ToA, TDoA, RSS, and AoA to estimate the distance between two nodes. Proximity sensing between nodes is typically the basis for range-free algorithms. A tradeoff exists since range-based algorithms are more accurate but also more complex. However, in applications such as target tracking, localization accuracy is very important. In this paper, we propose a new range-based algorithm which is based on the density-based outlier detection algorithm (DBOD) from data mining. It requires selection of the K-nearest neighbours (KNN). DBOD assigns density values to each point used in the location estimation. The mean of these densities is calculated and those points having a density larger than the mean are kept as candidate points. Different performance measures are used to compare our approach with the linear least squares (LLS) and weighted linear least squares based on singular value decomposition (WLS-SVD) algorithms. It is shown that the proposed algorithm performs better than these algorithms even when the anchor geometry about an unlocalized node is poor.
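As a toy illustration of the density-filtering step described above (the density definition used here, inverse mean distance to the k nearest neighbours, is one common choice; the paper's exact DBOD formulation may differ):

```python
import numpy as np

def density_filter(points, k=3):
    """Keep candidate position estimates whose local density exceeds the
    mean density, mirroring the DBOD candidate-selection step."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)                       # column 0 is the self-distance (0)
    knn_mean = d[:, 1:k + 1].mean(axis=1)
    density = 1.0 / (knn_mean + 1e-12)   # dense points have small kNN distances
    return pts[density > density.mean()]

def localize(points, k=3):
    """Estimate the node position as the centroid of the dense candidates."""
    return density_filter(points, k).mean(axis=0)
```

The effect is that isolated (outlier) position estimates, e.g. those produced by poor anchor geometry, are discarded before the final location is computed.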
Zika virus impairs growth in human neurospheres and brain organoids
Since the emergence of Zika virus (ZIKV), reports of microcephaly have increased considerably in Brazil; however, causality between the viral epidemic and malformations in fetal brains needs further confirmation. We examined the effects of ZIKV infection in human neural stem cells growing as neurospheres and brain organoids. Using immunocytochemistry and electron microscopy, we showed that ZIKV targets human brain cells, reducing their viability and growth as neurospheres and brain organoids. These results suggest that ZIKV abrogates neurogenesis during human brain development.
Research on non-invasive glucose concentration measurement by NIR transmission
Diabetes is a widespread disease known as one of the life-threatening diseases in the world. It occurs not only among adults and the elderly, but also among infants and children. Blood glucose measurements are indispensable for diabetes patients to determine their insulin dose intake. Invasive blood glucose measurement methods, which are high in accuracy, are common, but they are uncomfortable and carry a higher risk of infection, especially for the elderly, pregnant women, and children. As an alternative, non-invasive blood glucose measurement techniques have been introduced to provide a reliable and pain-free method for monitoring glucose levels without puncturing the skin. In this paper, a non-invasive glucose monitoring setup was developed using near infrared light by detecting the transmitted laser power. The detection system included a semiconductor laser diode as the light source, an S302C light power probe to detect the incident light, and a PM100USB interface to transmit data to the computer. A specific infrared wavelength (1310 nm) was used for the incident beam. A proportional relationship between the laser power and the glucose concentration was demonstrated by comparing the resulting laser power for several aqueous glucose solution samples with the estimated glucose concentrations under the same circumstances.
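The reported proportionality between transmitted power and concentration is consistent with the Beer-Lambert law; as a simplified single-wavelength model that ignores scattering in tissue:

```latex
P = P_0 \, e^{-\varepsilon c l}
\quad\Longrightarrow\quad
\ln\frac{P_0}{P} = \varepsilon c l ,
```

so for a fixed path length $l$ and molar absorptivity $\varepsilon$ at 1310 nm, the measured absorbance is linear in the glucose concentration $c$.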
LSD-induced entropic brain activity predicts subsequent personality change.
Personality is known to be relatively stable throughout adulthood. Nevertheless, it has been shown that major life events with high personal significance, including experiences engendered by psychedelic drugs, can have an enduring impact on some core facets of personality. In the present, balanced-order, placebo-controlled study, we investigated biological predictors of post-lysergic acid diethylamide (LSD) changes in personality. Nineteen healthy adults underwent resting state functional MRI scans under LSD (75µg, I.V.) and placebo (saline I.V.). The Revised NEO Personality Inventory (NEO-PI-R) was completed at screening and 2 weeks after LSD/placebo. Scanning sessions consisted of three 7.5-min eyes-closed resting-state scans, one of which involved music listening. A standardized preprocessing pipeline was used to extract measures of sample entropy, which characterizes the predictability of an fMRI time-series. Mixed-effects models were used to evaluate drug-induced shifts in brain entropy and their relationship with the observed increases in the personality trait openness at the 2-week follow-up. Overall, LSD had a pronounced global effect on brain entropy, increasing it in both sensory and hierarchically higher networks across multiple time scales. These shifts predicted enduring increases in trait openness. Moreover, the predictive power of the entropy increases was greatest for the music-listening scans and when "ego-dissolution" was reported during the acute experience. These results shed new light on how LSD-induced shifts in brain dynamics and concomitant subjective experience can be predictive of lasting changes in personality. Hum Brain Mapp 37:3203-3213, 2016. © 2016 Wiley Periodicals, Inc.
Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces
We consider the computational problem of finding nearest neighbors in general metric spaces. Of particular interest are spaces that may not be conveniently embedded or approximated in Euclidean space, or where the dimensionality of a Euclidean representation is very high. Also relevant are high-dimensional Euclidean settings in which the distribution of data is in some sense of lower dimension and embedded in the space. The vp-tree (vantage point tree) is introduced in several forms, together with associated algorithms, as an improved method for these difficult search problems. Tree construction executes in O(n log n) time, and search is, under certain circumstances and in the limit, O(log n) expected time. The theoretical basis for this approach is developed and the results of several experiments are reported. In Euclidean cases, kd-tree performance is compared.
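A minimal sketch of the vp-tree idea follows. The vantage point is simply the first element here; the paper selects it more carefully and introduces several refined tree forms, so this shows only the core construction and branch-and-bound search:

```python
def build_vp_tree(points, dist):
    """Build a vantage-point tree: pick a vantage point, split the rest
    at the median distance mu into an inside ball and an outside shell."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return {"vp": vp, "mu": 0.0, "in": None, "out": None}
    ds = sorted(dist(vp, p) for p in rest)
    mu = ds[len(ds) // 2]
    inside = [p for p in rest if dist(vp, p) < mu]
    outside = [p for p in rest if dist(vp, p) >= mu]
    return {"vp": vp, "mu": mu, "in": build_vp_tree(inside, dist),
            "out": build_vp_tree(outside, dist)}

def nn_search(node, q, dist, best=None):
    """Branch-and-bound nearest neighbour: the triangle inequality lets a
    whole subtree be pruned when |d(q, vp) - mu| exceeds the best radius."""
    if node is None:
        return best
    d = dist(q, node["vp"])
    if best is None or d < best[0]:
        best = (d, node["vp"])
    near, far = (node["in"], node["out"]) if d < node["mu"] else (node["out"], node["in"])
    best = nn_search(near, q, dist, best)
    if abs(d - node["mu"]) < best[0]:   # far side may still hold a closer point
        best = nn_search(far, q, dist, best)
    return best
```

Only the metric `dist` is required, which is what makes the structure applicable to general metric spaces rather than only coordinate (Euclidean) data.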
Optimization by simulated annealing.
There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
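As a schematic illustration of the annealing analogy (a generic Metropolis acceptance rule with a hypothetical geometric cooling schedule, not tied to any particular system discussed in the paper):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize f by the Metropolis rule: always accept downhill moves,
    accept uphill moves with probability exp(-delta/T), and slowly cool T."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random local perturbation
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                  # accept (possibly uphill) move
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest
```

The occasional acceptance of uphill moves at finite temperature is what lets the search escape local minima, in direct analogy with thermal fluctuations during the annealing of a solid.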
Deriving optimal weights in deep neural networks
Training deep neural networks generally requires massive amounts of data and is very computation intensive. We show here that it may be possible to circumvent the expensive gradient descent procedure and derive the parameters of a neural network directly from properties of the training data. We show that, near convergence, the gradient descent equations for layers close to the input can be linearized and become stochastic equations with noise related to the covariance of data for each class. We derive the distribution of solutions to these equations and discover that it is related to a “supervised principal component analysis.” We implement these results on the image datasets MNIST, CIFAR10 and CIFAR100 and find that, indeed, pretrained layers derived using our findings perform comparably or superior to neural networks of the same size and architecture trained with gradient descent. Moreover, our pretrained layers can often be calculated using a fraction of the training data, owing to the quick convergence of the covariance matrix. Thus, our findings indicate that we can cut the training time both by requiring only a fraction of the data used for gradient descent, and by eliminating layers in the costly backpropagation step of the training. Additionally, these findings partially elucidate the inner workings of deep neural networks and allow us to mathematically calculate optimal solutions for some stages of classification problems, thus significantly boosting our ability to solve such problems efficiently.
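One simple reading of a "supervised PCA" for weight derivation is to take the leading eigenvectors of the covariance of the class means; the paper's actual derivation is more detailed and ties the noise term to the per-class data covariance, so the following is only a hedged sketch of the flavor of the idea:

```python
import numpy as np

def supervised_pca_weights(X, y, n_units):
    """Derive candidate first-layer weights without gradient descent:
    take the top eigenvectors of the covariance of the class means."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    means -= means.mean(axis=0)                 # center the class means
    cov = means.T @ means / len(classes)        # between-class covariance
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    return vecs[:, ::-1][:, :n_units].T         # (n_units, d), leading first
```

A layer initialized this way projects inputs onto the directions that best separate the class means (`features = X @ W.T`), which is one way such pretrained layers could stand in for gradient-descent-trained ones.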