An advanced channel framework for improved Underwater Acoustic Network simulations
Underwater Acoustic Networks (UANs) are an emerging technology used to enable new aquatic applications in our water world. This is accomplished by linking underwater sensors, vehicles and devices together using acoustic communication. Network protocol development for UANs often relies on simulation because deploying real systems in the ocean is a resource-heavy operation. However, acoustic communication performance is dynamic and depends on the environment, so simulations may not be entirely accurate. In this paper we introduce an advanced channel model that accounts for the impact of the environment and of real system characteristics on acoustic communication performance, for use in an established open-source simulation environment, Aqua-Sim. We provide detailed simulation results and compare our channel model both with the existing simulation environment and against the results of a 2011 field experiment in the Chesapeake Bay.
Virtual machines vs. containers in cloud gaming systems
In cloud gaming the game is rendered on a distant cloud server and the resulting video stream is sent back to the user, who controls the game via a thin client. The high resource usage of cloud gaming servers is a challenge: expensive hardware, including GPUs, has to be shared efficiently among multiple simultaneous users. The cloud servers use virtualization techniques to isolate users and share resources among dedicated servers. Traditional virtualization techniques can, however, inflict notable performance overhead, limiting the number of users a single server can support. Operating-system-level virtualization instances, known as containers, are an emerging trend in cloud computing. Containers do not need to virtualize the entire operating system, yet still provide most of the benefits of virtualization. In this paper, we evaluate the container-based alternative to traditional virtualization in cloud gaming systems through extensive experiments. We also discuss the differences in system implementation required by the container approach and identify the existing limitations.
Economics of environmental quality
The authors use the term environmental quality to refer to the conditions associated with those resources that have not been assigned to the market for allocation. Though the focus is on air and water quality, it could just as well include conditions of crowding, visual stimuli, and odors within the same framework. This analysis addresses itself to the economic aspects of environmental quality as the term is currently defined. First, in an introduction to the nature of the problem, the subject is put in a perspective of time and place. Subsequent chapters provide an economic analysis of the problem, present discussions of environmental demand, and analyze such topics as the conflict between economic development and environmental quality, legal solutions to the problem, and the uses and effects of taxes and subsidies as means for ameliorating conflict over environmental quality. Most of the discussion here revolves around the question of allocational efficiency: the old problem of scarce resources and who gets them. In this sense, the discussion is market-oriented. The study takes the view that since the quality of the environment is recognized as a scarce resource, it should be treated accordingly. This approach should provide additional insight into the task of formulating policies to deal with environmental resources.
Depth Recovery Using an Adaptive Color-Guided Auto-Regressive Model
This paper proposes an adaptive color-guided auto-regressive (AR) model for high quality depth recovery from low quality measurements captured by depth cameras. We formulate the depth recovery task into a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanied high quality color image. Experimental results show that our method outperforms existing state-of-the-art schemes, and is versatile for both mainstream depth sensors: ToF camera and Kinect.
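A schematic form of such an objective (the symbols D, D0, a, N(x), Omega and lambda are introduced here only for illustration and are not necessarily the paper's exact notation) is:

    \min_{D} \sum_{x} \Big( D_x - \sum_{y \in \mathcal{N}(x)} a_{x,y} D_y \Big)^{2} + \lambda \sum_{x \in \Omega} \big( D_x - D^{0}_{x} \big)^{2}

where D is the recovered depth map, D^0 the low-quality depth measurement available on the observed set Omega, N(x) a local window around pixel x, and a_{x,y} the AR coefficients built from the local structure of the initial depth map and the nonlocal similarity in the guiding color image.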
Safe for Generations to Come: Considerations of Safety for Millimeter Waves in Wireless Communications
With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]–[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks. In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].
Why do people use information technology? A critical review of the technology acceptance model
Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in the relation between system characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model, TAM2, which includes subjective norms and was tested with longitudinal research designs. Overall, the two models explain about 40% of system use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but has to be integrated into a broader one which would include variables related to both human and social change processes, and to the adoption of the innovation model.
The Delphi Method for Graduate Research
Executive Summary The Delphi method is an attractive method for graduate students completing masters and PhD level research. It is a flexible research technique that has been successfully used in our program at the University of Calgary to explore new concepts within and outside of the information systems body of knowledge. The Delphi method is an iterative process used to collect and distill the anonymous judgments of experts using a series of data collection and analysis techniques interspersed with feedback. The Delphi method is well suited as a research instrument when there is incomplete knowledge about a problem or phenomenon; however, it is not a method for all types of IS research questions. The Delphi method works especially well when the goal is to improve our understanding of problems, opportunities, solutions, or to develop forecasts. In this paper, we provide a brief background of the Classical Delphi followed by a presentation of how it has evolved into a flexible research method appropriate for a wide variety of IS research projects, such as determining the criteria for IS prototyping decisions, ranking technology management issues in new product development projects, and developing a descriptive framework of knowledge manipulation activities. To illustrate the method's flexibility, we summarize distinctive non-IS, IS, and graduate studies Delphi research projects. We end by discussing what we have learned from using the Delphi method in our own research regarding this method's design factors and how it may be applied by those conducting graduate studies research: i) methodological choices such as a qualitative, quantitative or mixed-methods approach; ii) degree of focus of the initial question, whether broad or narrowly focused; iii) expertise criteria such as technical knowledge and experience, capacity and willingness to participate, sufficient time, and communication skills; iv) number of participants in the heterogeneous or homogeneous sample; v) number of Delphi rounds, varying from one to six; vi) mode of interaction such as through email, online surveys or groupware; vii) methodological rigor and a research audit trail; viii) results analysis; ix) further verification through triangulation or with another sample; and x) publishing of the results. We include an extensive bibliography and an appendix with a wide-ranging list of dissertations that have used the Delphi method (including brief research description, number of rounds and sample size). The Delphi method is a flexible, effective and efficient research method that can be successfully used by IS graduate students …
Web Crawlers: Taxonomy, Issues & Challenges
With the increase in the size of the Web, search engines rely on Web crawlers to build and maintain an index of billions of pages for efficient searching. The creation and maintenance of Web indices is done by Web crawlers, which recursively traverse and download Web pages on behalf of search engines. The exponential growth of the Web poses many challenges for crawlers. This paper attempts to classify the existing crawlers according to certain parameters and also identifies the various challenges faced by Web crawlers. Keywords— WWW, URL, Mobile Crawler, Mobile Agents, Web Crawler.
GTS: A Fast and Scalable Graph Processing Method based on Streaming Topology to GPUs
A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes grow rapidly. Most distributed graph processing methods require many machines equipped with a total of thousands of CPU cores and a few terabytes of main memory to handle billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in the main memory of a single machine. Here, we propose a fast and scalable graph processing method, GTS, that handles even RMAT32 (64 billion edges) very efficiently using only a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via the PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.
Effects of gait speed on the body's center of mass motion relative to the center of pressure during over-ground walking.
Preferred walking speed (PWS) reflects the integrated performance of the relevant physiological sub-systems, including energy expenditure. It remains unclear whether the PWS during over-ground walking is chosen to optimize one's balance control, because studies on the effects of speed on the body's balance control have been limited. The current study aimed to bridge this gap by quantifying the effects of walking speed on the body's center of mass (COM) motion relative to the center of pressure (COP) in terms of the changes and directness of the COM-COP inclination angle (IA) and its rate of change (RCIA). Data on the COM and COP were measured from fifteen young healthy males at three walking speeds including PWS using a motion capture system. The values of the IAs and RCIAs at key gait events and their average values over gait phases were compared between speeds using one-way repeated measures ANOVA. With increasing walking speed, most of the IA- and RCIA-related variables increased significantly (p<0.05), except for those of the frontal IA. Significant quadratic trends (p<0.05) with highest directness at PWS were found in IA during single-limb support, and in RCIA during single-limb and double-limb support. The results suggest that walking at PWS corresponds to COM-COP control that maximizes the directness of the RCIAs over the gait cycle, a compromise between the effects of walking speed and the speed of weight transfer. The data on IA and RCIA at PWS may be used in future assessments of balance control ability in people with different levels of balance impairment.
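As a rough illustration of the quantities involved (generic notation, not necessarily the study's exact definitions), the anteroposterior and mediolateral COM-COP inclination angles and their rate of change can be written as:

    \theta_{AP} = \tan^{-1}\!\left( \frac{x_{COM} - x_{COP}}{z_{COM} - z_{COP}} \right), \qquad
    \theta_{ML} = \tan^{-1}\!\left( \frac{y_{COM} - y_{COP}}{z_{COM} - z_{COP}} \right), \qquad
    RCIA = \frac{d\theta}{dt}

where x, y and z denote the anteroposterior, mediolateral and vertical coordinates of the COM and COP.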
Clinical, MRI and arthroscopic correlation in internal derangement of knee.
BACKGROUND The traumatic or degenerative internal derangement of the knee requires certain investigations for the establishment of diagnosis, in addition to clinical history and a thorough physical examination. The use of arthrography and arthroscopy improves the accuracy of the diagnosis. MRI scanning of the knee joint has often been regarded as the noninvasive alternative to diagnostic arthroscopy. OBJECTIVE The purpose of the study was to correlate clinical and low-field MRI findings with arthroscopy in internal derangement of the knee. METHODS Forty-one patients with suspected internal derangement of the knee were subjected to MR examination followed by arthroscopy. Clinical criteria used were history, mode of injury, and the McMurray, Apley grinding and Thessaly tests for meniscal injury. The drawer test was considered essential for clinical diagnosis of cruciate ligament injury. MRI of the knee was performed in a low-field open magnet (0.35 T, Magnetom C, Siemens). Arthroscopy was done within two months of the MR examination and was considered the gold standard for internal derangement of the knee. RESULTS The sensitivity, specificity and diagnostic accuracy of clinical examination were 96.1%, 33.3% and 73.1% respectively for medial meniscal tear, and 38.4%, 96.4% and 78.1% respectively for lateral meniscal tear. The sensitivity, specificity and diagnostic accuracy of MRI were 92.3%, 100% and 95.1% respectively for medial meniscal tear, and 84.6%, 96.4% and 92.6% respectively for lateral meniscal tear. CONCLUSION Clinical examination showed higher sensitivity for medial meniscal tear compared to MRI, however with low specificity and diagnostic accuracy. Low-field MRI showed high sensitivity, specificity and diagnostic accuracy for meniscal and cruciate ligament injury, in addition to associated derangement such as articular cartilage damage and synovial thickening.
The management of adolescents with neurogenic urinary tract and bowel dysfunction.
Most children with neurogenic bladder dysfunction arrive into adolescence with reasonably managed lower urinary tract function, only to experience deterioration of bladder and kidney function after puberty. The aim of this article is to identify issues that contribute to adverse changes in bladder and renal function during adolescence and to highlight strategies to preserve urinary tract integrity, social continence, patient autonomy, and independence. Surveillance of bladder function requires patient attendance at review appointments and compliance with treatment plans. While encouraging independence and treatment compliance, the clinician also needs to consider the altered mental concentrating ability and fine motor skills of these patients. A keen eye for imminent loss of patient compliance with the treatment protocol should be the mainstay of each encounter during puberty and adolescence. Annual surveillance of adolescent neurogenic bladder patients facilitates early identification of risk factors for urinary tract deterioration. Investigations include renal and bladder ultrasonography and urodynamic study when indicated, substantiated by videocystometry when anatomical status dictates. Serum creatinine should be measured and renal scintigraphy performed when upper urinary tract dilation, renal scarring, or atrophy are suspected. Optimal management of adolescents with neurologic disease of the urinary tract includes strategies to reduce elevated detrusor pressure, maintain bladder compliance, and maximize dryness. Antimuscarinic medications, botulinum toxin A, and surgical procedures are enhanced by bowel management regimens and regular nurse or urotherapist patient contact. Caring for the patient as a whole requires discussion of sexuality, fertility status, and behaviors that increase the risk of progressive urinary tract damage.
Composable Planning with Attributes
The tasks that an agent will need to solve often are not known during training. However, if the agent knows which properties of the environment are important then, after learning how its actions affect those properties, it may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a method that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in 3D block stacking, gridworld games, and StarCraft that our model is able to generalize to longer, more complex tasks at test time by composing simpler learned policies.
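As a rough illustration of the high-level planning step (the names graph, policy and env are hypothetical, not the authors' code), a search over the learned attribute-transition graph followed by low-level execution could look like this:

    from collections import deque

    def plan_attributes(graph, start, goal):
        # Breadth-first search over the learned graph of attribute-set transitions.
        # `graph` maps an attribute set (hashable) to the attribute sets reachable in one transition.
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            attrs, path = frontier.popleft()
            if attrs == goal:
                return path
            for nxt in graph.get(attrs, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # goal attributes not reachable under the learned transitions

    def execute(plan, policy, env, state):
        # Follow the high-level plan by invoking the low-level goal-conditioned policy at each step.
        for target in plan[1:]:
            state = policy(env, state, target)  # roll out actions until `target` attributes hold
        return state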
Estimation of the Cramer-Rao Bound for Radio Direction-Finding on the Azimuth and Elevation of Cylindrical Antenna Arrays
In this paper the problem of direction-of-arrival (DOA) estimation for conformal antenna arrays consisting of directive emitters is studied for the azimuth and elevation cases. An expression for the Cramer-Rao lower bound on the variance of the DOA estimates, depending on the antenna directivity and the array geometry, is presented. The Cramer-Rao bounds are evaluated across several scenarios, including different signal source locations in terms of azimuth and elevation angles and different signal-to-noise ratios, using different antenna array configurations. The influence of the cylindrical conformal antenna array on the direction-of-arrival estimation accuracy is investigated.
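For reference, the bound has the standard form (generic notation, not the paper's):

    \operatorname{var}(\hat{\theta}_i) \ \ge\ \big[\mathbf{F}^{-1}(\boldsymbol{\theta})\big]_{ii}, \qquad
    [\mathbf{F}(\boldsymbol{\theta})]_{ij} = \mathbb{E}\!\left[ \frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \theta_i}\,
    \frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \theta_j} \right]

where theta collects the azimuth and elevation angles, F is the Fisher information matrix, and the element directivity patterns and the cylindrical array geometry enter the bound through the likelihood p(x; theta) of the array snapshots.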
Iterative Machine Learning for Output Tracking
This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.
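A minimal sketch of the kind of frequency-domain learning update involved (generic notation; the data-estimated model Ĝ and the gain ρ are assumptions of this sketch rather than the paper's exact scheme):

    U_{k+1}(j\omega) = U_k(j\omega) + \rho(\omega)\,\hat{G}^{-1}(j\omega)\,E_k(j\omega), \qquad
    E_k(j\omega) = Y_d(j\omega) - Y_k(j\omega)

Since the error then evolves as E_{k+1} = (1 - \rho\,\hat{G}^{-1}G)\,E_k, the iterations converge when |1 - \rho(\omega)\hat{G}^{-1}(j\omega)G(j\omega)| < 1 at every frequency, which is why an estimate of the model uncertainty is needed to bound the admissible iteration gain ρ(ω).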
A fully pipelined hardware architecture for convolutional neural network with low memory usage and DRAM bandwidth
As a typical deep learning model, the Convolutional Neural Network (CNN) has shown excellent ability in solving complex classification problems. To apply CNN models in mobile and wearable devices, a fully pipelined hardware architecture adopting a Row Processing Tree (RPT) structure with small memory resource consumption between convolutional layers is proposed. A modified Row Stationary (RS) dataflow is implemented to evaluate the RPT architecture. Under the same work frequency requirement for these two architectures, the experimental results show that the RPT architecture reduces on-chip memory by 91% and DRAM bandwidth by 75% compared with the modified RS dataflow, but the throughput of the modified RS dataflow is 3 times higher than that of our proposed RPT architecture. The RPT architecture can achieve 121 fps at 100 MHz while processing a CNN with 4 convolutional layers.
Cutting your nerve changes your brain.
Following upper limb peripheral nerve transection and surgical repair, some patients regain good sensorimotor function while others do not. Understanding peripheral and central mechanisms that contribute to recovery may facilitate the development of new therapeutic interventions. Plasticity following peripheral nerve transection has been demonstrated throughout the neuraxis in animal models of nerve injury. However, the brain changes that occur following peripheral nerve transection and surgical repair in humans have not been examined. Furthermore, the extent to which peripheral nerve regeneration influences functional and structural brain changes has not been characterized. Therefore, we asked whether functional changes are accompanied by grey and/or white matter structural changes and whether these changes relate to sensory recovery. To address these key issues we (i) assessed peripheral nerve regeneration; (ii) measured functional magnetic resonance imaging brain activation (blood oxygen level dependent signal; BOLD) in response to a vibrotactile stimulus; (iii) examined grey and white matter structural brain plasticity; and (iv) correlated sensory recovery measures with grey matter changes in peripheral nerve transection and surgical repair patients. Compared to each patient's healthy contralesional nerve, transected nerves had impaired nerve conduction 1.5 years after transection and repair, conducting with decreased amplitude and increased latency. Compared to healthy controls, peripheral nerve transection and surgical repair patients had altered blood oxygen level dependent signal activity in the contralesional primary and secondary somatosensory cortices, and in a set of brain areas known as the 'task positive network'. In addition, grey matter reductions were identified in several brain areas, including the contralesional primary and secondary somatosensory cortices, in the same areas where blood oxygen level dependent signal reductions were identified. Furthermore, grey matter thinning in the post-central gyrus was negatively correlated with measures of sensory recovery (mechanical and vibration detection), demonstrating a clear link between function and structure. Finally, we identified reduced white matter fractional anisotropy in the right insula in a region that also demonstrated reduced grey matter. These results provide insight into brain plasticity and structure-function-behaviour relationships following nerve injury and have important therapeutic implications.
A premenarcheal girl with urogenital bleeding.
An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red 'doughnut'-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent.[2] It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury.[3] Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist.[5] Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.
Cerebrovascular and blood-brain barrier impairments in Huntington's disease: Potential implications for its pathophysiology.
OBJECTIVE Although the underlying cause of Huntington's disease (HD) is well established, the actual pathophysiological processes involved remain to be fully elucidated. In other proteinopathies such as Alzheimer's and Parkinson's diseases, there is evidence for impairments of the cerebral vasculature as well as the blood-brain barrier (BBB), which have been suggested to contribute to their pathophysiology. We investigated whether similar changes are also present in HD. METHODS We used 3- and 7-Tesla magnetic resonance imaging as well as postmortem tissue analyses to assess blood vessel impairments in HD patients. Our findings were further investigated in the R6/2 mouse model using in situ cerebral perfusion, histological analysis, Western blotting, as well as transmission and scanning electron microscopy. RESULTS We found mutant huntingtin protein (mHtt) aggregates to be present in all major components of the neurovascular unit of both R6/2 mice and HD patients. This was accompanied by an increase in blood vessel density, a reduction in blood vessel diameter, as well as BBB leakage in the striatum of R6/2 mice, which correlated with a reduced expression of tight junction-associated proteins and increased numbers of transcytotic vesicles, which occasionally contained mHtt aggregates. We confirmed the existence of similar vascular and BBB changes in HD patients. INTERPRETATION Taken together, our results provide evidence for alterations in the cerebral vasculature in HD leading to BBB leakage, both in the R6/2 mouse model and in HD patients, a phenomenon that may, in turn, have important pathophysiological implications.
A Highly Accurate Feature Fusion Network For Vehicle Detection In Surveillance Scenarios
In this paper we present a novel vehicle detection method for traffic surveillance scenarios. This work is distinguished by three key contributions. First, a feature fusion backbone network is proposed to extract vehicle features, which has the capability of modeling geometric transformations. Second, a vehicle proposal sub-network is applied to generate candidate vehicle proposals based on multi-level semantic feature maps. Finally, a head network is used to refine the categories and locations of these proposals. Benefiting from the above cues, vehicles with large variations in occlusion and lighting conditions can be detected with high accuracy. Furthermore, the method also demonstrates robustness in the case of motion blur caused by rapid movement of vehicles. We test our network on the DETRAC [21] benchmark detection challenge and it shows state-of-the-art performance. Specifically, the proposed method achieves the best performance not only at the 4 difficulty levels (overall, easy, medium and hard), but also under sunny, cloudy and night conditions.
Orthotopic liver transplantation with preservation of the inferior vena cava.
Piggyback orthotopic liver transplantation was performed in 24 patients during a period of 4 months. This represented 19% of the liver transplantation at our institution during that time. The piggyback method of liver insertion compared favorably with the standard operation in terms of patient survival, blood loss, incidence of vascular and biliary complications, and rate of retransplantation. The piggyback operation cannot be used in all cases, but when indicated and feasible its advantages are important enough to warrant its inclusion in the armamentarium of the liver transplant surgeon.
Fascia iliaca block for analgesia after hip arthroplasty: a randomized, double-blind, placebo-controlled trial.
BACKGROUND AND OBJECTIVES Fascia iliaca block (FIB) is often used to treat pain after total hip arthroplasty (THA), despite a lack of randomized trials evaluating its efficacy for this indication. The objective of this study was to assess the analgesic benefit of FIB after THA. Our primary hypothesis was that administration of FIB decreases the intensity of postoperative pain (numeric rating scale [NRS-11] score) compared with sham block (SB) in patients after THA. METHODS After institutional review board approval and informed consent, 32 eligible patients having THA were recruited. In the postoperative care unit, while all patients received intravenous morphine sulfate patient-controlled analgesia, patients reporting pain of 3 or greater on the NRS-11 scale were randomized to receive an ultrasound-guided fascia iliaca block (30 mL of 0.5% ropivacaine) or SB (30 mL of 0.9% NaCl) using an identical technique below the fascia iliaca. The primary outcome was pain intensity (NRS-11) after FIB. RESULTS Thirty-two patients (16 in each group) completed the study; all patients received an FIB. There was no difference in pain intensity after FIB versus SB (NRS-11 = 5.0 ± 0.6 vs 4.7 ± 0.6, respectively) or in opioid consumption (8.97 ± 1.6 vs 5.7 ± 1.6 mg morphine, respectively) between the groups at 1 hour. The morphine consumption after 24 hours was similar in both groups (49.0 ± 29.9 vs 50.4 ± 34.5 mg, P = 0.88, respectively). CONCLUSIONS The evidence in these data suggests that the difference in average pain intensity after FIB versus SB was not significant (95% confidence interval, -2.2 to 1.4 NRS units).
Modeling crowdsourcing systems: design and analysis of incentive mechanism and rating system
Over the past few years, we have seen an increasing popularity of crowdsourcing services [5]. Many companies are now providing such services, e.g., Amazon Mechanical Turk [1], Google Helpouts [3], and Yahoo! Answers [8]. Briefly speaking, crowdsourcing is an online, distributed problem solving paradigm and business production platform. It uses the power of today's Internet to solicit the collective intelligence of a large number of users. By relying on the wisdom of the crowd to solve posted tasks (or problems), crowdsourcing has become a promising paradigm for obtaining "solutions" that can have higher quality or lower cost than the conventional method of solving problems via specialized employees or contractors in a company. Typically, a crowdsourcing system operates with three basic components: users, tasks and rewards. Users are classified into requesters and workers. A user can be a requester or a worker, and in some cases a user can be a requester and worker at the same time. Requesters outsource tasks to workers and associate each task with certain rewards, which will be granted to the workers who solve the task. Workers, on the other hand, solve the assigned tasks and reply to requesters with solutions, and then take the reward, which can be in the form of money [1], entertainment [7] or altruism [8], etc. To have a successful crowdsourcing website, it is pertinent to attract a high volume of participating users (requesters and workers), and at the same time the solutions produced by workers have to be of high quality. In this paper we design a rating system and a mechanism to encourage users to participate and to incentivize workers to provide high-quality solutions. First, we develop a game-theoretic model to characterize workers' strategic behavior. We then design a class of effective incentive mechanisms consisting of a task bundling scheme and a rating system, and pay workers according to solution ratings from requesters. We develop a model to characterize the design space of a class of commonly used rating systems: threshold-based rating systems. We quantify the impact of such rating systems and of the bundling scheme on reducing requesters' reward payments while guaranteeing high-quality solutions. We find that the simplest rating system, e.g., one with two rating points, is an effective system in which requesters only need to provide binary feedback to indicate whether or not they are satisfied with a solution.
Electronic switching in phase-change memories
A detailed investigation of electronic switching in chalcogenide-based phase-change memory devices is presented. An original bandgap model consistent with the microscopic structure of both crystalline and amorphous chalcogenide is described, and a physical picture of the switching mechanism is proposed. Numerical simulations provide, for the first time, a quantitative description of the peculiar current-voltage curve of a Ge2Sb2Te5 resistor, in good agreement with measurements performed on test devices.
A Compact Dual-Polarized Printed Dipole Antenna With High Isolation for Wideband Base Station Applications
A compact dual-polarized printed dipole antenna for wideband base station applications is presented in this communication. The proposed dipole antenna is etched on three assembled substrates. Four horizontal triangular patches are introduced to form two dipoles in two orthogonal polarizations. Two integrated baluns connected with 50 Ω SMA launchers are used to excite the dipole antenna. The proposed dipole antenna achieves a more compact size than many reported wideband printed dipole and magneto-electric dipole antennas. Both simulated and measured results show that the proposed antenna has a port isolation higher than 35 dB over a 52% impedance bandwidth (VSWR < 1.5). Moreover, a stable radiation pattern with a peak gain of 7 dBi to 8.6 dBi is obtained within the operating band. The proposed dipole antenna is suitable as an array element and can be used for wideband base station antennas in the next generation IMT-Advanced communications.
Sparse Convolved Gaussian Processes for Multi-output Regression
We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network.
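Schematically, the convolution-process construction expresses each output as a latent GP smoothed by an output-specific kernel (generic notation, not necessarily the paper's):

    f_d(\mathbf{x}) = \sum_{q=1}^{Q} \int G_{d,q}(\mathbf{x} - \mathbf{z})\, u_q(\mathbf{z})\, d\mathbf{z}

The sparse scheme then assumes the outputs are conditionally independent given the latent functions evaluated at a set of inducing inputs, so the full covariance between outputs is approximated through those evaluations rather than computed exactly.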
Improving Verb Metaphor Detection by Propagating Abstractness to Words, Phrases and Individual Senses
Abstract words refer to things that cannot be seen, heard, felt, smelled, or tasted, as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be useful information for metaphor detection. Our contributions to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies; ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs, which leads to better metaphor detection; iii) we overcome the limitation of learning a single rating per word and show that multi-sense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words, as well as automatically created sense-specific abstractness ratings.
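As a rough sketch of the rating-extension step (the ridge regressor and the embedding source are illustrative assumptions, not necessarily the techniques compared in the paper):

    import numpy as np
    from sklearn.linear_model import Ridge

    def extend_ratings(embeddings, seed_ratings):
        # Fit a regressor from word vectors to human abstractness ratings, then score the whole vocabulary.
        # `embeddings`: dict word -> np.ndarray vector; `seed_ratings`: dict word -> abstractness score.
        seed = [w for w in seed_ratings if w in embeddings]
        X = np.array([embeddings[w] for w in seed])
        y = np.array([seed_ratings[w] for w in seed])
        model = Ridge(alpha=1.0).fit(X, y)
        return {w: float(model.predict(v.reshape(1, -1))[0]) for w, v in embeddings.items()}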
THE CHANGING SIZE DISTRIBUTION OF U.S. TRADE UNIONS AND ITS DESCRIPTION BY PARETO'S DISTRIBUTION
The size distribution of trade unions in the United States and changes in this distribution are documented. Because the most profound changes are taking place among very large unions, these are subject to special analysis by invoking Pareto’s distribution. This represents a new application of this distribution. Extensions to trade union wealth and to Britain are broached. The role of the public sector in these changes receives particular attention. A simple model helps account both for the logarithmic distribution of union membership and for the contrasting experiences of public and private sector unions since the 1970s.
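For reference, the Pareto form invoked for the upper tail of the size distribution is (generic notation):

    \Pr(X > x) = \left( \frac{x_m}{x} \right)^{\alpha}, \qquad x \ge x_m

so that, for the largest unions, a plot of log rank against log membership is approximately linear with slope -alpha.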
Gastrojejunostomy without partial gastrectomy to manage duodenal stenosis in a dog
A nine-year-old female Rottweiler with a history of repeated gastrointestinal ulceration and three previous surgical interventions related to gastrointestinal ulceration presented with anorexia and intermittent vomiting. Benign gastric outflow obstruction was diagnosed in the proximal duodenal area. The initial surgical plan was to perform a pylorectomy with gastroduodenostomy (Billroth I procedure), but owing to substantial scar tissue and adhesions in the area, a palliative gastrojejunostomy was performed. This procedure provided a bypass for the gastric contents into the proximal jejunum via the new stoma, yet still allowed bile and pancreatic secretions to flow normally via the patent duodenum. The gastrojejunostomy technique was successful in the surgical management of this case, which involved proximal duodenal stricture in the absence of neoplasia. Regular telephone follow-up over the next 12 months confirmed that the patient was doing well.
Public Sector Sponsored Continuous Vocational Training in East Germany: Institutional Arrangements, Participants, and Results of Empirical Evaluations
After unification of the East and West German economies in July 1990, the public sector devoted substantial resources to training the labour force of the former centrally planned East German economy. In this paper we describe the basic trends in the rules and regulations governing these efforts. We supplement this description with empirical stylized facts. Additionally, we report evaluations of the effects of this policy for training participants who began their training between mid 1990 and early 1993. These evaluations are based on micro data from the Socio-economic Panel (1990-1994), which allows us to follow the individuals' labour market status before and after training on a monthly and yearly basis, respectively. The general findings of these evaluations suggest that there are no positive effects on such measures as post-training unemployment risk or earnings.
Implementation of Back Propagation Algorithm (of neural networks) in VHDL
authentic record of my own work carried out under the supervision of Mr. SANJAY SHARMA and Mrs. MANU BANSAL at TIET, PATIALA. The matter presented in this thesis has not been submitted in any other University or Institute for the award of Master of Engineering. Signature of the student. This is to certify that the above statement made by the candidate is correct to the best of my knowledge. ACKNOWLEDGEMENT Words are often too few to reveal one's deep regards. An undertaking of work like this is never the outcome of the efforts of a single person. I take this opportunity to express my profound sense of gratitude and respect to all those who helped me through the duration of this thesis. Lecturer, TIET, PATIALA. Their enthusiasm and optimism made this experience both rewarding and enjoyable. Most of the novel ideas and solutions found in this thesis are the result of our numerous stimulating discussions. His feedback and editorial comments were also invaluable for the writing of this thesis. I am also grateful for the facilities provided for the completion of the thesis. I take pride in being the daughter of ideal, great parents, whose everlasting desire, sacrifice, affectionate blessings and help made it possible for me to complete my studies. At last, I would like to thank all the members and employees of the Electronics and Communication Department, TIET Patiala, whose love and affection made this possible.
Domain-dependent knowledge in answer set planning
In this article we consider three different kinds of domain-dependent control knowledge (temporal, procedural and HTN-based) that are useful in planning. Our approach is declarative and relies on the language of logic programming with answer set semantics (AnsProlog*). AnsProlog* is designed to plan without control knowledge. We show how temporal, procedural and HTN-based control knowledge can be incorporated into AnsProlog* by the modular addition of a small number of domain-dependent rules, without the need to modify the planner. We formally prove the correctness of our planner, both in the absence and presence of the control knowledge. Finally, we perform some initial experimentation that demonstrates the potential reduction in planning time that can be achieved when procedural domain knowledge is used to solve planning problems with large plan length.
CR-GAN: Learning Complete Representations for Multi-view Generation
Generating multi-view images from a single-view input is an essential yet challenging problem. It has broad applications in vision, graphics, and robotics. Our study indicates that the widely used generative adversarial network (GAN) may learn "incomplete" representations due to the single-pathway framework: an encoder-decoder network followed by a discriminator network. We propose CR-GAN to address this problem. In addition to the single reconstruction path, we introduce a generation sideway to maintain the completeness of the learned embedding space. The two learning pathways collaborate and compete in a parameter-sharing manner, yielding considerably improved generalization ability to "unseen" datasets. More importantly, the two-pathway framework makes it possible to combine both labeled and unlabeled data for self-supervised learning, which further enriches the embedding space for realistic generations. The experimental results prove that CR-GAN significantly outperforms state-of-the-art methods, especially when generating from "unseen" inputs in wild conditions.
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects and relies on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain useful appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model that intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously.
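A minimal sketch of the first stage, foreground motion segmentation accumulated into a scene motion heat map; the MOG2 segmenter and the normalization are illustrative choices, not the authors' exact pipeline:

    import cv2
    import numpy as np

    def motion_heat_map(video_path):
        # Accumulate per-frame foreground masks into a normalized "hot region" map (illustrative sketch).
        cap = cv2.VideoCapture(video_path)
        segmenter = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
        heat = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = segmenter.apply(frame)            # foreground motion segmentation
            if heat is None:
                heat = np.zeros(mask.shape, np.float32)
            heat += (mask > 0).astype(np.float32)    # trajectory accumulation over time
        cap.release()
        return None if heat is None else heat / max(float(heat.max()), 1.0)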
IEC 61850 substation configuration language as a basis for automated security and SDN configuration
IEC 61850 has revolutionized the way substations are configured and maintained. Its Substation Configuration Language (SCL) defines the parameters needed to configure individual devices and combine them into a working system. Security is addressed by IEC 62351, yet potential vulnerabilities remain. Best practice recommendations are for defense in depth. SCL contains sufficient information to auto-configure network equipment, firewalls, IDS and SDN-based networks.
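As an illustration of the auto-configuration idea, a minimal sketch that reads the Communication section of an SCD file and emits allow-list entries; the rule format and port choice are hypothetical, and the namespace/element names follow the standard SCL schema as an assumption:

    import xml.etree.ElementTree as ET

    NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}  # SCL schema namespace (assumed for this sketch)

    def ied_addresses(scd_file):
        # Extract (IED name, IP address) pairs from the Communication section of an SCD file.
        root = ET.parse(scd_file).getroot()
        comm = root.find("scl:Communication", NS)
        pairs = []
        if comm is None:
            return pairs
        for ap in comm.findall(".//scl:ConnectedAP", NS):
            ied = ap.get("iedName")
            ip = ap.findtext("scl:Address/scl:P[@type='IP']", default=None, namespaces=NS)
            if ied and ip:
                pairs.append((ied, ip))
        return pairs

    def allow_list(pairs, port=102):
        # Emit illustrative allow-rules (e.g., MMS over TCP/102) between the configured IEDs.
        return ["allow tcp %s -> %s port %d" % (a, b, port)
                for _, a in pairs for _, b in pairs if a != b]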
Self-Reported and Serum Cotinine-Validated Smoking in Pregnant Women in Estonia
Objectives: Although widely used in epidemiological studies, self-report has been shown to underestimate the prevalence of smoking among pregnant women. Objectives of this study were to examine the discrepancy between self-reported and cotinine-validated smoking status, and the sociodemographic characteristics associated with the misclassification of real smoking status among pregnant women in Tallinn, the capital of Estonia. Methods: Serum cotinine assays were performed on a subsample (n= 1360) of the pregnant women, who had participated in a recent study of human papillomavirus type 16 (HPV-16) seroprevalence in Estonia. In the present study, serum concentrations ≥15 ng/ml were used to distinguish current smokers from nonsmokers. The serum-validated smoking level was compared with the self-reported level in the records of the Estonian Medical Birth Registry. For the group of self-reported non-smokers, the differences between the cotinine-validated smokers and the cotinine-validated nonsmokers, with respect to their sociodemographic characteristics (age, ethnicity, educational level, employment status, marital status, parity), were estimated by logistic regression. Results: Of 1239 women who reported being nonsmokers, 259 (20.9%) had serum cotinine levels ≥15 ng/ml, and can be regarded as current smokers. Among self-reported nonsmokers, nondisclosure of current smoking was significantly more frequent in non-Estonian, less educated, socially inactive, cohabiting and multiparous women. Conclusions: Self-reported data on smoking in pregnant women underestimates the real smoking prevalence in Estonia. Maternal unwillingness to declare smoking during pregnancy needs to be taken into account in the practice of maternal and child health to better target prenatal smoking cessation interventions.
Toxicity of three phenolic compounds and their mixtures on the gram-positive bacteria Bacillus subtilis in the aquatic environment.
Although phenolic compounds are intensively studied for their toxic effects on the environment, the toxicity of catechol, resorcinol and hydroquinone mixtures is still not well understood because most previous bioassays are conducted solely with single compounds based on acute tests. In this work, the adverse effects of the individual phenolic compounds (catechol, resorcinol and hydroquinone) and the interactive effects of their binary and ternary mixtures on Bacillus subtilis (B. subtilis) were examined using a microcalorimetric method. The toxicity of the individual phenolic compounds follows the order catechol > resorcinol > hydroquinone, with respective half-inhibitory concentrations of 437, 728 and 934 µg/mL. The power-time curve of B. subtilis growth obtained by microcalorimetry is in complete agreement with the change in turbidity of B. subtilis against time, demonstrating that the microcalorimetric method agrees well with the routine microbiological method. The toxicity data obtained from the phenolic compound mixtures show that the catechol and hydroquinone mixture possesses a synergistic effect while the other mixtures display additive joint actions. Furthermore, the concentration addition (CA) and independent action (IA) models were employed to predict the toxicities of the phenolic compounds. The experimental results of microcalorimetry show no significant difference between the toxicity of the phenolic compound mixtures and that predicted by CA. However, the IA prediction underestimated the mixture effects in all the experiments.
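For reference, the two mixture-prediction models have the standard forms:

    \text{CA:}\quad \mathrm{EC}x_{\mathrm{mix}} = \left( \sum_{i=1}^{n} \frac{p_i}{\mathrm{EC}x_i} \right)^{-1},
    \qquad
    \text{IA:}\quad E(c_{\mathrm{mix}}) = 1 - \prod_{i=1}^{n} \bigl( 1 - E(c_i) \bigr)

where p_i is the proportion of component i in the mixture, ECx_i its concentration causing x% effect when applied alone, and E(c_i) the effect of component i at concentration c_i.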
Automatic Extraction and Classification of Footwear Patterns
Identification of the footwear traces from crime scenes is an important yet largely forgotten aspect of forensic intelligence and evidence. We present initial results from a developing automatic footwear classification system. The underlying methodology is based on large numbers of localized features located using MSER feature detectors. These features are transformed into robust SIFT or GLOH descriptors with the ranked correspondence between footwear patterns obtained through the use of constrained spectral correspondence methods. For a reference dataset of 368 different footwear patterns, we obtain a first rank performance of 85% for full impressions and 84% for partial impressions.
Resource management for OFDMA based next generation 802.11 WLANs
Recently, the IEEE 802.11ax Task Group has adopted OFDMA as a new technique for enabling multi-user transmission. It has also been decided that the scheduling duration should be the same for all users in multi-user OFDMA, so that the transmissions of the users end at the same time. In order to realize that condition, users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and wasted device energy. In this work, for OFDMA-based 802.11 WLANs, we first propose a practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm, the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate. We also calculate the overhead of our algorithms in a realistic setup and propose solutions for the implementation issues.
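Schematically, a drift-plus-penalty design of this kind (generic Lyapunov-optimization notation, not the paper's exact formulation) selects the schedule in each interval to minimize

    \Delta(\Theta(t)) + V\,\mathbb{E}\{\, p(t) \mid \Theta(t) \,\}

where Θ(t) collects the (virtual) queue backlogs, Δ is the one-slot Lyapunov drift, p(t) is the penalty (for example, padding overhead plus energy cost), and the parameter V trades the optimality gap against convergence rate, which is the trade-off reported above.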
A Performance Comparison of RAID-5 and Log-Structured Arrays
Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. 1. Background: Rising CPU and Memory Performance. The users of computers are currently enjoying unprecedented growth in the speed of computers. Gordon Bell said that between 1974 and 1984, single chip computers improved in performance by 40% per year, about twice the rate of minicomputers [Bell 84]. In the following year Bill Joy predicted an even faster growth [Joy 85]. Mainframe and supercomputer manufacturers, having difficulty keeping pace with the rapid growth predicted by "Joy's Law," cope by offering multiprocessors as their top-of-the-line product. But a fast CPU does not a fast system make. Gene Amdahl related CPU speed to main memory size using this rule [Siewiorek 82]: each CPU instruction per second requires one byte of main memory. If computer system costs are not to be dominated by the cost of memory, then Amdahl's constant suggests that memory chip capacity should grow at the same rate. Gordon Moore predicted that growth rate over 20 years: transistors/chip = 2^(year-1964). As predicted by Moore's Law, RAMs have quadrupled in capacity every two [Moore 75] to three years [Myers 86]. Recently the ratio of megabytes of main memory to MIPS has been defined as alpha [Garcia 84], with Amdahl's constant meaning alpha = 1. In part because of the rapid drop of memory prices, main memory sizes have grown faster than CPU speeds and many machines are shipped today with alphas of 3 or higher. To maintain the balance of costs in computer systems, secondary storage must match the advances in other parts of the system. A key meas…
IEC 61850 Communication Networks and Systems in Substations: An Overview for Users
Over the last decade, the "digitization" of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC 61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC 61850 protocol and how it meets these needs. I. Communication System Needs. Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephone-switching based remote control units were available as early as the 1930s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The "cost" of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation-to-master communication data paths operating at 64,000 bits per second are becoming commonplace, with an obvious migration path to much higher rates. With this migration in technology, the "cost" component of a data acquisition system has now become the configuration and documentation component. Consequently, a key requirement of a communication system is the ability of devices to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other "key" requirements include: • High-speed IED to IED communication
Probabilistic Relations between Words : Evidence from Reduction in Lexical Production
The ideas of frequency and predictability have played a fundamental role in models of human language processing for well over a hundred years (Schuchardt, 1885; Jespersen, 1922; Zipf, 1929; Martinet, 1960; Oldfield & Wingfield, 1965; Fidelholz, 1975; Jescheniak & Levelt, 1994; Bybee, 1996). While most psycholinguistic models have thus long included word frequency as a component, recent models have proposed more generally that probabilistic information about words, phrases, and other linguistic structure is represented in the minds of language users and plays a role in language comprehension (Jurafsky, 1996; MacDonald, 1993; McRae, Spivey-Knowlton, & Tanenhaus, 1998; Narayanan & Jurafsky, 1998; Trueswell & Tanenhaus, 1994), production (Gregory, Raymond, Bell, Fosler-Lussier, & Jurafsky, 1999; Roland & Jurafsky, 2000), and learning (Brent & Cartwright, 1996; Landauer & Dumais, 1997; Saffran, Aslin, & Newport, 1996; Seidenberg & MacDonald, 1999). In recent papers (Bell, Jurafsky, Fosler-Lussier, Girand, & Gildea, 1999; Gregory et al., 1999; Jurafsky, Bell, Fosler-Lussier, Girand, & Raymond, 1998), we have been studying the role of predictability and frequency in lexical production. Our goal is to understand the many factors that affect production variability as reflected in reduction processes such as vowel reduction, durational shortening, or final segmental deletion of words in spontaneous speech. One proposal that has resulted from this work is the Probabilistic Reduction Hypothesis: word forms are reduced when they have a higher probability. The probability of a word is conditioned on many aspects of its context, including neighboring words, syntactic and lexical structure, semantic expectations, and discourse factors. This proposal thus generalizes over earlier models which refer only to word frequency (Zipf, 1929; Fidelholz, 1975; Rhodes, 1992, 1996) or predictability (Fowler & Housum, 1987). In this paper we focus on a particular domain of probabilistic linguistic knowledge in lexical production: the role of local probabilistic relations between words.
Transforming Singapore health care: public-private partnership.
Prudent health care policies that encourage public-private participation in health care financing and provisioning have conferred on Singapore the advantage of flexible response as it faces the potentially conflicting challenges of becoming a regional medical hub attracting foreign patients and ensuring domestic access to affordable health care. Both the external and internal health care markets are two sides of the same coin, with competition decided on price and quality. For effective regulation, a tripartite model, involving not just the government and providers but empowered consumers, is needed. Government should distance itself from the provider role, while providers should compete - and cooperate - to create higher-value health care systems than others can offer. Health care policies should be better informed by health policy research.
Construction and Practice of Surveying and Mapping Engineering Specialty Practice Teaching System
The four dimensions are the rule system dimension, the organization dimension, the practice base dimension, and the innovation dimension of Surveying and Mapping Engineering. The "Four Dimensional Integration" practice teaching system is summarized in this article. The methods of constructing a practical teaching quality guarantee system are described in detail. The construction of a whole-process, all-aspects, multi-method integrated practice teaching quality monitoring system is introduced. The implementation achievements of the "Four Dimensional Integration" practice teaching system are summed up, in order to provide a reference for general universities in strengthening practice teaching. Keywords: Surveying and Mapping Engineering; Four Dimensions Integration; Practice teaching; Quality guarantee system; Quality monitoring system
A Comparison of Equality in Computer Algebra and Correctness in Mathematical Pedagogy
How do we recognize when an answer is "right"? This is a question that has bedevilled the use of computer systems in mathematics (as opposed to arithmetic) ever since their introduction. A computer system can certainly say that some answers are definitely wrong, in the sense that they are provably not an answer to the question posed. However, an answer can be mathematically right without being pedagogically right. Here we explore the differences and show that, despite the apparent distinction, it is possible to make many of the differences amenable to formal treatment, by asking "under which congruence is the pupil's answer equal to the teacher's?".
A randomized trial evaluating tobacco possession-use-purchase laws in the USA.
Tobacco Purchase-Use-Possession laws (PUP) are being implemented throughout the US, but it is still unclear whether they are effective in reducing smoking prevalence among the youth targeted by these public health policies. In the present study, 24 towns in Northern Illinois were randomly assigned to one of two conditions. One condition involved reducing commercial sources of youth access to tobacco (Control), whereas the second involved both reducing commercial sources of youth access to tobacco as well as fining minors for possessing or using tobacco (Experimental). Students in 24 towns in Northern Illinois in the United States completed a 74 item self-report survey in 2002, 2003, 2004 and 2005. At the start of the study, students were in grades 7-10. During each time period, students were classified as current smokers or nonsmokers (i.e., completely abstinent for the 30 consecutive days prior to assessment). The analyses included 25,404 different students and 50,725 assessments over the four time periods. A hierarchical linear modeling analytical approach was selected due to the multilevel data (i.e., town-level variables and individual-level variables), and nested design of sampling of youth within towns. Findings indicated that the rates of current smoking were not significantly different between the two conditions at baseline, but over time, rates increased significantly less quickly for adolescents in Experimental than those in Control towns. The implications of these findings are discussed.
Major Depression Detection from EEG Signals Using Kernel Eigen-Filter-Bank Common Spatial Patterns
Major depressive disorder (MDD) has become a leading contributor to the global burden of disease; however, there are currently no reliable biological markers or physiological measurements for efficiently and effectively dissecting the heterogeneity of MDD. Here we propose a novel method based on scalp electroencephalography (EEG) signals and a robust spectral-spatial EEG feature extractor called kernel eigen-filter-bank common spatial pattern (KEFB-CSP). The KEFB-CSP first filters the multi-channel raw EEG signals into a set of frequency sub-bands covering the range from theta to gamma bands, then spatially transforms the EEG signals of each sub-band from the original sensor space to a new space where the new signals (i.e., CSPs) are optimal for the classification between MDD and healthy controls, and finally applies the kernel principal component analysis (kernel PCA) to transform the vector containing the CSPs from all frequency sub-bands to a lower-dimensional feature vector called KEFB-CSP. Twelve patients with MDD and twelve healthy controls participated in this study, and from each participant we collected 54 resting-state EEGs of 6 s length (5 min and 24 s in total). Our results show that the proposed KEFB-CSP outperforms other EEG features including the powers of EEG frequency bands, and fractal dimension, which had been widely applied in previous EEG-based depression detection studies. The results also reveal that the 8 electrodes from the temporal areas gave higher accuracies than other scalp areas. The KEFB-CSP was able to achieve an average EEG classification accuracy of 81.23% in single-trial analysis when only the 8-electrode EEGs of the temporal area and a support vector machine (SVM) classifier were used. We also designed a voting-based leave-one-participant-out procedure to test the participant-independent individual classification accuracy. The voting-based results show that the mean classification accuracy of about 80% can be achieved by the KEFB-CSP feature and the SVM classifier with only several trials, and this level of accuracy seems to become stable as more trials (i.e., <7 trials) are used. These findings therefore suggest that the proposed method has a great potential for developing an efficient (requiring only a few 6-s EEG signals from the 8 electrodes over the temporal areas) and effective (~80% classification accuracy) EEG-based brain-computer interface (BCI) system which may, in the future, help psychiatrists provide individualized and effective treatments for MDD patients.
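A minimal sketch of the kind of pipeline the abstract describes (filter bank, CSP spatial filtering, kernel PCA, SVM), assuming NumPy/SciPy/scikit-learn; band edges, filter order, and component counts are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch of a KEFB-CSP-style pipeline: filter bank -> CSP -> kernel PCA -> SVM.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def csp_filters(trials, labels, n_pairs=2):
    # trials: (n_trials, n_channels, n_samples); labels: array of 0/1
    covs = {c: np.mean([t @ t.T / np.trace(t @ t.T)
                        for t in trials[labels == c]], axis=0) for c in (0, 1)}
    # Generalized eigendecomposition of (C0, C0 + C1); the extreme eigenvectors
    # maximize the variance ratio between the two classes.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    idx = np.argsort(vals)
    pick = np.r_[idx[:n_pairs], idx[-n_pairs:]]
    return vecs[:, pick].T                       # (2*n_pairs, n_channels)

def log_var_features(trials, W):
    z = np.einsum("fc,ncs->nfs", W, trials)      # spatially filtered signals
    v = z.var(axis=-1)
    return np.log(v / v.sum(axis=1, keepdims=True))

def kefb_csp_features(X, y, fs, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    # X: (n_trials, n_channels, n_samples), y: (n_trials,) with values 0/1
    feats = []
    for lo, hi in bands:
        Xb = bandpass(X, lo, hi, fs)
        W = csp_filters(Xb, y)
        feats.append(log_var_features(Xb, W))
    F = np.hstack(feats)
    return KernelPCA(n_components=10, kernel="rbf").fit_transform(F)

# Classification with an SVM, as in the abstract (for a real evaluation the CSP and
# kernel PCA must be fit on training folds only to avoid leakage):
# Z = kefb_csp_features(X, y, fs=256); clf = SVC().fit(Z, y)
```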
Recognizing Visual Signatures of Spontaneous Head Gestures
Head movements are an integral part of human nonverbal communication. As such, the ability to detect various types of head gestures from video is important for robotic systems that need to interact with people or for assistive technologies that may need to detect conversational gestures to aid communication. To this end, we propose a novel Multi-Scale Deep Convolution-LSTM architecture, capable of recognizing short and long term motion patterns found in head gestures, from video data of natural and unconstrained conversations. In particular, our models use Convolutional Neural Networks (CNNs) to learn meaningful representations from short time windows over head motion data. To capture longer term dependencies, we use Recurrent Neural Networks (RNNs) that extract temporal patterns across the output of the CNNs. We compare against classical approaches using discriminative and generative graphical models and show that our model is able to significantly outperform baseline models.
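A hedged PyTorch sketch of the general Convolution-LSTM idea above: a 1-D CNN encodes short windows of head-motion features and an LSTM aggregates the window encodings over time. All layer sizes, window lengths, and the number of gesture classes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMGesture(nn.Module):
    def __init__(self, n_features=6, n_classes=5, conv_dim=32, lstm_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_dim, conv_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # one vector per short window
        )
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_classes)

    def forward(self, x):
        # x: (batch, n_windows, n_features, window_len)
        b, w, f, t = x.shape
        z = self.conv(x.reshape(b * w, f, t)).squeeze(-1)   # (b*w, conv_dim)
        z = z.reshape(b, w, -1)
        out, _ = self.lstm(z)                 # temporal patterns across windows
        return self.head(out[:, -1])          # classify from the last time step

# Example: logits = ConvLSTMGesture()(torch.randn(8, 20, 6, 30))
```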
Rhythm and Beat Perception in Motor Areas of the Brain
When we listen to rhythm, we often move spontaneously to the beat. This movement may result from processing of the beat by motor areas. Previous studies have shown that several motor areas respond when attending to rhythms. Here we investigate whether specific motor regions respond to beat in rhythm. We predicted that the basal ganglia and supplementary motor area (SMA) would respond in the presence of a regular beat. To establish what rhythm properties induce a beat, we asked subjects to reproduce different types of rhythmic sequences. Improved reproduction was observed for one rhythm type, which had integer ratio relationships between its intervals and regular perceptual accents. A subsequent functional magnetic resonance imaging study found that these rhythms also elicited higher activity in the basal ganglia and SMA. This finding was consistent across different levels of musical training, although musicians showed activation increases unrelated to rhythm type in the premotor cortex, cerebellum, and SMAs (pre-SMA and SMA). We conclude that, in addition to their role in movement production, the basal ganglia and SMAs may mediate beat perception.
Effect of adjunct fluticasone propionate on airway physiology during rest and exercise in COPD.
RATIONALE Combination therapy with corticosteroid and long-acting β(2)-agonists (LABA) in a single inhaler is associated with superior effects on airway function and exercise performance in COPD compared with LABA monotherapy. The physiological effects of adding inhaled corticosteroid monotherapy to maintenance bronchodilator therapy (long-acting anticholinergics and LABA singly or in combination) in COPD are unknown. METHODS This was a randomized, double-blind, placebo-controlled, crossover study (NCT00387036) to compare the effects of inhaled fluticasone propionate 500 μg (FP500) twice-daily and placebo (PLA) on airway function during rest and exercise, measured during constant work rate cycle exercise at 75% of maximum incremental cycle work rate, in 17 patients with COPD (FEV(1) ≤ 70% predicted). RESULTS After treatment with FP500 compared to PLA, there were significant increases in post-dose measurements of FEV(1) (+115 mL, P = 0.006) and the FEV(1)/FVC ratio (+2.5%, P = 0.017), along with decreases in plethysmographic residual volume (-0.32L; P = 0.031), functional residual capacity (-0.30L, P = 0.033), and total lung capacity (-0.30L, P = 0.027) but no changes in vital capacity or inspiratory capacity (IC). Post-treatment comparisons demonstrated a significant improvement in endurance time by 188 ± 362 s with FP500 (P = 0.047) with no concomitant increase in dyspnea intensity. End-inspiratory and end-expiratory lung volumes were reduced at rest and throughout exercise with FP500 compared with PLA (P < 0.05). CONCLUSION Inhaled FP500 monotherapy was associated with consistent and clinically important improvements in FEV(1), static lung volumes, dynamic operating lung volumes, and exercise endurance when added to established maintenance long-acting bronchodilator therapy in patients with moderate to severe COPD.
Study and Handling Methods of Power IGBT Module Failures in Power Electronic Converter Systems
Power electronics plays an important role in a wide range of applications in order to achieve high efficiency and performance. Increasing efforts are being made to improve the reliability of power electronics systems to ensure compliance with more stringent constraints on cost, safety, and availability in different applications. This paper presents an overview of the major failure mechanisms of IGBT modules and their handling methods in power converter systems improving reliability. The major failure mechanisms of IGBT modules are presented first, and methods for predicting lifetime and estimating the junction temperature of IGBT modules are then discussed. Subsequently, different methods for detecting open- and short-circuit faults are presented. Finally, fault-tolerant strategies for improving the reliability of power electronic systems under field operation are explained and compared in terms of performance and cost.
Programmable rendering of line drawing from 3D scenes
This article introduces a programmable approach to nonphotorealistic line drawings from 3D models, inspired by programmable shaders in traditional rendering. This approach relies on the assumption generally made in NPR that style attributes (color, thickness, etc.) are chosen depending on generic properties of the scene such as line characteristics or depth discontinuities, etc. We propose a new image creation model where all operations are controlled through user-defined procedures in which the relations between style attributes and scene properties are specified. A view map describing all relevant support lines in the drawing and their topological arrangement is first created from the 3D model so as to ensure the continuity of all scene properties along its edges; a number of style modules operate on this map, by procedurally selecting, chaining, or splitting lines, before creating strokes and assigning drawing attributes. Consistent access to properties of the scene is provided from the different elements of the map that are manipulated throughout the whole process. The resulting drawing system permits flexible control of all elements of drawing style: First, different style modules can be applied to different types of lines in a view; second, the topology and geometry of strokes are entirely controlled from the programmable modules; and third, stroke attributes are assigned procedurally and can be correlated at will with various scene or view properties. We illustrate the components of our system and show how style modules successfully encode stylized visual characteristics that can be applied across a wide range of models.
Multimodal Network Embedding via Attention based Multi-view Variational Autoencoder
Learning the embedding for social media data has attracted extensive research interests as well as boomed a lot of applications, such as classification and link prediction. In this paper, we examine the scenario of a multimodal network with nodes containing multimodal contents and connected by heterogeneous relationships, such as social images containing multimodal contents (e.g., visual content and text description), and linked with various forms (e.g., in the same album or with the same tag). However, given the multimodal network, simply learning the embedding from the network structure or a subset of content results in sub-optimal representation. In this paper, we propose a novel deep embedding method, i.e., Attention-based Multi-view Variational Auto-Encoder (AMVAE), to incorporate both the link information and the multimodal contents for more effective and efficient embedding. Specifically, we adopt LSTM with attention model to learn the correlation between different data modalities, such as the correlation between visual regions and the specific words, to obtain the semantic embedding of the multimodal contents. Then, the link information and the semantic embedding are considered as two correlated views. A multi-view correlation learning based Variational Auto-Encoder (VAE) is proposed to learn the representation of each node, in which the embedding of link information and multimodal contents are integrated and mutually reinforced. Experiments on three real-world datasets demonstrate the superiority of the proposed model in two applications, i.e., multi-label classification and link prediction.
Are ceramic implants a viable alternative to titanium implants? A systematic literature review.
AIM The aim of this systematic review was to screen the literature in order to locate animal and clinical data on bone-implant contact (BIC) and clinical survival/success that would help to answer the question 'Are ceramic implants a viable alternative to titanium implants?' MATERIAL AND METHODS A literature search was performed in the following databases: (1) the Cochrane Oral Health Group's Trials Register, (2) the Cochrane Central Register of Controlled Trials (CENTRAL), (3) MEDLINE (Ovid), and (4) PubMed. To evaluate biocompatibility, animal investigations were scrutinized regarding the amount of BIC and to assess implant longevity clinical data were evaluated. RESULTS The PubMed search yielded 349 titles and the Cochrane/MEDLINE search yielded 881 titles. Based upon abstract screening and discarding duplicates from both searches, 100 full-text articles were obtained and subjected to additional evaluation. A further publication was included based on the manual search. The selection process resulted in the final sample of 25 studies. No (randomized) controlled clinical trials regarding the outcome of zirconia and alumina ceramic implants could be found. The systematic review identified histological animal studies showing similar BIC between alumina, zirconia and titanium. Clinical investigations using different alumina oral implants up to 10 years showed survival/success rates in the range of 23 to 98% for different indications. The included zirconia implant studies presented a survival rate from 84% after 21 months to 98% after 1 year. CONCLUSIONS No difference was found in the rate of osseointegration between the different implant materials in animal experiments. Only cohort investigations were located with questionable scientific value. Alumina implants did not perform satisfactorily and therefore, based on this review, are not a viable alternative to titanium implants. Currently, the scientific clinical data for ceramic implants in general and for zirconia implants in particular are not sufficient to recommend ceramic implants for routine clinical use. Zirconia, however, may have the potential to be a successful implant material, although this is as yet unsupported by clinical investigations.
Empirically Analyzing the Effect of Dataset Biases on Deep Face Recognition Systems
It is unknown what kind of biases modern in-the-wild face datasets have because of their lack of annotation. A direct consequence of this is that total recognition rates alone only provide limited insight about the generalization ability of Deep Convolutional Neural Networks (DCNNs). We propose to empirically study the effect of different types of dataset biases on the generalization ability of DCNNs. Using synthetically generated face images, we study the face recognition rate as a function of interpretable parameters such as face pose and light. The proposed method allows valuable details about the generalization performance of different DCNN architectures to be observed and compared. In our experiments, we find that: 1) Indeed, dataset bias has a significant influence on the generalization performance of DCNNs. 2) DCNNs can generalize surprisingly well to unseen illumination conditions and large sampling gaps in the pose variation. 3) Using the presented methodology we reveal that the VGG-16 architecture outperforms the AlexNet architecture at face recognition tasks because it can much better generalize to unseen face poses, although it has significantly more parameters. 4) We uncover a main limitation of current DCNN architectures, which is the difficulty of generalizing when different identities do not share the same pose variation. 5) We demonstrate that our findings on synthetic data also apply when learning from real-world data. Our face image generator is publicly available to enable the community to benchmark other DCNN architectures.
Reliability of measurements of muscle tone and muscle power in stroke patients.
OBJECTIVES to establish the reliability of the modified Ashworth scale for measuring muscle tone in a range of muscle groups (elbow, wrist, knee and ankle; flexors and extensors) and of the Medical Research Council scale for measuring muscle power in the same muscle groups and their direct antagonists. DESIGN a cross-sectional study involving repeated measures by two raters. We estimated reliability using the kappa statistic with quadratic weights (Kw). SETTING an acute stroke ward, a stroke rehabilitation unit and a continuing care facility. SUBJECTS people admitted to hospital with an acute stroke: 35 patients, median age 73 (interquartile range 65-80), 20 men and 15 women. RESULTS inter- and intra-rater agreement for the measurement of power was good to very good for all tested muscle groups (Kw = 0.84-0.96, Kw = 0.70-0.96). Inter- and intra-rater agreement for the measurement of tone in the elbow, wrist and knee flexors was good to very good (Kw = 0.73-0.96, Kw = 0.77-0.94). Inter- and intra-rater agreement for the measurement of tone in the ankle plantarflexors was moderate to good (Kw = 0.45-0.51, Kw = 0.59-0.64). CONCLUSIONS the Medical Research Council scale was reliable in the tested muscle groups. The modified Ashworth scale demonstrated reliability in all tested muscle groups except the ankle plantarflexors. If reliable measurement of tone at the ankle is required for a specific purpose (e.g. to measure the effect of therapeutic intervention), further work will be necessary.
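The agreement statistic used above, kappa with quadratic weights (Kw), can be computed directly; the sketch below assumes integer ratings from two raters on a shared ordinal scale (scale sizes and example values are illustrative).

```python
# Quadratically weighted kappa (Kw): 1 - sum(w * observed) / sum(w * expected),
# where w grows quadratically with the disagreement between the two ratings.
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_levels):
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.zeros((n_levels, n_levels))
    for r1, r2 in zip(a, b):
        observed[r1, r2] += 1
    observed /= observed.sum()
    # Expected co-occurrence under independence of the two raters
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    ii, jj = np.indices((n_levels, n_levels))
    weights = ((ii - jj) ** 2) / (n_levels - 1) ** 2   # quadratic disagreement weights
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# e.g. quadratic_weighted_kappa([0, 1, 2, 2], [0, 2, 2, 1], n_levels=5)
# (scikit-learn's cohen_kappa_score(..., weights="quadratic") gives the same value)
```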
Efficacy of cranial electric stimulation for the treatment of insomnia: a randomized pilot study.
OBJECTIVES This pilot study examined the potential efficacy of cranial electric stimulation for the treatment of insomnia. DESIGN The researchers tested the hypothesis through a randomized, double-blind, and placebo controlled clinical trial. The researchers approached eligible subjects who scored 21 or above on the Pittsburgh Insomnia Rating Scale. The researchers then randomly assigned the subjects to receive either an active or sham device. Each study subject received 60 min of active or sham treatment for five days. Following each intervention the subjects completed a sleep log, and did so again three and ten days later. SETTING The researchers conducted the study among active duty service members receiving mental health care on the Psychiatry Continuity Service (PCS), Walter Reed National Military Medical Center in Bethesda, MD. MAIN OUTCOME MEASURES The study's primary outcome variables were the time to sleep onset, total time slept, and number of awakenings as reported by the subjects in the serial sleep logs. The researchers identified a nearly significant increase in total time slept after three cranial electric stimulation treatments among all study subjects. A closer examination of this group revealed an interesting gender bias, with men reporting a robust increase in total time slept after one treatment, decay in effect over the next two interventions, and then an increase in total time slept after the fourth treatment. The researchers speculate that the up and down effect on total time slept could be the result of an insufficient dose of cranial electric stimulation.
Automatic verification of EMC immunity by simulation
Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool, which is validated on an LDO regulator.
Combining equilibrium , resampling , and analysts ’ views in portfolio optimization
Portfolio optimization methodologies play a central role in strategic asset allocation (SAA), where it is desirable to have portfolios that are efficient, diversified, and stable. Since the development of the traditional mean-variance approach of Markowitz (1952), many improvements have been made to overcome problems such as lack of diversification and strong sensitivity of optimal portfolio weights to expected returns.
Ang Social Network sa Facebook ng mga Taga-Batangas at ng mga Taga-Laguna: Isang Paghahambing
[English Abstract] Online social networking (OSN) has become greatly influential among Filipinos, with Facebook, Twitter, LinkedIn, Google+, and Instagram among the popular platforms. Their popularity, coupled with their intuitive and interactive use, allows one's personal information such as gender, age, address, relationship status, and list of friends to become publicly available. The accessibility of information from these sites allows, with the aid of computers, the study of a wide population's characteristics even on a provincial scale. Aside from being neighbouring locales, the residents of Laguna and Batangas both derive their livelihoods from two lakes, Laguna de Bay and Taal Lake. Residents of both provinces experience similar problems, such as, among many others, fish kill. The goal of this research is to find similarities in their respective online populations, particularly on Facebook. With the use of computational dynamic social network analysis (CDSNA), we found that the two communities are similar, among others, as follows: 1. Both populations are dominated by single young females; 2. Homophily was observed when choosing a friend in terms of age (i.e., friendships were created more often between people whose ages differ by at most five years); and 3. Heterophily was observed when choosing friends in terms of gender (i.e., more friendships were created between a male and a female than between two people of the same gender). This paper also presents the differences in the structure of the two social networks, such as degrees of separation and preferential attachment. [Filipino Abstract] Online social networking (OSN) on the Internet currently has a widespread influence on the lives of Filipinos, with Facebook, Twitter, LinkedIn, Google+, and Instagram among the most popular. The popularity and ease of use of these OSNs have paved the way for personal information such as gender, age, place of residence, civil status, and lists of friends to become public knowledge. Because of this, with the help of computers, it has become easier to study the characteristics of populations on a much wider scale, even as wide as provinces.
Unsupervised Sparse Vector Densification for Short Text Similarity
Sparse representations of text such as bag-of-words models or extended explicit semantic analysis (ESA) representations are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that involve aligning sparse representation via many-to-many, many-to-one, and one-to-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.
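A small sketch of the alignment idea behind densification, using a many-to-one mapping (each token aligns to its most similar token in the other text) over dense word vectors; the `embeddings` dictionary and the symmetrization over both directions are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def densified_similarity(text_a, text_b, embeddings):
    # `embeddings` is assumed to map a word to a NumPy vector (e.g., loaded from word2vec).
    va = [embeddings[w] for w in text_a.lower().split() if w in embeddings]
    vb = [embeddings[w] for w in text_b.lower().split() if w in embeddings]
    if not va or not vb:
        return 0.0
    # Many-to-one alignment: every token in A picks its best match in B, and vice versa;
    # averaging the two directions makes the score symmetric.
    a_to_b = np.mean([max(cosine(u, v) for v in vb) for u in va])
    b_to_a = np.mean([max(cosine(u, v) for u in va) for v in vb])
    return (a_to_b + b_to_a) / 2

# e.g. densified_similarity("stock market crash", "equity prices fell", embeddings)
```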
Effects of a recruitment maneuver on plasma levels of soluble RAGE in patients with diffuse acute respiratory distress syndrome: a prospective randomized crossover study
The soluble form of the receptor for advanced glycation end-products (sRAGE) is a promising marker for epithelial dysfunction, but it has not been fully characterized as a biomarker of acute respiratory distress syndrome (ARDS). Whether sRAGE could inform on the response to ventilator settings has been poorly investigated, and whether a recruitment maneuver (RM) may influence plasma sRAGE remains unknown. Twenty-four patients with moderate/severe, nonfocal ARDS were enrolled in this prospective monocentric crossover study and randomized into a “RM-SHAM” group when a 6-h-long RM sequence preceded a 6-h-long sham evaluation period, or a “SHAM-RM” group (inverted sequences). Protective ventilation was applied, and RM consisted of the application of 40 cmH2O airway pressure for 40 s. Arterial blood was sampled for gas analyses and sRAGE measurements, 5 min pre-RM (or 40-s-long sham period), 5, 30 min, 1, 4, and 6 h after the RM (or 40-s-long sham period). Mean PaO2/FiO2, tidal volume, PEEP, and plateau pressure were 125 mmHg, 6.8 ml/kg (ideal body weight), and 13 and 26 cmH2O, respectively. Median baseline plasma sRAGE levels were 3,232 pg/ml. RM induced a significant decrease in sRAGE (−1,598 ± 859 pg/ml) in 1 h (p = 0.043). At 4 and 6 h post-RM, sRAGE levels increased back toward baseline values. Pre-RM sRAGE was associated with RM-induced oxygenation improvement (AUC 0.84). We report the first kinetics study of plasma sRAGE after RM in ARDS. Our findings reinforce the value of plasma sRAGE as a biomarker of ARDS.
Active Disk Architecture for Databases
Today’s commodity disk drives, the basic unit of storage for computer systems large and small, are actually small computers, with a processor, memory and a network connection, in addition to the spinning magnetic material that stores the data. Large collections of data are becoming larger, and people are beginning to analyze, rather than simply store-and-forget, these masses of data. At the same time, advances in I/O performance have lagged the rapid development of commodity processor and memory technology. This paper describes the use of Active Disks to take advantage of the processing power on individual disk drives to run a carefully chosen portion of a relational database system. Moving a portion of the database processing to execute directly at the disk drives improves performance by: 1) dramatically reducing data traffic; and 2) exploiting the parallelism in large storage systems. It provides a new point of leverage to overcome the I/O bottleneck. This paper discusses how to map all the basic database operations select, project, and join onto an Active Disk system. The changes required are small and the performance gains are dramatic. A prototype based on the Postgres database system demonstrates a factor of 2x performance improvement on a small system using a portion of the TPC-D decision support benchmark, with the promise of larger improvements in more realistically-sized systems.
Neural Networks for Joint Sentence Classification in Medical Paper Abstracts
Existing models based on artificial neural networks (ANNs) for sentence classification often do not incorporate the context in which sentences appear, and classify sentences individually. However, traditional sentence classification approaches have been shown to greatly benefit from jointly classifying subsequent sentences, such as with conditional random fields. In this work, we present an ANN architecture that combines the effectiveness of typical ANN models to classify sentences in isolation, with the strength of structured prediction. Our model outperforms the state-of-the-art results on two different datasets for sequential sentence classification in medical abstracts.
An Experience Developing an IDS Stimulator for the Black-Box Testing of Network Intrusion Detection Systems
Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting “alert storm” is used to desensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.
WearIA: Wearable device implicit authentication based on activity information
Privacy and authenticity of data pushed by or into wearable devices are important concerns. Wearable devices equipped with various sensors can capture a user's activity at a fine-grained level. In this work, we investigate the possibility of using a user's activity information to develop an implicit authentication approach for wearable devices. We design and implement a framework that performs continuous and implicit authentication based on ambulatory activities performed by the user. The system is validated using data collected from 30 participants with wearable devices worn across various regions of the body. The evaluation results show that the proposed approach can achieve as high as a 97% accuracy rate with less than a 1% false positive rate to authenticate a user using a single wearable device. And the accuracy rate can go up to 99.6% when we use the fusion of multiple wearable devices.
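A hedged sketch of activity-based implicit authentication in the spirit of the abstract: windowed accelerometer statistics feed a per-user "owner vs. other" classifier. Window length, features, classifier, and decision threshold are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs, win_s=2.0):
    # signal: (n_samples, 3) accelerometer axes; summarize fixed-length windows.
    step = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - step + 1, step):
        w = signal[start:start + step]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.r_[w.mean(0), w.std(0), mag.mean(), mag.std(),
                           np.abs(np.diff(mag)).mean()])
    return np.array(feats)

def train_authenticator(owner_signal, other_signal, fs=50):
    # owner_signal / other_signal: (n_samples, 3) arrays sampled at rate fs (assumed data).
    X_own = window_features(owner_signal, fs)
    X_oth = window_features(other_signal, fs)
    X = np.vstack([X_own, X_oth])
    y = np.r_[np.ones(len(X_own)), np.zeros(len(X_oth))]
    return RandomForestClassifier(n_estimators=100).fit(X, y)

# At runtime, a window is accepted if the predicted owner probability exceeds a threshold:
# clf.predict_proba(window_features(new_signal, 50))[:, 1] > 0.8
```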
An Efficient FDTD Algorithm Based on the Equivalence Principle for Analyzing Onbody Antenna Performance
In this paper, on body antenna performance and its effect on the radio channel is analyzed. An efficient numerical technique based on the finite-difference time-domain technique and the equivalence principle is developed. The proposed technique begins with the problem decomposition by separately computing the wearable antennas and on body propagation involving the digital human phantom. The equivalence principle is used as an interface between the two computational domains. We apply this technique to analyze on body antenna and channel characteristics for three different planar body-worn antennas operating at the industrial-scientific-medical frequency band of 2.4 GHz. Simulated results are validated with measurement data with good agreement.
Detecting algorithmically generated malicious domain names
Recent Botnets such as Conficker, Kraken and Torpig have used DNS based "domain fluxing" for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such "domain fluxes" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP-addresses. We present and compare the performance of several distance metrics, including KL-distance, Edit distance and Jaccard measure. We train by using a good data set of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad data sets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives.
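One ingredient described above, comparing the character distribution of a domain group against benign traffic with a (symmetrized) KL divergence, can be sketched as follows; the alphabet, smoothing, and threshold are illustrative assumptions, and the edit-distance and Jaccard metrics are omitted.

```python
import numpy as np
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-"

def char_distribution(domains, eps=1e-6):
    # Smoothed unigram distribution over the characters of a set of domain names.
    counts = Counter(c for d in domains for c in d.lower() if c in ALPHABET)
    p = np.array([counts[c] for c in ALPHABET], dtype=float) + eps
    return p / p.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def flux_score(candidate_domains, benign_domains):
    p = char_distribution(candidate_domains)
    q = char_distribution(benign_domains)
    return 0.5 * (kl(p, q) + kl(q, p))          # symmetrized KL divergence

# Domains mapped to one IP set would be flagged when the score exceeds a threshold
# calibrated on known-good traffic, e.g. flux_score(group, alexa_top) > 1.0
```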
Public education in a multicultural society: TEACHING LITERATURE
Part I. Introduction and Critique: 1. Multicultural education: concepts, policies and controversies ROBERT K. FULLINWIDER 2. Antiracist civic education in the California history-social science framework LAWRENCE A. BLUM 3. A conflict of visions: multiculturalism and the social studies GILBERT T. SEWALL Part II. Culture and Identity: 4. Culture, subculture, multiculturalism: education options K. ANTHONY APPIAH 5. Multiculturalism and melange JEREMY WALDRON Part III. Relativism, reason, and public education: 6. Locke and multiculturalism: toleration, relativism, and reason SUSAN KHIN ZAW 7. Challenges of multiculturalism in democratic education AMY GUTMANN Part IV. Teaching History: 8. Multiculturalism and history: historical perspectives and present prospects GARY B. NASH 9. Patriotic history ROBERT K. FULLINWIDER Part V. Teaching Literature: 10. Multicultural literature and civic education: a problematic relationship with possibilities SANDRA STOTSKY 11. Teaching American literary history ARTHUR EVENCHIK.
Vehicle Tracking, Monitoring and Alerting System: A Review
The goal of this paper is to review past work on vehicle tracking, monitoring and alerting systems, to categorize the various methodologies, and to identify new trends. Vehicle tracking, monitoring and alerting is a challenging problem. Various challenges are encountered in vehicle tracking, monitoring and alerting due to the difficulty of obtaining accurate real-time vehicle locations and the limitations of alerting systems. GPS (Global Positioning System) is the most widely used technology for tracking a vehicle and keeping it under regular monitoring. The objective of a tracking system is to manage and control transport using a GPS transceiver to know the current location of the vehicle. In a number of systems, RFID (Radio Frequency Identification) is chosen as one of the technologies implemented for bus monitoring. GSM (Global System for Mobile Communication) is the most widely used technology for the alerting system. The alerting system is essential for providing the location of and information about the vehicle to the passenger, owner or user.
Production and evaluation of ZnS thin films by the MOCVD technique as alpha-particle detectors
Abstract Zinc sulphide thin films are deposited on several substrates such as glass, quartz, silicon, Teflon and Mylar. The chemical reaction of hydrogen sulphide with dimethylzinc is utilised for the deposition process. The optimum working conditions (deposition rate versus flow rate, temperature and pressure) are obtained. The acquired films are characterised and the films are examined for alpha-particle sensitivity. In general, the deposited films have very good adhesion to the substrate surfaces. X-ray diffraction data indicates that the films have a sphalerite structure with (111) preferred orientation. Rutherford backscattering spectrometry and electron microprobe analysis data show that films are stoichiometric and relatively free of impurities. It seems that the intrinsic defect concentration in the deposited film is insufficient to make the self-activated ZnS film an efficient alpha-particle counter.
A Survey on Data-Flow Testing
Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. Over the past four decades, DFT has received continuous attention, and various approaches have been proposed from different perspectives to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.
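To make the notion of a def-use pair concrete, the toy sketch below runs a reaching-definitions analysis over a tiny made-up control-flow graph and enumerates the pairs; it is an illustration of the concept, not tooling from the survey.

```python
from collections import defaultdict

# Each statement id maps to (defined variable or None, set of used variables);
# `succ` maps a statement id to its successors in the control-flow graph.
stmts = {
    1: ("x", set()),        # x = input()
    2: ("y", {"x"}),        # y = x + 1
    3: ("x", {"y"}),        # if branch taken: x = y * 2
    4: (None, {"x", "y"}),  # print(x + y)
}
succ = {1: [2], 2: [3, 4], 3: [4], 4: []}

def out_of(s, stmts, in_sets):
    d, _ = stmts[s]
    out = {(v, i) for (v, i) in in_sets[s] if v != d}   # kill other defs of the same var
    if d is not None:
        out.add((d, s))                                  # gen this definition
    return out

def reaching_definitions(stmts, succ):
    in_sets = defaultdict(set)
    changed = True
    while changed:                                       # iterate to a fixed point
        changed = False
        for s in stmts:
            preds = [p for p in stmts if s in succ[p]]
            new_in = set().union(*(out_of(p, stmts, in_sets) for p in preds)) if preds else set()
            if new_in != in_sets[s]:
                in_sets[s] = new_in
                changed = True
    return in_sets

def def_use_pairs(stmts, succ):
    in_sets = reaching_definitions(stmts, succ)
    return sorted((def_stmt, use_stmt, var)
                  for use_stmt, (_, uses) in stmts.items()
                  for (var, def_stmt) in in_sets[use_stmt] if var in uses)

print(def_use_pairs(stmts, succ))   # e.g. (1, 2, 'x'), (1, 4, 'x'), (2, 3, 'y'), (2, 4, 'y'), (3, 4, 'x')
```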
Naked eye star visibility and limiting magnitude mapped from DMSP-OLS satellite data
We extend the method introduced by Cinzano et al. (2000a) to map the artificial sky brightness in large territories from DMSP satellite data, in order to map the naked eye star visibility and telescopic limiting magnitudes. For these purposes we take into account the altitude of each land area from GTOPO30 world elevation data, the natural sky brightness in the chosen sky direction, based on Garstang modelling, the eye capability with naked eye or a telescope, based on the Schaefer (1990) and Garstang (2000b) approach, and the stellar extinction in the visual photometric band. For near zenith sky directions we also take into account screening by terrain elevation. Maps of naked eye star visibility and telescopic limiting magnitudes are useful to quantify the capability of the population to perceive our Universe, to evaluate the future evolution, to make cross correlations with statistical parameters and to recognize areas where astronomical observations or popularisation can still acceptably be made. We present, as an application, maps of naked eye star visibility and total sky brightness in V band in Europe at the zenith with a resolution of approximately 1 km.
CSLIM: contextual SLIM recommendation algorithms
Context-aware recommender systems (CARS) take contextual conditions into account when providing item recommendations. In recent years, context-aware matrix factorization (CAMF) has emerged as an extension of the matrix factorization technique that also incorporates contextual conditions. In this paper, we introduce another matrix factorization approach for contextual recommendations, the contextual SLIM (CSLIM) recommendation approach. It is derived from the sparse linear method (SLIM) which was designed for Top-N recommendations in traditional recommender systems. Based on the experimental evaluations over several context-aware data sets, we demonstrate that CSLIM can be an effective approach for context-aware recommendations, in many cases outperforming state-of-the-art CARS algorithms in the Top-N recommendation task.
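CSLIM extends SLIM, whose core is a sparse item-item regression; the sketch below shows only that SLIM core (the contextual weighting of CSLIM is omitted), learned column by column with scikit-learn's ElasticNet, using illustrative regularization values.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_slim(R, l1_ratio=0.5, alpha=0.1):
    # Learn a sparse, non-negative item-item matrix W so that R is approximated by R @ W.
    n_items = R.shape[1]
    W = np.zeros((n_items, n_items))
    for j in range(n_items):
        target = R[:, j].copy()
        R_masked = R.copy()
        R_masked[:, j] = 0.0                   # an item must not predict itself
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           positive=True, fit_intercept=False, max_iter=500)
        model.fit(R_masked, target)
        W[:, j] = model.coef_
    return W

# Top-N recommendation: score all items for each user and rank the unrated ones.
R = np.random.binomial(1, 0.2, size=(50, 30)).astype(float)   # toy implicit feedback
W = fit_slim(R)
scores = R @ W
scores[R > 0] = -np.inf                        # exclude already-consumed items
top_n = np.argsort(-scores, axis=1)[:, :5]
```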
Countering Adversarial Images using Input Transformations
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods.
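Two of the studied transformations, bit-depth reduction and JPEG compression, are easy to reproduce; the sketch below uses Pillow and NumPy with illustrative bit depth and JPEG quality, and omits total variance minimization and image quilting.

```python
import io
import numpy as np
from PIL import Image

def reduce_bit_depth(img_array, bits=3):
    # img_array: uint8 array in [0, 255]; keep only `bits` quantization levels per channel.
    levels = 2 ** bits
    quantized = np.floor(img_array / 255.0 * (levels - 1)) * (255.0 / (levels - 1))
    return quantized.astype(np.uint8)

def jpeg_compress(img_array, quality=75):
    # Round-trip the image through an in-memory JPEG encode/decode.
    buf = io.BytesIO()
    Image.fromarray(img_array).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

# A defended pipeline would apply the transform before the (possibly transform-trained)
# classifier: logits = model(preprocess(jpeg_compress(reduce_bit_depth(x))))
```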
Effectiveness, relapse prevention and mechanisms of change of cognitive therapy vs. interpersonal therapy for depression: Study protocol for a randomised controlled trial
BACKGROUND Major depression is a common mental disorder that substantially impairs quality of life and has high societal costs. Although psychotherapies have proven to be effective antidepressant treatments, initial response rates are insufficient and the risk of relapse and recurrence is high. Improvement of treatments is badly needed. Studying the mechanisms of change in treatment might be a good investment for improving everyday mental health care. However, the mechanisms underlying therapeutic change remain largely unknown. The objective of the current study is to assess both the effectiveness of two commonly used psychotherapies for depression in terms of reduction of symptoms and prevention of relapse on short and long term, as well as identifying underlying mechanisms of change. METHODS In a randomised trial we will compare (a) Cognitive Therapy (CT) with (b) Interpersonal therapy (IPT), and (c) an 8-week waiting list condition followed by treatment of choice. One hundred eighty depressed patients (aged 18-65) will be recruited in a mental health care centre in Maastricht (the Netherlands). Eligible patients will be randomly allocated to one of the three intervention groups. The primary outcome measure of the clinical evaluation is depression severity measured by the Beck Depression Inventory-II (BDI-II). Other outcomes include process variables such as dysfunctional beliefs, negative attributions, and interpersonal problems. All self-report outcome assessments will take place on the internet at baseline, three, seven, eight, nine, ten, eleven, twelve and twenty-four months. At 24 months a retrospective telephone interview will be administered. Furthermore, a rudimentary analysis of the cost-effectiveness will be embedded. The study has been ethically approved and registered. DISCUSSION By comparing CT and IPT head-to-head and by investigating multiple potential mediators and outcomes at multiple time points during and after therapy, we hope to provide new insights in the effectiveness and mechanisms of change of CT and IPT for depression, and contribute to the improvement of mental health care for adults suffering from depression. TRIAL REGISTRATION The study has been registered at the Netherlands Trial Register, part of the Dutch Cochrane Centre (ISRCTN67561918).
Discovering affective regions in deep convolutional neural networks for visual sentiment prediction
In this paper, we address the problem of automatically recognizing emotions in still images. While most current work focuses on improving whole-image representations using CNNs, we argue that discovering affective regions and supplementing local features will boost the performance, which is inspired by the observation that both global distributions and salient objects carry massive sentiments. We propose an algorithm to discover affective regions via a deep framework, in which we use an off-the-shelf tool to generate N object proposals from a query image and rank these proposals with their objectness scores. Then, each proposal's sentiment score is computed using a pre-trained and fine-tuned CNN model. We combine both scores and select the top K regions from the N candidates. These K regions are regarded as the most affective ones of the input image. Finally, we extract deep features from the whole image and the selected regions, respectively, and the sentiment label is predicted. The experiments show that our method is able to detect the affective local regions and achieve state-of-the-art performances on several popular datasets.
Stochastic Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.
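The block-coordinate Frank-Wolfe update can be illustrated on a toy problem, a convex quadratic over a product of probability simplices, rather than on the structural SVM dual itself; the block sizes, line search, and random objective below are generic illustrative choices.

```python
import numpy as np

def bcfw_quadratic(Q, c, n_blocks, block_size, n_iters=2000, seed=0):
    # Minimize 0.5 * x^T Q x + c^T x subject to each block of x lying on the simplex,
    # updating one randomly chosen block per iteration (block-coordinate Frank-Wolfe).
    rng = np.random.default_rng(seed)
    x = np.tile(np.full(block_size, 1.0 / block_size), n_blocks)
    for _ in range(n_iters):
        i = rng.integers(n_blocks)
        sl = slice(i * block_size, (i + 1) * block_size)
        grad_block = (Q @ x + c)[sl]
        # Linear minimization oracle on the simplex: all mass on the smallest gradient coordinate.
        s = np.zeros(block_size)
        s[np.argmin(grad_block)] = 1.0
        d_block = s - x[sl]
        # Closed-form line search for a quadratic objective, clipped to [0, 1].
        Q_bb = Q[sl, sl]
        denom = d_block @ Q_bb @ d_block
        gamma = 1.0 if denom <= 0 else np.clip(-(grad_block @ d_block) / denom, 0.0, 1.0)
        x[sl] += gamma * d_block
    return x

# Example: random PSD objective with 4 blocks of size 5
A = np.random.randn(20, 20); Q = A.T @ A; c = np.random.randn(20)
x_opt = bcfw_quadratic(Q, c, n_blocks=4, block_size=5)
```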
Viewing the Kenyan health system through an equity lens: implications for universal coverage
INTRODUCTION Equity and universal coverage currently dominate policy debates worldwide. Health financing approaches are central to universal coverage. The way funds are collected, pooled, and used to purchase or provide services should be carefully considered to ensure that population needs are addressed under a universal health system. The aim of this paper is to assess the extent to which the Kenyan health financing system meets the key requirements for universal coverage, including income and risk cross-subsidisation. Recommendations on how to address existing equity challenges and progress towards universal coverage are made. METHODS An extensive review of published and gray literature was conducted to identify the sources of health care funds in Kenya. Documents were mainly sourced from the Ministry of Medical Services and the Ministry of Public Health and Sanitation. Country level documents were the main sources of data. In cases where data were not available at the country level, they were sought from the World Health Organisation website. Each financing mechanism was analysed in respect to key functions namely, revenue generation, pooling and purchasing. RESULTS The Kenyan health sector relies heavily on out-of-pocket payments. Government funds are mainly allocated through historical incremental approach. The sector is largely underfunded and health care contributions are regressive (i.e. the poor contribute a larger proportion of their income to health care than the rich). Health financing in Kenya is fragmented and there is very limited risk and income cross-subsidisation. The country has made little progress towards achieving international benchmarks including the Abuja target of allocating 15% of government's budget to the health sector. CONCLUSIONS The Kenyan health system is highly inequitable and policies aimed at promoting equity and addressing the needs of the poor and vulnerable have not been successful. Some progress has been made towards addressing equity challenges, but universal coverage will not be achieved unless the country adopts a systemic approach to health financing reforms. Such an approach should be informed by the wider health system goals of equity and efficiency.
Continuous arc rotation of the couch therapy for the delivery of accelerated partial breast irradiation: a treatment planning analysis.
PURPOSE We present a novel form of arc therapy: continuous arc rotation of the couch (C-ARC) and compare its dosimetry with three-dimensional conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), and volumetric-modulated arc therapy (VMAT) for accelerated partial breast irradiation (APBI). C-ARC, like VMAT, uses a modulated beam aperture and dose rate, but with the couch, not the gantry, rotating. METHODS AND MATERIALS Twelve patients previously treated with APBI using 3D-CRT were replanned with (1) C-ARC, (2) IMRT, and (3) VMAT. C-ARC plans were designed with one medial and one lateral arc through which the couch rotated while the gantry was held stationary at a tangent angle. Target dose coverage was normalized to the 3D-CRT plan. Comparative endpoints were dose to normal breast tissue, lungs, and heart and monitor units prescribed. RESULTS Compared with 3D-CRT, C-ARC, IMRT, and VMAT all significantly reduced the ipsilateral breast V50% by the same amount (mean, 7.8%). Only C-ARC and IMRT plans significantly reduced the contralateral breast maximum dose, the ipsilateral lung V5Gy, and the heart V5%. C-ARC used on average 40%, 30%, and 10% fewer monitor units compared with 3D-CRT, IMRT, and VMAT, respectively. CONCLUSIONS C-ARC provides improved dosimetry and treatment efficiency, which should reduce the risks of toxicity and secondary malignancy. Its tangent geometry avoids irradiation of critical structures that is unavoidable using the en face geometry of VMAT.
Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent
For large scale learning problems, it is desirable if we can obtain the optimal model parameters by going through the data in only one pass. Polyak and Juditsky (1992) showed that asymptotically the test performance of the simple average of the parameters obtained by stochastic gradient descent (SGD) is as good as that of the parameters which minimize the empirical cost. However, to our knowledge, despite its optimal asymptotic convergence rate, averaged SGD (ASGD) received little attention in recent research on large scale learning. One possible reason is that it may take a prohibitively large number of training samples for ASGD to reach its asymptotic region for most real problems. In this paper, we present a finite sample analysis for the method of Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a huge number of samples for ASGD to reach its asymptotic region for improperly chosen learning rate. More importantly, based on our analysis, we propose a simple way to properly set learning rate so that it takes a reasonable amount of data for ASGD to reach its asymptotic region. We compare ASGD using our proposed learning rate with other well known algorithms for training large scale linear classifiers. The experiments clearly show the superiority of ASGD.
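A hedged sketch of averaged SGD (ASGD) for L2-regularized logistic regression: run plain SGD for a single pass and return the running average of the iterates. The step-size schedule eta0 / (1 + lam * eta0 * t)^0.75 is a commonly used form in the spirit of the paper's recommendation; all constants here are illustrative.

```python
import numpy as np

def asgd_logistic(X, y, lam=1e-4, eta0=1.0, seed=0):
    # X: (n, d), y in {-1, +1}; one pass over a random permutation of the data.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t, i in enumerate(rng.permutation(n), start=1):
        eta = eta0 / (1.0 + lam * eta0 * t) ** 0.75
        margin = y[i] * (X[i] @ w)
        # Gradient of lam/2 * ||w||^2 + log(1 + exp(-y * w.x)) at the sampled example.
        grad = lam * w - y[i] * X[i] / (1.0 + np.exp(margin))
        w -= eta * grad
        w_avg += (w - w_avg) / t          # running average of the iterates
    return w_avg

# e.g. w = asgd_logistic(X_train, y_train); preds = np.sign(X_test @ w)
```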
Hybrid transformer-based tunable integrated duplexer with antenna impedance tracking loop
Electrical balance between the antenna and the balance network impedances is crucial for achieving high isolation in a hybrid transformer duplexer. In this paper an auto-calibration loop for tuning a novel integrated balance network to track the antenna impedance variations is introduced. It achieves an isolation of more than 50 dB in the transmit and receive bands with an antenna VSWR within 2:1 and between 1.7 and 2.2 GHz. The duplexer along with a cascaded direct-conversion receiver achieves a noise figure of 5.3 dB, a conversion gain of 45 dB, and consumes 34 mA. The insertion loss in the transmit path was less than 3.8 dB. Implemented in a 65-nm CMOS process, the chip occupies an active area of 2.2 mm2.
Mining Frequent Patterns by Pattern-Growth: Methodology and Implications
Mining frequent patterns has been a focused topic in data mining research in recent years, with the development of numerous interesting algorithms for mining association, correlation, causality, sequential patterns, partial periodicity, constraint-based frequent pattern mining, associative classification, emerging patterns, etc. Most of the previous studies adopt an Apriori-like, candidate generation-and-test approach. However, based on our analysis, candidate generation and test may still be expensive, especially when encountering long and numerous patterns. A new methodology, called frequent pattern growth, which mines frequent patterns without candidate generation, has been developed. The method adopts a divide-and-conquer philosophy to project and partition databases based on the currently discovered frequent patterns and grow such patterns to longer ones in the projected databases. Moreover, efficient data structures have been developed for effective database compression and fast in-memory traversal. Such a methodology may eliminate or substantially reduce the number of candidate sets to be generated and also reduce the size of the database to be iteratively examined, and, therefore, lead to high performance. In this paper, we provide an overview of this approach and examine its methodology and implications for mining several kinds of frequent patterns, including association, frequent closed itemsets, max-patterns, sequential patterns, and constraint-based mining of frequent patterns. We show that frequent pattern growth is efficient at mining large databases and its further development may lead to scalable mining of many other kinds of patterns as well.
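To illustrate the divide-and-conquer idea in isolation, the following Python sketch mines frequent itemsets by recursing on projected (conditional) databases. It deliberately omits the FP-tree compression and the other data structures the paper describes, so it shows the pattern-growth logic rather than an efficient implementation.

```python
from collections import defaultdict

def pattern_growth(transactions, min_support, suffix=frozenset()):
    """Simplified frequent-pattern growth (no FP-tree compression).

    Count frequent items, emit each together with the current suffix
    pattern, then recurse on the projected (conditional) database of each
    frequent item. Items must be orderable (e.g., strings) so that each
    itemset is generated exactly once.
    """
    counts = defaultdict(int)
    for t in transactions:
        for item in set(t):
            counts[item] += 1
    patterns = {}
    for item, count in counts.items():
        if count >= min_support:
            pattern = frozenset(suffix | {item})
            patterns[pattern] = count
            # project onto transactions containing `item`, keeping only
            # items ordered after it to avoid duplicate patterns
            projected = [[i for i in t if i > item] for t in transactions if item in t]
            patterns.update(pattern_growth(projected, min_support, pattern))
    return patterns

# Example: pattern_growth([['a', 'b', 'c'], ['a', 'b'], ['b', 'c']], min_support=2)
# returns {a}:2, {b}:3, {c}:2, {a,b}:2, {b,c}:2
```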
Deep Discrete Hashing with Self-supervised Pairwise Labels
Hashing methods have been widely used for applications of large-scale image retrieval and classification. Non-deep hashing methods using handcrafted features have been significantly outperformed by deep hashing methods due to their better feature representation and end-to-end learning framework. However, the most striking successes in deep hashing have mostly involved discriminative models, which require labels. In this paper, we propose a novel unsupervised deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image retrieval and classification. In the proposed framework, we address two main problems: (1) how to directly learn discrete binary codes, and (2) how to equip the binary representation with the ability to perform accurate image retrieval and classification in an unsupervised way. We resolve these problems by introducing an intermediate variable and a loss function steering the learning process, which is based on the neighborhood structure in the original space. Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17) demonstrate that our DDH significantly outperforms existing hashing methods by a large margin in terms of mAP for image retrieval and object recognition. Code is available at https://github.com/htconquer/ddh.
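As a point of reference for how such binary codes are used at query time, the sketch below shows the generic sign-binarization and Hamming-distance ranking shared by most hashing methods. The encoder producing the real-valued embeddings is assumed to exist, and the DDH training objective itself is not reproduced here.

```python
import numpy as np

def to_binary_codes(embeddings):
    """Binarize real-valued embeddings into {0,1} codes via the sign function.
    This is the generic discretization step common to hashing methods; DDH
    itself learns the codes with an intermediate variable and a
    neighborhood-based loss, which is not shown here."""
    return (embeddings > 0).astype(np.uint8)

def hamming_rank(query_code, database_codes, top_k=10):
    """Return indices of the top_k database items closest in Hamming distance."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k]

# Usage (hypothetical encoder producing real-valued embeddings):
# db_codes  = to_binary_codes(encoder(database_images))
# q_code    = to_binary_codes(encoder(query_image[None]))[0]
# neighbors = hamming_rank(q_code, db_codes, top_k=10)
```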
Pedagogies Against the State
In our everyday conversations and relations with others we treat individuals and ourselves as stable entities. We relate to each other as though we are responding to a particular person with a specific identity, if we know them or even if we do not. We tend to assume a clear distinction between people and the world in which they live. Frequently we use language as though it was a transparent medium of communication in which we express clear meaning, epitomised by phrases such as, ‘do you see what I mean,’ or ‘I see what you mean.’ Frequently we regard vision as a ‘natural’ universal process in the sense that in our cultural settings we assume we see the world in a similar way. We suppose that knowledge is neutral and associated with ideas of human progress and development. However, just about all of these ‘everyday’ suppositions that facilitate social interaction have been the subject of detailed interrogation in the worlds of philosophy, sociology, art, science, anthropology, literary theory, cultural studies, psychoanalysis and other disciplines, concerned with trying to understand how the human subject is formed. Indeed the term ‘human subject’ is indicative of a shift from viewing people as free-thinking individuals functioning independently in society towards understanding them as subjects who are largely affected and regulated as subjects by their social contexts and conditions.
Impaired coronary blood flow in nonculprit arteries in the setting of acute myocardial infarction. The TIMI Study Group. Thrombolysis in myocardial infarction.
OBJECTIVES AND BACKGROUND While attention has focused on coronary blood flow in the culprit artery in acute myocardial infarction (MI), flow in the nonculprit artery has not been studied widely, in part because it has been assumed to be normal. We hypothesized that slower flow in culprit arteries, larger territories infarcted and hemodynamic perturbations may be associated with slow flow in nonculprit arteries. METHODS The number of frames for dye to first reach distal landmarks (corrected TIMI [Thrombolysis in Acute Myocardial Infarction] frame count [CTFC]) was counted in 1,817 nonculprit arteries from the TIMI 4, 10A, 10B and 14 thrombolytic trials. RESULTS Nonculprit artery flow was slowed to 30.9 +/- 15.0 frames at 90 min after thrombolytic administration, which is 45% slower than normal flow in the absence of acute MI (21 +/- 3.1, p < 0.0001). Patients with TIMI grade 3 flow in the culprit artery had faster nonculprit artery CTFCs than those patients with TIMI grades 0, 1 or 2 flow (29.1 +/- 13.7, n = 1,050 vs. 33.3 +/- 16.1, n = 752, p < 0.0001). The nonculprit artery CTFC improved between 60 and 90 min (3.3 +/- 17.9 frames, n = 432, p = 0.0001), and improvements were related to improved culprit artery flow (p = 0.0005). Correlates of slower nonculprit artery flow included a pulsatile flow pattern (i.e., systolic flow reversal) in the nonculprit artery (p < 0.0001) and in the culprit artery (p = 0.01), a left anterior descending artery culprit artery location (p < 0.0001), a decreased systolic blood pressure (p = 0.01), a decreased ventriculographic cardiac output (p = 0.02), a decreased double product (p = 0.0002), a greater percent diameter stenosis of the nonculprit artery (p = 0.01) and a greater percent of the culprit artery bed lying distal to the stenosis (p = 0.04). Adjunctive percutaneous transluminal coronary angioplasty (PTCA) of the culprit artery restored a culprit artery CTFC (30.4 +/- 22.2) that was similar to that in the nonculprit artery at 90 min (30.2 +/- 13.5), but both were slower than normal CTFCs (21 +/- 3.1, p < 0.0005 for both). If flow in the nonculprit artery was abnormal (CTFC > or = 28 frames) then the CTFC after PTCA in the culprit artery was 17% slower (p = 0.01). Patients who died had slower global CTFCs (mean CTFC for the three arteries) than patients who survived (46.8 +/- 21.3, n = 47 vs. 39.4 +/- 16.7, n = 1,055, p = 0.02). CONCLUSIONS Acute MI slows flow globally, and slower global flow is associated with adverse outcomes. Relief of the culprit artery stenosis by PTCA restored culprit artery flow to that in the nonculprit artery, but both were 45% slower than normal flow.
Transcatheter aortic valve implantation in patients on corticosteroid therapy
Transcatheter aortic valve implantation (TAVI) is recommended for patients who are inoperable or at high risk for surgical aortic valve replacement (SAVR). Corticosteroid therapy is considered to be a risk factor for SAVR, but there is a paucity of information about TAVI in patients taking corticosteroids. The aim of this study is to elucidate the outcome of TAVI in patients on chronic corticosteroid therapy, compared with SAVR. We retrospectively analyzed patients on corticosteroid therapy who underwent TAVI (n = 21) or SAVR (n = 30) for severe aortic stenosis at Sakakibara Heart Institute. The primary outcome was a 30-day composite endpoint consisting of early safety endpoints (death, stroke, life-threatening bleeding, acute kidney injury, coronary obstruction, major vascular complication, and valve-related dysfunction) and corticosteroid-specific endpoints (adrenal insufficiency, sepsis, and hyperglycemic complication). There were no differences between the two groups in background factors other than patient age and serum albumin level (age 81.0 ± 5.5 vs. 74.7 ± 9.9 years, p = 0.0061; albumin 3.6 ± 0.4 vs. 4.0 ± 0.4 g/dl, p = 0.0076). The device success rate for TAVI was 95.2%. In the TAVI group, operative time was shorter (100.2 ± 46.2 vs. 250.0 ± 92.2 min, p < 0.0001) and the amount of blood transfused was smaller (0.67 ± 1.8 vs. 3.5 ± 2.4 units, p < 0.0001) than in the SAVR group. There was no difference in the primary outcome (19.0 vs. 20.0%, p = 1.0). The rate of prosthesis-patient mismatch was lower in the TAVI group (4.8 vs. 33.3%, p = 0.017), and no moderate or severe post-procedural aortic regurgitation was observed in either group. Post-procedural survival was similar in the two groups (p = 0.67; mean follow-up 986 ± 922 days). TAVI may be a viable therapeutic option in patients taking corticosteroids.
Enhancing the cultural competence of women's health nurses via online continuing education
Cumulative trauma: the impact of child sexual abuse, adult sexual assault, and spouse abuse.
The present study investigated the relationship between trauma symptoms and a history of child sexual abuse, adult sexual assault, and physical abuse by a partner as an adult. While there has been some research examining the correlation between individual victimization experiences and traumatic stress, the cumulative impact of multiple victimization experiences has not been addressed. Subjects were recruited from psychological clinics and community advocacy agencies. Additionally, a nonclinical undergraduate student sample was evaluated. The results of this study indicate not only that victimization and revictimization experiences are frequent, but also that the level of trauma-specific symptoms is significantly related to the number of different types of reported victimization experiences. The research and clinical implications of these findings are discussed.
Georg Simmel and naturalist interactivist epistemology of science
Abstract In 1895 sociologist and philosopher Georg Simmel published a paper: ‘On a connection of selection theory to epistemology’. It was focussed on the question of how behavioural success and the evolution of the cognitive capacities that underlie it are to be related to knowing and truth. Subsequently, Simmel’s ideas were largely lost, but recently (2002) an English translation was published by Coleman in this journal. While Coleman’s contextual remarks are solely concerned with a preceding evolutionary epistemology, it will be argued here that Simmel pursues a more unorthodox, more radically biologically based and pragmatist, approach to epistemology in which the presumption of a wholly interests-independent truth is abandoned, concepts are accepted as species-specific and truth tied intimately to practical success. Moreover, Simmel’s position, shorn of one too-radical commitment, shares its key commitments with the recently developed interactivist–constructivist framework for understanding biological cognition and naturalistic epistemology. There Simmel’s position can be given a natural, integrated, three-fold elaboration in interactivist re-analysis, unified evolutionary epistemology and learnable normativity.
Giving Infants an Identity: Fingerprint Sensing and Recognition
There is a growing demand for biometrics-based recognition of children for a number of applications, particularly in developing countries where children do not have any form of identification. These applications include tracking child vaccination schedules, identifying missing children, preventing fraud in food subsidies, and preventing newborn baby swaps in hospitals. Our objective is to develop a fingerprint-based identification system for infants (age range: 0-12 months). Our ongoing research has addressed the following issues: (i) design of a compact, comfortable, high-resolution (>1,000 ppi) fingerprint reader; (ii) image enhancement algorithms to improve the quality of infant fingerprint images; and (iii) collection of longitudinal infant fingerprint data to evaluate identification accuracy over time. This collaboration between Michigan State University, Dayalbagh Educational Institute, Saran Ashram Hospital (Agra, India), and NEC Corporation has demonstrated the feasibility of recognizing infants older than 4 weeks using fingerprints.
This looks like that: deep learning for interpretable image recognition
When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, geologists, architects, and others would explain to people how to solve challenging image classification tasks. The network uses only image-level labels for training, meaning that there are no labels for parts of images. We demonstrate our method on the CUB-200-2011 dataset and the CBIS-DDSM dataset. Our experiments show that our interpretable network can achieve accuracy comparable to its analogous standard non-interpretable counterpart, as well as to other interpretable deep models.
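The prototype-based scoring described above can be sketched compactly. The Python (numpy) function below assumes a convolutional feature map and a set of learned prototype vectors are already available: each prototype's activation is its best similarity over spatial patches, and a linear layer turns those activations into class logits. The specific similarity transform is one common choice, not necessarily the exact form used in the paper, and training is not shown.

```python
import numpy as np

def prototype_logits(feature_map, prototypes, class_weights):
    """Score an image by comparing feature patches to learned prototypes.

    feature_map:   (H, W, D) convolutional features of one image
    prototypes:    (P, D) learned prototype vectors (assumed given)
    class_weights: (C, P) weights combining prototype evidence into class logits
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(-1, D)                                 # (H*W, D)
    # squared distance between every patch and every prototype
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (H*W, P)
    # map distance to similarity (larger when the patch is closer)
    similarity = np.log((d2 + 1.0) / (d2 + 1e-4))
    proto_activations = similarity.max(axis=0)                           # best patch per prototype
    return class_weights @ proto_activations                             # (C,) class logits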
The Robust Beauty of Improper Linear Models in Decision Making
Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in Paul Meehl's book on clinical versus statistical prediction—and a plethora of research stimulated in part by that book—all indicates that when a numerical criterion variable (e.g., graduate grade point average) is to be predicted from numerical predictor variables, proper linear models outperform clinical intuition. Improper linear models are those in which the weights of the predictor variables are obtained by some nonoptimal method; for example, they may be obtained on the basis of intuition, derived from simulating a clinical judge's predictions, or set to be equal. This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors. In fact, unit (i.e., equal) weighting is quite robust for making such predictions. The article discusses, in some detail, the application of unit weights to decide what bullet the Denver Police Department should use. Finally, the article considers commonly raised technical, psychological, and ethical resistances to using linear models to make important social decisions and presents arguments that could weaken these resistances. Paul Meehl's (1954) book Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence appeared 25 years ago. It reviewed studies indicating that the prediction of numerical criterion variables of psychological interest (e.g., faculty ratings of graduate students who had just obtained a PhD) from numerical predictor variables (e.g., scores on the Graduate Record Examination, grade point averages, ratings of letters of recommendation) is better done by a proper linear model than by the clinical intuition of people presumably skilled in such prediction. The point of this article is to review evidence that even improper linear models may be superior to clinical predictions. A proper linear model is one in which the weights given to the predictor variables are chosen in such a way as to optimize the relationship between the prediction and the criterion. Simple regression analysis is the most common example of a proper linear model; the predictor variables are weighted in such a way as to maximize the correlation between the subsequent weighted composite and the actual criterion. Discriminant function analysis is another example of a proper linear model; weights are given to the predictor variables in such a way that the resulting linear composites maximize the discrepancy between two or more groups. Ridge regression analysis, another example (Darlington, 1978; Marquardt & Snee, 1975), attempts to assign weights in such a way that the linear composites correlate maximally with the criterion of interest in a new set of data. Thus, there are many types of proper linear models and they have been used in a variety of contexts. One example (Dawes, 1971) was presented in this Journal; it involved the prediction of faculty ratings of graduate students.
All graduate students at the University of Oregon's Psychology Department who had been admitted between the fall of 1964 and the fall of 1967—and who had not dropped out of the program for nonacademic reasons (e.g., psychosis or marriage)—were rated by the faculty in the spring of 1969; faculty members rated only students whom they felt comfortable rating. The following rating scale was used: 5, outstanding; 4, above average; 3, average; 2, below average; 1, dropped out of the program in academic difficulty. Such overall ratings constitute a psychologically interesting criterion because the subjective impressions of faculty members are the main determinants of the job (if any) a student obtains after leaving graduate school. A total of 111 students were in the sample; the number of faculty members rating each of these students ranged from 1 to 20, with the mean number being 5.67 and the median being 5. The ratings were reliable. (To determine the reliability, the ratings were subjected to a one-way analysis of variance in which each student being rated was regarded as a treatment. The resulting between-treatments variance ratio was .67, and it was significant beyond the .001 level.) These faculty ratings were predicted from a proper linear model based on the student's Graduate Record Examination (GRE) score, the student's undergraduate grade point average (GPA), and a measure of the selectivity of the student's undergraduate institution. (The selectivity index was based on Cass and Birnbaum's (1968) rating of selectivity given at the end of their book Comparative Guide to American Colleges; the verbal categories were assigned numerical values as follows: most selective, 6; highly selective, 5; very selective (+), 4; very selective, 3; selective, 2; not mentioned, 1.) The cross-validated multiple correlation between the faculty ratings and predictor variables was .38. Congruent with Meehl's results, the correlation of these latter faculty ratings with the average rating of the people on the admissions committee who selected the students was .19; that is, it accounted for one fourth as much variance. (Only 23 of the 111 students could be used in this comparison because the rating scale the admissions committee used changed slightly from year to year.) This example is typical of those found in psychological research in this area in that (a) the correlation with the model's predictions is higher than the correlation with clinical prediction, but (b) both correlations are low. These characteristics often lead psychologists to interpret the findings as meaning that while the low correlation of the model indicates that linear modeling is deficient as a method, the even lower correlation of the judges indicates only that the wrong judges were used.
An improper linear model is one in which the weights are chosen by some nonoptimal method. They may be chosen to be equal, they may be chosen on the basis of the intuition of the person making the prediction, or they may be chosen at random. Nevertheless, improper models may have great utility. When, for example, the standardized GREs, GPAs, and selectivity indices in the previous example were weighted equally, the resulting linear composite correlated .48 with later faculty rating. Not only is the correlation of this linear composite higher than that with the clinical judgment of the admissions committee (.19), it is also higher than that obtained upon cross-validating the weights obtained from half the sample. An example of an improper model that might be of somewhat more interest—at least to the general public—was motivated by a physician who was on a panel with me concerning predictive systems. Afterward, at the bar with his wife and me, he said that my paper might be of some interest to my colleagues, but success in graduate school in psychology was not of much general interest: "Could you, for example, use one of your improper linear models to predict how well my wife and I get along together?" he asked. I realized that I could—or might. At that time, the Psychology Department at the University of Oregon was engaged in sex research, most of which was behavioristically oriented. So the subjects of this research monitored when they made love, when they had fights, when they had social engagements (e.g., with in-laws), and so on. These subjects also made subjective ratings about how happy they were in their marital or coupled situation. I immediately thought of an improper linear model to predict self-ratings of marital happiness: rate of lovemaking minus rate of fighting. My colleague John Howard had collected just such data on couples when he was an undergraduate at the University of Missouri—Kansas City, where he worked with Alexander (1971). After establishing the intercouple reliability of judgments of lovemaking and fighting, Alexander had one partner from each of 42 couples monitor these events. She allowed us to analyze her data, with the following results: "In the thirty happily married couples (as reported by the monitoring partner) only two argued more often than they had intercourse. All twelve of the unhappily married couples argued more often" (Howard & Dawes, 1976, p. 478). We then replicated this finding at the University of Oregon, where 27 monitors rated happiness on a 7-point scale, from "very unhappy"
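The unit-weighting idea discussed above is easy to make concrete. The Python sketch below standardizes each predictor and sums them with equal weights, alongside an ordinary least-squares "proper" model for comparison. The variable names (GRE, GPA, selectivity) and any data are illustrative; this is a sketch of the general technique, not a reproduction of the article's analyses.

```python
import numpy as np

def unit_weighted_prediction(X, signs=None):
    """Improper linear model: standardize each predictor and sum with equal weights.

    X:     (n, p) matrix of numerical predictors (e.g., GRE, GPA, selectivity)
    signs: optional +1/-1 per predictor giving the assumed direction of effect
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    if signs is None:
        signs = np.ones(X.shape[1])
    return Z @ signs

def proper_prediction(X, y):
    """Proper linear model: ordinary least-squares weights fit to the criterion y."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return Xb @ beta

# With a criterion y (e.g., faculty ratings), one can compare
# np.corrcoef(unit_weighted_prediction(X), y)[0, 1] with the cross-validated
# correlation of the proper model; the article reports .48 vs. .38 for its data.
```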
Numerical construction of spherical $t$-designs by Barzilai-Borwein method
A point set $X_N$ on the unit sphere is a spherical $t$-design if and only if a certain nonnegative quantity $A_{N,t+1}$ vanishes. We show that if $X_N$ is a stationary point set of $A_{N,t+1}$ and the minimal singular value of the basis matrix is positive, then $X_N$ is a spherical $t$-design. Moreover, spherical $t$-designs can be constructed numerically using the Barzilai-Borwein method. We obtain numerical spherical $t$-designs with $t+1$ up to 127 at $N = (t+2)^2$.
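The Barzilai-Borwein iteration referred to above can be sketched generically. The Python function below is a minimal illustration of the BB1 step-size rule applied to any user-supplied gradient; it does not implement the spherical-design quantity $A_{N,t+1}$ itself, and the safeguards and tolerances are illustrative choices.

```python
import numpy as np

def barzilai_borwein(grad, x0, max_iter=500, tol=1e-8, alpha0=1e-3):
    """Generic Barzilai-Borwein gradient descent (BB1 step size).

    grad: callable returning the gradient of the objective at x
    x0:   starting point (numpy array)
    Any smooth objective's gradient can be plugged in; the spherical
    t-design objective is not implemented here.
    """
    x_prev = x0.copy()
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev                 # first step with a fixed small step size
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = x - x_prev                           # change in iterates
        yv = g - g_prev                          # change in gradients
        # BB1 step size, guarded against tiny or negative curvature estimates
        alpha = s.dot(s) / max(abs(s.dot(yv)), 1e-16)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```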
Continuous analogs of axiomatized digital surfaces
Simple surface points of digital surfaces in Z^3 are defined by means of axioms, and the axioms do not reveal what simple surface points “look like.” In this paper eight of the nine varieties of simple surface points are shown to have natural “continuous analogs,” and the one remaining variety is shown to be very different from the other types. This work yields substantial generalizations of the main theorems on simple surface points that were proved by Morgenthaler, Reed, and Rosenfeld.
Protein turnover and amino acid transport kinetics in end-stage renal disease.
Protein and amino acid metabolism is abnormal in end-stage renal disease (ESRD). Protein turnover is influenced by transmembrane amino acid transport. The effect of ESRD and hemodialysis (HD) on intracellular amino acid transport kinetics is unknown. We studied intracellular amino acid transport kinetics and protein turnover by use of stable isotopes of phenylalanine, leucine, lysine, alanine, and glutamine before and during HD in six ESRD patients. Data obtained from amino acid concentrations and enrichment in the artery, vein, and muscle compartments were used to calculate intracellular amino acid transport and muscle protein synthesis and catabolism. Fractional muscle protein synthesis (FSR) was estimated by the precursor-product approach. Despite a significant decrease in the plasma concentrations of amino acids in the artery and vein during HD, the intracellular concentrations remained stable. Outward transport of the amino acids was significantly higher than the inward transport during HD. FSR increased during HD (0.0521 +/- 0.0043 vs. 0.0772 +/- 0.0055%/h, P < 0.01). Results derived from compartmental modeling indicated that both protein synthesis (118.3 +/- 20.6 vs. 146.5 +/- 20.6 nmol.min-1.100 ml leg-1, P < 0.01) and catabolism (119.8 +/- 18.0 vs. 174.0 +/- 14.2 nmol.min-1.100 ml leg-1, P < 0.01) increased during HD. However, the intradialytic increase in catabolism exceeded that of synthesis (57.8 +/- 13.8 vs. 28.0 +/- 8.5%, P < 0.05). Thus HD alters amino acid transport kinetics and increases protein turnover, with a net increase in protein catabolism.
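For context, the precursor-product estimate of the fractional synthesis rate mentioned above is conventionally written as shown below; the symbols are the standard ones (tracer enrichment in bound muscle protein at two sampling times, divided by the precursor-pool enrichment and the time interval), not values taken from this study.

$$\mathrm{FSR}\ (\%/\mathrm{h}) = \frac{E_{\mathrm{protein}}(t_2) - E_{\mathrm{protein}}(t_1)}{E_{\mathrm{precursor}} \times (t_2 - t_1)} \times 100$$

Here $E_{\mathrm{protein}}$ is the isotopic enrichment of the tracer amino acid bound in muscle protein and $E_{\mathrm{precursor}}$ is its enrichment in the precursor pool (e.g., the intracellular free amino acid pool).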
A Critical Success Factors Model for Enterprise Resource Planning Implementation
Enterprise Resource Planning (ERP) systems are highly integrated enterprise-wide information systems that automate core business processes. The ERP packages of vendors such as SAP, Baan, J.D. Edwards, Peoplesoft and Intentia represent more than a standard business platform; they prescribe information blueprints of how an organisation’s business processes should operate. In this paper the scale and strategic importance of ERP systems are identified and the problem of ERP implementation is defined. A Critical Success Factors (CSFs) framework is proposed to help managers develop an ERP implementation strategy. The framework is illustrated using two case examples from a research sample of eight companies. The case analysis highlights the critical impact of legacy systems upon the implementation process and the importance of selecting an appropriate ERP strategy, and identifies the importance of Business Process Change (BPC) and software configuration in addition to factors already cited in the literature. The implications of the results for managerial practice are described and future research opportunities are outlined.
Cosmic rays and tests of fundamental principles
It is now widely acknowledged that cosmic-ray experiments can test possible new physics directly generated at the Planck scale or at some other fundamental scale. By studying particle properties at energies far beyond the reach of any man-made accelerator, they can yield unique checks of basic principles. A well-known example is provided by possible tests of special relativity at the highest cosmic-ray energies. But other essential ingredients of standard theories can in principle be tested: quantum mechanics, the uncertainty principle, energy and momentum conservation, effective space-time dimensions, hamiltonian and lagrangian formalisms, postulates of cosmology, vacuum dynamics and particle propagation, quark and gluon confinement, elementariness of particles... Standard particle physics or string-like patterns may have a composite origin able to manifest itself through specific cosmic-ray signatures. Ultra-high energy cosmic rays, but also cosmic rays at lower energies, are probes of both "conventional" and new physics. Status, prospects, new ideas, and open questions in the field are discussed. The Post Scriptum shows that several basic features of modern cosmology naturally appear in an SU(2) spinorial description of space-time without any need for matter, relativity or standard gravitation. New possible effects related to the spinorial space-time structure can also be foreseen. Similarly, the existence of spin-1/2 particles can be naturally related to physics beyond the Planck scale and to a possible pre-Big Bang era.
Improved outcome for children with acute lymphoblastic leukemia: results of Dana-Farber Consortium Protocol 91-01.
The Dana-Farber Cancer Institute (DFCI) acute lymphoblastic leukemia (ALL) Consortium Protocol 91-01 was designed to improve the outcome of children with newly diagnosed ALL while minimizing toxicity. Compared with prior protocols, post-remission therapy was intensified by substituting dexamethasone for prednisone and prolonging the asparaginase intensification from 20 to 30 weeks. Between 1991 and 1995, 377 patients (age, 0-18 years) were enrolled; 137 patients were considered standard risk (SR), and 240 patients were high risk (HR). Following a 5.0-year median follow-up, the estimated 5-year event-free survival (EFS) +/- SE for all patients was 83% +/- 2%, which is superior to prior DFCI ALL Consortium protocols conducted between 1981 and 1991 (P =.03). There was no significant difference in 5-year EFS based upon risk group (87% +/- 3% for SR and 81% +/- 3% for HR, P =.24). Age at diagnosis was a statistically significant prognostic factor (P =.03), with inferior outcomes observed in infants and children 9 years or older. Patients who tolerated 25 or fewer weeks of asparaginase had a significantly worse outcome than those who received at least 26 weeks of asparaginase (P <.01, both univariate and multivariate). Older children (at least 9 years of age) were significantly more likely to have tolerated 25 or fewer weeks of asparaginase (P <.01). Treatment on Protocol 91-01 significantly improved the outcome of children with ALL, perhaps due to the prolonged asparaginase intensification and/or the use of dexamethasone. The inferior outcome of older children may be due, in part, to increased intolerance of intensive therapy.
A Survey of Domain Ontology Engineering: Methods and Tools
With the advent of the Semantic Web, the field of domain ontology engineering has gained more and more importance. This innovative field may have a major impact on computer-based education and will certainly contribute to its development. This paper presents a survey of domain ontology engineering and especially domain ontology learning. The paper focuses particularly on automatic methods for ontology learning from texts. It summarizes the state of the art in natural language processing techniques and statistical and machine learning techniques for ontology extraction. It also explains how intelligent tutoring systems may benefit from this engineering and discusses the challenges facing the field.