Statistical analysis of concurrently active human motor units
Management of a high mix production system with interdependent demands: global and decomposed optimization approaches for inventory control
In this work the management of a production system with a high mix of products, interdependent demands and optional components is analyzed. An approach based on a reorder point policy is proposed for both raw parts and finished goods inventory control. In the latter case, the solution of an optimization problem determines whether each product should be held in inventory and, if so, which safety factor z should be used. The choice of z, and consequently of the reorder level R, takes into consideration the interdependence of demands, the customers' willingness to buy if a certain waiting time is quoted to them, and the fact that for a certain type of order the optional components are required before shipping. Global and decomposed optimization approaches are presented and compared. The decomposed approach is shown to achieve performance very close to that of the global optimization with much easier computations. By using the policy based on the decomposed optimization, it is possible to simultaneously reduce the value of the inventory and the expected number of lost sales as compared to a simple reorder point policy or to the policy currently in use at the company. A 35% reduction in inventory and an almost tenfold reduction in the average value of lost sales are expected if the company replaces the current policy with the proposed one.
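As a rough illustration of the reorder-point logic described above (a minimal sketch, not the thesis's actual optimization model), assume normally distributed lead-time demand: the reorder level R follows from the safety factor z, and the expected units short per replenishment cycle follow the standard normal loss function. All numbers below are hypothetical.

```python
import math

def normal_loss(z):
    """Standard normal loss function L(z) = phi(z) - z * (1 - Phi(z))."""
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi - z * (1 - Phi)

def reorder_level(mu_lead, sigma_lead, z):
    """Reorder level R = mean lead-time demand + z * its standard deviation."""
    return mu_lead + z * sigma_lead

# Hypothetical product: mean lead-time demand 40 units, std dev 12 units.
mu, sigma = 40.0, 12.0
for z in (0.5, 1.0, 1.65, 2.0):
    R = reorder_level(mu, sigma, z)
    expected_short = sigma * normal_loss(z)   # expected units short per cycle
    print(f"z={z:4.2f}  R={R:6.1f}  E[shortage]={expected_short:5.2f}")
```

Raising z increases R (more inventory) while shrinking the expected shortage; the thesis's optimization trades these two costs off per product, also accounting for demand interdependence.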
Human perception and control of vehicle roll tilt in hyper-gravity
Pilots and astronauts experience a range of altered gravity environments in which they must maintain accurate perception and control of vehicle orientation for tasks such as landing and docking. To study sensorimotor function in altered gravity, a hyper-gravity test-bed was produced using a centrifuge. Previous experiments have quantified static tilt perception in hyper-gravity; however, studies of dynamic tilt, such as those experienced by astronauts and pilots, have been entirely qualitative. Current dynamic models of orientation perception cannot reproduce the characteristic perceptions observed in hyper-gravity. The aims of this thesis are to: 1) quantify static and dynamic roll tilt perception in hyper-gravity, 2) study pilot manual control of vehicle roll tilt in hyper-gravity, and 3) modify a dynamic model to predict hyper-gravity orientation perception. A long-radius centrifuge was utilized to create hyper-gravity environments of 1.5 and 2 Earth G's. In one experiment, over a range of roll tilt angles and frequencies, human subjects' (N=8) perceptions of orientation, in the dark, were assayed with a somatosensory task. Static roll tilts were overestimated in hyper-gravity with more overestimation at higher gravity levels and larger roll angles. Dynamic rotations were also overestimated in hyper-gravity, but generally less so than for static tilts. The amount of overestimation during dynamic rotations was dependent upon the angular velocity of the rotation with less overestimation at higher angular velocities. In a second experiment, human subjects (N=12) were tasked with nulling a pseudo-random vehicle roll disturbance using a rotational hand controller. Initial nulling performance was significantly worse in hyper-gravity as compared to the 1 G performance baseline. However, hyper-gravity performance improved with practice, reaching near the 1 G baseline over the time course of several minutes. Finally, pre-exposure to one hyper-gravity level reduced the measured initial performance decrement in a subsequent, different hyper-gravity environment. A modification to a previous dynamic spatial orientation perception model was proposed to allow for the prediction of roll tilt overestimation observed in hyper-gravity. It was hypothesized that the central nervous system treats otolith signals in the utricular plane differently from those out of plane. This was implemented in the model by setting a difference between the linear acceleration feedback gains in and out of the utricular plane. The modified model was simulated and found to accurately predict the static overestimation observed over a wide range of angles and hyper-gravity levels. Furthermore, it simulated the characteristic dependence of dynamic overestimation upon angular velocity with less overestimation at higher angular velocities. The modified model now allows for simulation across a range of altered gravity environments to predict human orientation perception. We conclude that hyper-gravity results in misperception of static and dynamic roll tilt and decrements in pilot manual control performance. Perception and manual control errors due to altered gravity, such as those observed here in hyper-gravity, may impact the safety of future crewed space exploration missions, in terms of accidents or aborts.
Anomalous transport in complex networks
The emergence of scaling in transport through interconnected systems is a consequence of the topological structure of the network and the physical mechanisms underlying the transport dynamics. We study transport by advection and diffusion in scale-free and Erdős-Rényi networks. Using stochastic particle simulations, we find anomalous (nonlinear) scaling of the mean square displacement with time. We show the connection with existing descriptions of anomalous transport in disordered systems, and explain the mean transport behavior from the coupled nature of particle jump lengths and transition times. Moreover, we study epidemic spreading through the air transportation network with a particle-tracking model that accounts for the spatial distribution of airports, detailed air traffic and realistic (correlated) waiting-time distributions of individual agents. We use empirical data from US air travel to constrain the model parameters and validate the model's predictions of traffic patterns. We formulate a theory that identifies the most influential spreaders from the point of view of early-time spreading behavior. We find that network topology, geography, aggregate traffic and individual mobility patterns are all essential for accurate predictions of spreading.
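To make the mean-square-displacement statistic concrete, here is a minimal sketch (not the thesis's network model) that estimates the anomalous exponent alpha in MSD(t) ~ t^alpha from a one-dimensional continuous-time random walk with heavy-tailed waiting times, a classic source of subdiffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_positions(n_walkers, n_steps, alpha_wait=0.7):
    """1-D CTRW: unit jumps separated by power-law (Pareto) waiting times."""
    waits = 1 + rng.pareto(alpha_wait, size=(n_walkers, n_steps))
    times = np.cumsum(waits, axis=1)
    jumps = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
    positions = np.cumsum(jumps, axis=1)
    return times, positions

times, positions = ctrw_positions(n_walkers=2000, n_steps=5000)

# Sample the mean square displacement on a logarithmic time grid.
t_grid = np.logspace(1, 3, 20)
msd = []
for t in t_grid:
    idx = np.argmax(times >= t, axis=1)          # first step completed after time t
    msd.append(np.mean(positions[np.arange(len(idx)), idx] ** 2))

alpha = np.polyfit(np.log(t_grid), np.log(msd), 1)[0]
print(f"estimated MSD exponent alpha ~ {alpha:.2f} (alpha < 1 indicates subdiffusion)")
```

The log-log slope of MSD versus time is the anomalous exponent; linear scaling (alpha = 1) recovers normal diffusion.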
Dynamic patterning of maternal messenger ribonucleic acids in the early Caenorhabditis elegans embryo
Asymmetric segregation of maternally-encoded proteins is essential to cell fate determination during early cell divisions of the Caenorhabditis elegans (C. elegans) embryo, but little is known about the patterning of maternal transcripts inside somatic lineages. In the first Chapter of this thesis, by detecting individual mRNA molecules in situ, we measured the densities of the two maternal mRNAs pie-1 and nos-2 in non-germline cells. We find that nos-2 mRNA degrades at a constant rate in all somatic lineages, starting approximately one cell cycle after each lineage separates from the germline, consistent with a model in which the germline protects maternal mRNAs from degradation. In contrast, the degradation of pie-1 mRNAs in one somatic lineage, AB, takes place at a rate slower than that of the other lineages, leading to an accumulation of that transcript. We further show that the 3' untranslated region (UTR) of the pie-1 transcript at least partly encodes the AB-specific degradation delay. Our results indicate that embryos actively control maternal mRNA distributions in somatic lineages via regulated degradation, providing another potential mechanism for lineage specification. The evolutionary fate of an allele ordinarily depends on its contribution to host fitness. Occasionally, however, genetic elements arise that are able to gain a transmission advantage while simultaneously imposing a fitness cost on their hosts. Seidel et al. previously discovered one such element in C. elegans that gains a transmission advantage through a combination of paternal-effect killing and zygotic self-rescue. In the second Chapter of this thesis we demonstrate that this element is composed of a sperm-delivered toxin, peel-1, and an embryo-expressed antidote, zeel-1. peel-1 and zeel-1 are located adjacent to one another in the genome and co-occur in an insertion/deletion polymorphism. peel-1 encodes a novel four-pass transmembrane protein that is expressed in sperm and delivered to the embryo via specialized, sperm-specific vesicles. In the absence of zeel-1, sperm-delivered PEEL-1 causes lethal defects in muscle and epidermal tissue at the two-fold stage of embryogenesis. zeel-1 is expressed transiently in the embryo and encodes a novel six-pass transmembrane domain fused to a domain with sequence similarity to zyg-11, a substrate-recognition subunit of an E3 ubiquitin ligase. zeel-1 appears to have arisen recently, during an expansion of the zyg-11 family, and the transmembrane domain of zeel-1 is required and partially sufficient for antidote activity. Although PEEL-1 and ZEEL-1 normally function in embryos, these proteins can act at other stages as well. When expressed ectopically in adults, PEEL-1 kills a variety of cell types, and ectopic expression of ZEEL-1 rescues these effects. Our results demonstrate that the tight physical linkage between two novel transmembrane proteins has facilitated their co-evolution into an element capable of promoting its own transmission to the detriment of the rest of the genome. The Apical Epidermal Ridge (AER) in vertebrates is essential to the outgrowth of a growing limb bud. Induction and maintenance of the AER rely heavily on the coordination and signaling between two surrounding cell types: ectodermal and mesenchymal cells. In morphogenesis during embryonic development, a process called the epithelial-mesenchymal transition (EMT) occurs to transform epithelial cells into mesenchymal cells for increased cell mobility and decreased cell adhesion.
To check whether the AER, which originated from the ectodermal layer, undergoes EMT for enhanced cell motility and invasiveness at an early stage of limb outgrowth, we examined expression of biomarkers of the epithelial and mesenchymal cell types in the AER of a mouse forelimb at embryonic day 10.5 in Chapter three of this thesis. We also customized a correlation-based image registration algorithm to perform image stitching for more direct visualization of a large field of the tissue sample. We found that the AER surprisingly expresses both the epithelial marker and the mesenchymal marker, unlike a normal non-transitioning epithelial cell or a cell undergoing EMT. Our finding serves as a basis for potential future cell isolation experiments to further look into cell type switching of the AER and its interaction with the surrounding ectodermal and mesenchymal cells.
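A minimal sketch of the correlation-based registration idea used for stitching (synthetic data, not the thesis's actual pipeline): the translation between two overlapping tiles is taken as the peak of their FFT-based circular cross-correlation.

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) shift of `moving` relative to `ref`
    from the peak of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Offsets larger than half the image wrap around to negative shifts.
    size = np.array(ref.shape)
    peak[peak > size // 2] -= size[peak > size // 2]
    return tuple(int(p) for p in peak)

# Synthetic check: shift a random "tile" by a known offset and recover it.
rng = np.random.default_rng(1)
tile = rng.random((256, 256))
shifted = np.roll(tile, shift=(12, -7), axis=(0, 1))
print(estimate_shift(tile, shifted))   # expected: (12, -7)
```

In a stitching pipeline the recovered offsets between neighboring tiles would then be used to place each tile into a common mosaic coordinate frame.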
Retooling Detroit's waterfront
With the recent string of environmental disasters associated with poor environmental planning and the crash of the economic system that had propped up this type of development, it is clear that architecture's relationship to nature needs to be rethought. The thesis uses the waterfront of Detroit as a test site for an integrated system based on ecological principles of interdependency, indeterminacy and time-based processes. The proposal situates itself in opposition to the urban development laid on top of the land and its application in current form to a new, so-called "green" recreational riverwalk, which still relies on the hard engineering that has destroyed Detroit's native wetlands. Instead, this thesis proposes a soft infrastructure which synthesizes solutions for water retention and environmental enrichment along the coastline, based on the natural patterns of drainage landforms, and with human development tightly integrated within the system. This system is modulated to balance different degrees of environmental, technical and economic priorities, layered throughout the waterfront to not only create a comprehensive storm defense system but also to provide new places for recreation, urban farming and urban development.
Participation as an end versus a means : understanding a recurring dilemma in urban upgrading
Since the 1920s, participatory approaches to urban upgrading in developing nations have demonstrated that involving the urban poor in the physical, social, and economic development of their settlements can improve their living conditions. These housing policies and projects have since been central to urban poverty reduction. Yet, while participatory upgrading is still used on a limited scale, it has failed to become a mainstream component of urban development. This dissertation analyzes some reasons for that failure by investigating the trajectory of an urban poverty reduction program in Cambodia that had much potential for success but whose results surprisingly fell short of expectations. It connects the results to a critical analysis of international experience with policies and programs for urban poverty reduction. It explores the issue in two steps. First, it analyzes the historical evolution of the policies and practices of urban poverty reduction in developing nations. This highlights the apparently weak link between lessons from experience, international policy recommendations, and the programs actually implemented by governments. Second, it presents a narrative analysis of how a participatory urban poverty reduction policy originated, was implemented, and evolved in Phnom Penh from 1996 to 2004. That story provides a micro-level understanding of the shape and constraints of the evolution of policies and practices, complementing the macro-historical analysis. The findings illustrate that three main issues have prevented international and local agencies from promoting urban development assistance using lessons learned from concrete experience over time, and have thus kept them from adopting a more continuous use of proven practices.
Creating music by listening
Machines have the power and potential to make expressive music on their own. This thesis aims to computationally model the process of creating music using experience from listening to examples. Our unbiased, signal-based solution models the life cycle of listening, composing, and performing, turning the machine into an active musician instead of simply an instrument. We accomplish this through an analysis-synthesis technique that combines perceptual and structural modeling of the musical surface, which leads to a minimal data representation. We introduce a music cognition framework that results from the interaction of psychoacoustically grounded causal listening, a time-lag embedded feature representation, and perceptual similarity clustering. Our bottom-up analysis intends to be generic and uniform by recursively revealing metrical hierarchies and structures of pitch, rhythm, and timbre. Training is suggested for top-down unbiased supervision, and is demonstrated with the prediction of the downbeat. This musical intelligence enables a range of original manipulations including song alignment, music restoration, cross-synthesis or song morphing, and ultimately the synthesis of original pieces.
Dynamics of dopamine signaling and network activity in the striatum during learning and motivated pursuit of goals
Learning to direct behaviors towards goals is a central function of all vertebrate nervous systems. Initial learning often involves an exploratory phase, in which actions are flexible and highly variable. With repeated successful experience, behaviors may be guided by cues in the environment that reliably predict the desired outcome, and eventually behaviors can be executed as crystallized action sequences, or "habits", which are relatively inflexible. Parallel circuits through the basal ganglia and their inputs from midbrain dopamine neurons are believed to make critical contributions to these phases of learning and behavioral execution. To explore the neural mechanisms underlying goal-directed learning and behavior, I have employed electrophysiological and electrochemical techniques to measure neural activity and dopamine release in networks of the striatum, the principal input nucleus of the basal ganglia, as rats learned to pursue rewards in mazes. The electrophysiological recordings revealed training-dependent dynamics in striatal local field potentials and coordinated neural firing that may differentially support both network rigidity and flexibility during pursuit of goals. Electrochemical measurements of real-time dopamine signaling during maze running revealed prolonged signaling changes that may contribute to motivating or guiding behavior. Pathological over- or under-expression of these network states may contribute to symptoms experienced in a range of basal ganglia disorders, from Parkinson's disease to drug addiction.
Training hierarchical networks for function approximation
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al. [2] that Deep Learning Convolutional Neural Networks (DCNs) can be equivalent to hierarchical kernel machines with Radial Basis Functions (RBFs). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBF networks. We discovered that training single-layered RBF networks can be quite simple with a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBF networks remains an open question; however, we clearly identify the issues surrounding training hierarchical RBFs and potential methods to resolve them. We also compare standard DCN networks to hierarchical Radial Basis Functions on a task that has not been explored yet: the role of depth in learning compositional functions.
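A minimal sketch of the kind of single-layer RBF training discussed above (illustrative only, not the experimental setup used in the thesis): Gaussian RBF features with fixed centers and a hand-chosen bandwidth, output weights trained by plain SGD on a toy regression task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression target.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=500)

# Fixed RBF centers spanning the input range and a hand-chosen bandwidth sigma.
centers = np.linspace(-3, 3, 20).reshape(-1, 1)
sigma = 0.5

def rbf_features(x):
    """Gaussian RBF activations phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

w = np.zeros(len(centers))
lr = 0.05
for epoch in range(200):
    for i in rng.permutation(len(X)):           # plain SGD, one sample at a time
        phi = rbf_features(X[i:i + 1])[0]
        err = phi @ w - y[i]
        w -= lr * err * phi                      # gradient of 0.5 * err^2

pred = rbf_features(X) @ w
print("train MSE:", np.mean((pred - y) ** 2))
```

With reasonable centers and sigma this converges easily, consistent with the observation above that the single-layer case is straightforward; the difficulty discussed in the thesis arises when such layers are stacked hierarchically.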
Scale-up of a high technology manufacturing startup : failure tracking, analysis, and resolution through a multi-method approach
Product reliability, quality, and performance are essential for all companies, especially high technology manufacturing startups looking to scale up successfully. Company image and reputation can be heavily impacted by product failures. The cost of failures in-house and at the customer will only increase as a company scales up. Failure mitigation is critical to the success of a product and its company throughout the entire product lifecycle. This thesis proposes an ideal Failure Mitigation Strategy (FMS) that provides a methodology and framework with a linear process workflow and easy-to-follow steps that lead to the reduction of cost from failures. Establishing a strong FMS will assist the company in learning from its failures while reducing the total number and average cost of failure events. The ideal FMS was tailored to and implemented at New Valence Robotics Corporation (NVBOTS) in Boston, Massachusetts, as a case study. The ideal FMS consists of failure tracking, failure analysis, and multi-method failure resolution. Failure events are first observed and properly documented via the failure tracking system. Failure tracking data is then processed during failure analysis using a total cost model to automatically prioritize and down-select the most impactful failure event types. Root cause analysis is then performed on the top-priority failure event types. Finally, a robust multi-method failure resolution methodology uses an economical combination of design and process changes, along with testing, to eliminate or reduce the cost of those failures. Over 200 failure events were tracked, including 50 unique failure event types, accounting for over $75,000 in costs at NVBOTS. A unified and improved tracking system was implemented at NVBOTS along with a powerful analysis framework. Failure analysis was performed, prioritizing the failures by total cost, and a failure resolution framework was designed to implement the solutions to the top-priority failure event types. The ideal Failure Mitigation Strategy offered in this thesis provides NVBOTS and other entities a framework that allows for full understanding of the current failure landscape as well as a systematic method to reduce the impact from failures through elimination and mitigation.
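A small illustration of the total-cost prioritization step (with made-up failure data, not NVBOTS figures): each failure type's total cost is its event count times its average cost per event, and the highest-cost types are down-selected for root-cause analysis.

```python
from dataclasses import dataclass

@dataclass
class FailureType:
    name: str
    event_count: int
    avg_cost_per_event: float   # parts + labor + downtime, in dollars

    @property
    def total_cost(self) -> float:
        return self.event_count * self.avg_cost_per_event

# Hypothetical tracking data for a 3D-printer fleet.
failures = [
    FailureType("extruder clog", 35, 120.0),
    FailureType("bed adhesion loss", 60, 45.0),
    FailureType("stepper driver burnout", 4, 900.0),
    FailureType("filament runout sensor", 22, 30.0),
]

# Rank by total cost and down-select the types covering ~80% of the total.
ranked = sorted(failures, key=lambda f: f.total_cost, reverse=True)
grand_total = sum(f.total_cost for f in ranked)
running = 0.0
for f in ranked:
    running += f.total_cost
    print(f"{f.name:28s} total=${f.total_cost:8.2f} cumulative={running / grand_total:5.1%}")
    if running / grand_total >= 0.8:
        break
```

The same ranking logic extends naturally to richer cost models (scrap, warranty, reputation) as described for the ideal FMS.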
Custom built atomic force microscope for nitrogen-vacancy diamond magnetometry
The nitrogen-vacancy (N-V) center in diamond has the potential to be an ultra-sensitive magnetic field sensor capable of detecting single spins. Implementing this sensor for general and nontransparent samples is not trivial. For N-V centers to be useful probes, a way of positioning the N-V center with nanometer accuracy while simultaneously measuring its fluorescence is needed. Here, a method of using N-V centers as magnetometer probes by combining this sensor with Atomic Force Microscopy (AFM) is described. A custom AFM was built that allows optical monitoring of the cantilever tip and collection of fluorescence with a high-NA objective from the same side. The AFM has a large open bottom and top and thus provides dual optical access. The motion of the cantilever is measured by optical beam deflection so that a wide range of commercial cantilevers can be used. The AFM and the confocal microscope objective can be locked in position while a piezoelectric stage allows raster scanning of the substrate.
Tissue-specific interactions between oncogenic K-ras and the p19Arf-p53 pathway determine susceptibility to transformation
Tumor development is a multi-step process driven by the collective action of gain-of-function mutations in oncogenes and loss-of-function alterations in tumor suppressor genes. The particular spectrum of mutations in a given cancer is rarely the result of random chance but instead derives from the intimate connections between proliferative networks and those suppressing growth and transformation. Specifically, hyper-active oncogenes directly engage tumor suppressor programs, such that cells harboring oncogenic lesions frequently must acquire secondary mutations that disable these anti-proliferative responses before progressing to overt transformation. This tight coupling represents a critical checkpoint protecting against tumor formation. Whether different cell types exhibit variability in the extent and/or timing of this oncogene-induced tumor suppression is largely unknown. The ability of oncogenic Ras to induce the tumor suppressive p19Arf-p53 pathway and cause irreversible cell cycle arrest typifies this phenomenon. Using this well-established interaction as a model, we investigated the cell-type specificity of oncogene-induced tumor suppression. By combining K-rasL mice with a reporter for p19Arf expression (ArfGFP), we identify a tissue-specific, oncogenic K-ras-dependent expression pattern of p19Arf in lung tumors and sarcomas that correlates with each tissue's genetic requirements for tumorigenesis. Lung tumors, which can arise in the presence of p19Arf and show modest increases in tumor progression in its absence, exhibit very minimal p19Arf induction. Conversely, sarcomas, which depend on p19Arf-p53 mutation for tumor formation, display robust p19Arf up-regulation. While previous studies proposed oncogene levels as the main determinant of p19Arf induction, we find equivalent signaling levels and instead highlight tissue-specific differences in the epigenetic regulation of Ink4a/Arf. Using in vivo RNAi, we implicate Polycomb group (PcG) protein-mediated repression in lung tumors and SWI/SNF-dependent activation in sarcomas as being critically important for each tissue's unique expression pattern of p19Arf. During normal tumor progression, mutations in oncogenes and tumor suppressors occur in a sequential fashion, although whether unique orders of mutations dictate distinct phenotypes is unknown. The requirement for complete p53 pathway abrogation during oncogenic K-ras-dependent sarcomagenesis suggested that tumor development in the muscle critically depends on early p53 mutation. To test this, we generated a Flp-inducible allele of K-rasG12D (K-rasFSF-G12D) that, when combined with established reagents for Cre-dependent p53 deletion, permits the separate regulation of K-ras activation and p53 loss. Strikingly, although simultaneous mutation results in robust tumor formation, delaying p53 deletion relative to oncogenic K-ras expression
A systematic approach for architecture-level energy estimation of accelerator designs
With Moore's law slowing down and Dennard scaling over, energy-efficient domain-specific accelerators have become a promising way for hardware designers to continue bringing energy efficiency improvements to data- and computation-intensive applications. To enable fast exploration of the accelerator design space, architecture-level energy estimators, which perform energy estimations without requiring a complete hardware description of the designs, are critical to designers. However, it is difficult to use existing architecture-level energy estimators to obtain accurate estimates for accelerator designs, as accelerator designs are diverse and sensitive to data patterns. This thesis presents Accelergy, a generally applicable energy estimation methodology for accelerators that allows flexible specification of designs comprised of user-defined high-level compound components and user-defined low-level primitive components, which can be characterized by third-party energy estimation plugins. We have provided primitive and compound components for modeling deep neural network (DNN) accelerator designs as applications of the proposed methodology. The proposed Accelergy energy estimation framework, which consists of the Accelergy energy estimator and multiple estimation plugins, is validated on Eyeriss, a well-known DNN accelerator design. Overall, with its rich collection of action types and components, Accelergy achieves 95% accuracy compared to energy obtained from post-layout simulation in terms of total energy consumption and provides accurate energy breakdowns for components at different levels of granularity.
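The following toy sketch illustrates the general idea of architecture-level energy estimation from component action counts, in the spirit of the compound/primitive-component decomposition described above (it is not Accelergy's actual specification format, and all energy numbers are made up): primitive components carry energy-per-action entries, compound components expand into bags of primitive actions, and design energy is the sum of action counts times those costs.

```python
# Energy per action for primitive components (picojoules, made-up numbers).
primitive_energy = {
    ("SRAM_64KB", "read"): 5.0,
    ("SRAM_64KB", "write"): 6.5,
    ("int_MAC", "multiply_accumulate"): 0.8,
    ("register_file", "read"): 0.1,
    ("register_file", "write"): 0.12,
}

# A compound component defined as a bag of primitive actions:
# one processing-element "compute" = 2 RF reads + 1 MAC + 1 RF write.
compound_actions = {
    ("PE", "compute"): [
        ("register_file", "read", 2),
        ("int_MAC", "multiply_accumulate", 1),
        ("register_file", "write", 1),
    ],
}

def action_energy(component, action):
    """Energy of one (component, action), expanding compound components."""
    if (component, action) in primitive_energy:
        return primitive_energy[(component, action)]
    return sum(cnt * primitive_energy[(c, a)]
               for c, a, cnt in compound_actions[(component, action)])

# Action counts, e.g. produced by a performance model of one DNN layer.
action_counts = {
    ("SRAM_64KB", "read"): 200_000,
    ("PE", "compute"): 1_000_000,
    ("SRAM_64KB", "write"): 50_000,
}

total_pj = sum(n * action_energy(c, a) for (c, a), n in action_counts.items())
print(f"estimated total energy: {total_pj / 1e6:.2f} microjoules")
```

Swapping in per-technology energy tables (the role played by estimation plugins) changes only the primitive entries, not the architecture description, which is the separation of concerns the methodology relies on.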
A metallurgical study of West African iron monies from Cameroon and Liberia
The aim of this thesis is to make a contribution to the study of West African iron monies through examination and analysis of a group of these objects in the collection of the Peabody Museum of Archaeology and Ethnology at Harvard University. The selection of objects from the collection includes five distinct types, representing different sizes and shapes that have been identified as monies/exchange mediums. All of these object types were originally part of a bundle or remain in bundled form; all share a provenience in West Africa, four groups in present day Cameroon and one in Liberia. The research corpus of material has dates ranging from the late nineteenth to the early twentieth century. My metallurgical studies of West African iron monies are the first such investigations to have been carried out. The results will contribute to the appreciation of the ways in which iron 'monies' functioned within late nineteenth - early twentieth century West African societies.
Classes of defense for computer systems
Computer security incidents often involve attackers acquiring a complex sequence of escalating capabilities and executing those capabilities across a range of different intermediary actors in order to achieve their ultimate malicious goals. However, popular media accounts of these incidents, as well as the ensuing litigation and policy proposals, tend to focus on a very narrow defensive landscape, primarily individual centralized defenders who control some of the capabilities exploited in the earliest stages of these incidents. This thesis proposes two complementary frameworks for defenses against computer security breaches -- one oriented around restricting the computer-based access capabilities that adversaries use to perpetrate those breaches and another focused on limiting the harm that those adversaries ultimately inflict on their victims. Drawing on case studies of actual security incidents, as well as the past decade of security incident data at MIT, it analyzes security roles and defense design patterns related to these broad classes of defense for application designers, administrators, and policy-makers. Application designers are well poised to undertake access defense by defining and distinguishing malicious and legitimate forms of activity in the context of their respective applications. Policy-makers can implement some harm limitation defenses by monitoring and regulating money flows, and also play an important role in collecting the data needed to expand understanding of the sequence of events that lead up to successful security incidents and inform which actors can and should effectively intervene as defenders. Organizations and administrators, meanwhile, occupy an in-between defensive role that spans both access and harm in addressing digital harms, or harms that are directly inflicted via computer capabilities, through restrictions on crucial intermediate harms and outbound information flows. The comparative case analysis ultimately points to a need to broaden defensive roles and responsibilities beyond centralized access defense and defenders, as well as the visibility challenges compounding externalities for defenders who may lack not only the incentives to intervene in such incidents but also the necessary knowledge to figure out how best to intervene.
Advanced aircraft seat design : the webbing concept
Air travel is so common in this day and age that any significant improvement in seat comfort on board a commercial passenger jet is likely to affect almost everybody. A proposed design concept in this project is the use of webbing as the substitute for current foam cushioning in the seat back. The result is a webbing-foam hybrid cushioning design that utilizes the benefits of both cushioning types to maximum effect. Experimental tests suggest that this design would also provide better overall comfort for the passenger. As a result, both consumer and industry would profit immensely from the implementation of such a design.
Hydraulically-actuated microscale traveling energy recovery
As the demand for portable electrical power grows, alternatives to chemically stored energy may provide users with additional system capabilities. This thesis presents a miniature hydroelectric turbine system for use in wearable energy harvesting applications. A radial outflow turbine, which trades performance for manufacturability, is designed and built. A permanent magnet generator is designed and embedded within the turbine to enable a compact overall system. Fluidic rectification is pursued with the goal of harnessing more of the available mechanical power. A method for reliably conveying pressurized fluid to and from the shoe is developed. Results for the turbine and generator system are presented under a variety of test conditions.
Assessing the costs of solar power plants for the Island of Roatán
This is an analysis assessing the installation costs of different solar power plant technologies and their current commercial availability for installation on the island of Roatán. Commercial large-scale power plants have been in use for decades, and their technical feasibility has been documented, as have their high installation costs. Roatán is currently seeking alternatives for powering the island. This thesis explores the initial costs of the solar power options currently available to the island, focusing on the large energy storage requirements needed for the island to be powered entirely by sunlight.
Dodd-Frank Wall Street Reform and Consumer Protection Act : how will it affect the real estate securitization market
This thesis investigates one of the United States' most sweeping regulatory responses since the New Deal legislation passed in the 1930s: the Dodd-Frank Act. While the Dodd-Frank Act will affect numerous financial markets, this thesis focuses on the implications of this regulation for the real estate securitization market. To better understand the regulatory response towards real estate securitization, we clarify some of the key definitions, explain the history of securitization, and describe the fundamental issues that led to the real estate securitization boom and subsequent bust, as well as its implications for the financial crisis in the late 2000s. We then summarize in detail the key provisions in the Dodd-Frank Act associated with real estate securitization and describe the framework within which these provisions were formed. In conclusion, we examine the implications of these provisions and explain our position that the Dodd-Frank Act, as drafted, will not achieve its desired effect on the real estate securitization market.
Large eddy simulations of premixed turbulent flame dynamics : combustion modeling, validation and analysis
High efficiency, low emissions and stable operation over a wide range of conditions are some of the key requirements of modern-day combustors. To achieve these objectives, lean premixed flames are generally preferred as they achieve efficient and clean combustion. A drawback of lean premixed combustion, however, is that the flames are more prone to dynamics. The unsteady release of sensible heat and flow dilatation in combustion processes create pressure fluctuations which, particularly in premixed flames, can couple with the acoustics of the combustion system. This acoustic coupling creates a feedback loop with the heat release that can lead to severe thermoacoustic instabilities that can damage the combustor. Understanding these dynamics, predicting their onset and proposing passive and active control strategies are critical to large-scale implementation. For the numerical study of such systems, large eddy simulation (LES) techniques with appropriate combustion models and reaction mechanisms are highly appropriate. These approaches balance computational complexity and predictive accuracy. This work, therefore, aims to explore the applicability of these methods to the study of premixed wake-stabilized flames. Specifically, finite-rate chemistry LES models that can effectively capture the interaction between different turbulent scales and the combustion fronts have been implemented and applied for the analysis of premixed turbulent flame dynamics in laboratory-scale combustor configurations. First, the artificial flame thickening approach, along with an appropriate reduced chemistry mechanism, is utilized for modeling turbulence-combustion interactions at small scales. A novel dynamic formulation is proposed that explicitly incorporates the influence of strain on flame wrinkling by solving a transport equation for the latter rather than using local-equilibrium-based algebraic models. Additionally, a multiple-step combustion chemistry mechanism is used for the simulations. Second, the presumed-PDF approach, coupled with the flamelet generated manifold (FGM) technique, is also implemented for modeling turbulence-combustion interactions. The proposed formulation explicitly incorporates the influence of strain via the scalar dissipation rate and can result in more accurate predictions, especially for highly unsteady flame configurations. Specifically, the dissipation rate is incorporated as an additional coordinate to presume the PDF, and strained flamelets are utilized to generate the chemistry databases. These LES solvers have been developed and applied for the analysis of reacting flows in several combustor configurations, i.e. a triangular bluff body in a rectangular channel, a backward-facing step configuration, an axisymmetric bluff body in a cylindrical chamber, and a cylindrical sudden expansion with swirl, and their performance has been validated against experimental observations. Subsequently, the impact of equivalence ratio variation on flame-flow dynamics is studied for the swirl configuration using the experimental PIV data as well as the numerical LES code, following which dynamic mode decomposition of the flow field is performed. It is observed that increasing the equivalence ratio can appreciably influence the dominant flow features in the wake region, including the size and shape of the recirculation zone(s), as well as the flame dynamics.
Specifically, varying the heat loading results in altering the dominant flame stabilization mechanism, thereby causing transitions across distinct flame configurations, while also modifying the inner recirculation zone topology significantly. Additionally, the LES framework has also been applied to gain insight into the combustion dynamics phenomena for the backward-facing step configuration. Apart from evaluating the influence of equivalence ratio on the combustion process for stable flames, the flame-flow interactions in acoustically forced scenarios are also analyzed using LES and dynamic mode decomposition (DMD). Specifically, numerical simulations are performed corresponding to a self-excited combustion instability configuration as observed in the experiments, and it is observed that LES is able to suitably capture the flame dynamics. These insights highlight the effect of heat release variation on flame-flow interactions in wall-confined combustor configurations, which can significantly impact combustion stability in acoustically-coupled systems. The fidelity of the solvers in predicting the system response to variation in heat loading and to acoustic forcing suggests that the LES framework can be suitably applied for the analysis of flame dynamics as well as to understand the fundamental mechanisms responsible for combustion instability. KEYWORDS: large eddy simulation, LES, wake stabilized flame, turbulent premixed combustion, combustion modeling, artificially thickened flame model, triangular bluff body, backward facing step combustor, presumed-PDF model, flamelet generated manifold, axi-symmetric bluff body, cylindrical swirl combustor, particle image velocimetry, dynamic mode decomposition, combustion instability, forced response.
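For reference, the core scaling behind the artificially thickened flame approach mentioned above, stated in its standard form (this is not the thesis's specific dynamic closure): the base thickening preserves the laminar flame speed while widening the flame by the thickening factor F, and the efficiency function E then raises the effective flame speed to recover the wrinkling lost at the resolved scale.

```latex
% Base thickening (E = 1): diffusivity up by F, reaction rate down by F,
% so s_L is preserved while the flame thickness grows to F * delta_L.
% The efficiency function E multiplies both terms, boosting the effective
% flame speed by E to compensate for unresolved subgrid wrinkling.
\[
  D \rightarrow E\,F\,D, \qquad
  \dot{\omega} \rightarrow \frac{E}{F}\,\dot{\omega}, \qquad
  s_L \propto \sqrt{D\,\dot{\omega}} \rightarrow E\,s_L, \qquad
  \delta_L \propto \sqrt{D/\dot{\omega}} \rightarrow F\,\delta_L .
\]
```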
Evaluation of spatial-spectral filtering in non-paraxial volume holographic imaging systems
In this thesis, the properties of transmission-mode volume phase holograms as spatial-spectral filters in optical systems for microscopic medical imaging are evaluated. In experiment, the relationship between the angle of incidence and diffraction efficiency is investigated for wavelength-detuned multiplex holograms to establish the limits of the narrow-bandwidth lateral field of view. The depth selectivity of the microscope with a volume hologram pupil is also measured and found to vary significantly with recording parameters and lateral shift of the probe point source in object space. This experiment is modified to incorporate controlled levels of spherical aberration, where the effect on the depth selectivity is evaluated. A novel resolution target designed specifically for the evaluation of this imaging system is described and imaged. A flexible approach based on the first-order Born approximation is implemented to simulate all aspects of the imaging system with a multiplex volume hologram pupil. The simulation is then used to verify and expand upon the experimental results. A mathematical treatment of the nature of the anomalous apparent curvature of the diffraction image is performed, showing that a volume grating recorded in plane has weak out-of-plane spatial filtering behavior.
Reconfigurable Autonomous Surface Vehicles : perception and trajectory optimization algorithms
Autonomous Surface Vehicles (ASVs) are a highly active area of robotics with many ongoing projects in search and rescue, environmental surveying, monitoring, and beyond. There have been significant studies of ASVs in riverine, coastal, and sea environments, yet only limited research on urban waterways, one of the busiest and most important water environments. This thesis presents an Urban Autonomy System that is able to meet the critical precision, real-time, and other requirements that are unique to ASVs in urban waterways. LiDAR-based perception algorithms are presented to enable robust and precise obstacle avoidance and object pose estimation on the water. Additionally, operating ASVs in well-networked urban waterways creates many potential use cases for ASVs to serve as reconfigurable urban infrastructure, but this necessitates developing novel multi-robot planners for urban ASV operations. Efficient sequential quadratic programming and real-time B-spline parameterized mixed-integer quadratic programming multi-ASV motion planners are presented, respectively, for formation-changing and shapeshifting operations, enabling use cases such as ASV docking and bridge-building on water. These methods increase the potential of urban and non-urban ASVs in the field. The underlying planners in turn contribute to the motion planning and trajectory optimization toolbox for unmanned aerial vehicles (UAVs), self-driving cars, and other autonomous systems.
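As a small illustration of the B-spline trajectory parameterization mentioned above (a generic sketch with hypothetical waypoints, not the thesis's mixed-integer formulation): a 2-D path is represented by a handful of control points on a clamped knot vector, and smooth positions and velocities are evaluated from the spline.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical 2-D control points for an ASV path (x, y in meters).
ctrl = np.array([[0, 0], [2, 1], [4, 3], [6, 3], [8, 1], [10, 0]], dtype=float)
degree = 3

# Clamped knot vector so the curve starts/ends at the first/last control points.
n = len(ctrl)
knots = np.concatenate(([0.0] * degree, np.linspace(0, 1, n - degree + 1), [1.0] * degree))

spline = BSpline(knots, ctrl, degree)
vel = spline.derivative()

t = np.linspace(0, 1, 5)
print("positions:\n", spline(t))
print("velocities (per unit of parameter time):\n", vel(t))
```

In an optimization-based planner, the control points become decision variables; because the curve is a linear function of them, smoothness and sampled-position constraints stay linear, which is what makes quadratic-programming formulations attractive.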
User-centered automation process in synthetic biology research
By designing and re-designing biological systems, synthetic biology is advancing a wide range of domains, from biotherapeutics for fatal cancers to biofuels and artificial meat for improving the global environment and food security. As the scale and complexity of synthetic biology endeavors increase, designing automation processes to replace manual labor is becoming more important for improving cost effectiveness, reproducibility, and efficiency, including error reduction. Despite the desire for lab automation in research and industry, in reality scientists still largely rely on manual techniques in the lab, even though the conventional approach becomes unmanageable and slows down their research iterations. One of the key problems is the mental barrier. According to the online survey and interviews conducted in this research, almost 90% of researchers cannot trust the quality of a robot's work, even though they do not know the actual success rate of robotic work or what the robot can do. To bridge the gap and make the automation process more accessible, this research proposes the use of a "Bot", a software robot with which people can communicate through the internet and the "Internet of Things (IoT)". In the system, the Bot is connected to lab automation robots such as liquid handling robots. By communicating with the Bot using user interfaces such as Slack, researchers can place work orders on lab robots and monitor their order status at any time. Moreover, people can directly ask the Bot for important information and instructions, such as protocol success rates and scheduling.
Enabling a consumer headset in product development
Manufacturing-intensive companies like Ford Motor Company have come to the realization that they need a strong consumer focus to survive in today's competitive world. Ford has just recently announced steps to further align its program team centers more strongly with their consumers, yet the lower levels of the teams will still remain aligned around a standard part decomposition that finds its roots in Henry Ford's vertical integration methodology. In today's information age, with the growing expectations of the consumer as well as growing product complexity, it has become essential for product teams to share and communicate efficiently. It is no longer adequate for the program manager to be the sole focal point where the voice of the consumer meets the voice of part engineering. As complex as it sounds, the consumer voice must be decomposed for delivery throughout the program team as the driving force by which the parts are engineered. Outlined herein is an approach, called 'enabling a consumer headset in product development,' that illustrates the possibility of handling this complexity using today's tools. Bottom line: industry is ready to take this one on. Needs analysis has established a focal point at the program team's decompositional structure, the product development process, and the driving management metrics and engineering specifications. Suggested are concepts that lead to a more natural and efficient way of delivering that consumer headset, and these concepts are applied to three implementation projects: 1) an MIT course exercise; 2) a new Docu-Center architecture program at Xerox; and 3) a forward-model 200X Mustang program. Findings are summarized into a final recommendation for future Ford program applications. The conclusion of this thesis recommends three items: 1) introduce the role of architects, 2) align the organization around the consumer, and 3) transition engineering focus to interface specifications.
Beyond the lean startup : applying the lean startup methodology in established firms
The lean startup methodology has been successfully applied to product development at startup companies; however, many of its principles may also be of benefit to established firms. The purpose of this research was to explore the benefits of the lean startup methodology in established organizations. An electronic survey was administered to product managers and engineers at 44 established companies from diverse industries, as well as posted on relevant online community groups. Follow-up interviews were conducted with select respondents for further in-depth analysis. A total of 44 individuals completed the survey and 5 follow-up interviews were conducted. Overall, 11 respondents (25%) reported use of the lean startup methodology at established firms. Success with the methodology was reported in 6 cases. A high proportion of respondents (66%) were not familiar with the method; however, they did report use of specific principles aligned with the lean startup method. Results also suggested that use of the methodology was more frequent in environments with high uncertainty and in companies less than 20 years old. Interview results corroborated survey findings and highlighted barriers to implementation. The findings of this work suggest that the lean startup methodology may provide benefit to established firms; however, the application of this method in this context is in its infancy. Implications for best practice and directions for future research are also discussed.
Career path analysis of professionals selected by MIT undergraduates
For current MIT undergraduates, life after graduation can seem daunting. With uncertainty about job duration, graduate school, and career paths in general, many undergraduates enter the real world unsure of what the future holds, or whether what they have decided to do post-graduation is the "best" option. As such, MIT undergraduates in the Undergraduate Practice Opportunities Program (UPOP) were asked to interview professionals who they believed had jobs they would one day also like to have. This resulted in a large dataset of career paths for an extremely diverse group of individuals, all with their own unique stories and timelines. This data was filtered, cleaned, and analyzed to gain insight into life after graduation. From the analyzed data it was found that the distributions of durations spent in graduate school, at companies, or in specific job titles were not significantly different from one another, and the average duration spent in each of these options was 2-6 years, with some noticeable outliers. Overall, these analyses showed that there are many options for students in the first 10 years after completing their BS, and there is no clear "correct" option to choose.
3D virus scaffolds for energy storage and microdevice applications
With constantly increasing demand for lightweight power sources, electrode architectures that eliminate the need for conductive and organic additives will increase mass-specific energy and power densities. The increased demand for lightweight power is coupled with increasing device miniaturization. As the scale of devices decreases, current battery technologies add mass on the same scale as the device itself. A dual-functional electro-mechanical material that serves as both the device's structural material and its power source would dramatically improve device integration and range for powered movement. To address the demand for lightweight power with the objective of a dual-functional electro-mechanical material, the M13 bacteriophage was used to create novel 3-dimensional nano-architectures. To synthesize 3-dimensional nanowire scaffolds, the M13 virus is covalently linked into a hydrogel that serves as a 3-dimensional bio-template for the mineralization of copper and nickel nanowires. Control of nanowire diameter, scaffold porosity, and film thickness is demonstrated. The nanowire scaffolds are found to be highly conductive and can be synthesized as free-standing films. To demonstrate the viability of the 3-dimensional nanowire networks for electrical energy storage, copper nanowires were galvanically displaced to a mixed-phase copper-tin system. These tin-based anodes were used for lithium rechargeable batteries and demonstrated a high storage capacity per unit area and stable cycling approaching 100 cycles. To determine the viability of the 3-dimensional nanowire networks as dual-functional electro-mechanical materials and the mechanical stability of processing intermediates, phage hydrogels, aerogels, and metal nanowire networks were examined with nano-indentation. The elastic moduli of the metal networks are in the range of open-cell metal foams. The demonstration of 3-dimensional virus-templated metal nanowire networks as electrically conductive and mechanically robust should facilitate their implementation across a broad array of device applications, including photovoltaics, catalysis, electrochromics, and fuel cells.
Reprogramming cellular fate using defined factors
Embryonic stem (ES) cells have a vast therapeutic potential given their pluripotency, or the ability to differentiate into tissues from all three germ layers. One of the ultimate goals of regenerative medicine is to isolate pluripotent stem cells from patients. Nuclear reprogramming offers the possibility of creating patient-specific cell lines, thus abrogating the need for immunosuppressants following cell transplantation therapy. It was recently reported that the forced expression of four transcription factors, Oct4, Sox2, c-Myc and Klf4 can induce a pluripotent state in somatic cells, without the need for embryo destruction. The work presented here aims to characterize reprogramming using defined factors and provide insight into the mechanisms governing this process. It also seeks to identify transient cues to induce reprogramming in somatic cells, alleviating the need for virally transduced transcription factors that hinder its eventual clinical use.
Bombs unbuilt : power, ideas and institutions in international politics
Nuclear weapons are the most powerful weapons in human history, but contrary to virtually every prediction by scholars, relatively few states have acquired them. Why are there so few nuclear weapons states? What factors lead governments to reject and even renounce the ultimate weapon? What do the disconfirmed predictions of widespread proliferation tell us about contemporary theories of international relations? To answer these questions, this study tests 15 hypotheses based on core categories in international politics: power, resources, ideas, and institutions. The hypotheses on power suggest that a state's nuclear decisions are a function of its external threats and its place in the international system. They claim that the slow pace of proliferation can be explained by several factors: a lack of threat, bipolarity, security guarantees, and superpower pressure. The resource hypotheses emphasize material capability, i.e., whether a state has the money, scientific talent, or access to foreign technology required to develop nuclear weapons. Hypotheses on the role of ideas often focus on the beliefs held by decision makers. This study tests the influence of anti-nuclear norms on proliferation decision making. Institutional explanations highlight either domestic institutional arrangements (whether a state is democratic, whether it is liberalizing economically, its organizational politics) or international institutions like the nonproliferation regime. Many of the tests employ a data set consisting of 132 nuclear decisions and outcomes.
Littoral wetlands and lake inflow dynamics
Wetlands are increasingly recognized as important water treatment systems, which efficiently remove nutrients, suspended sediments, metals and anthropogenic chemicals through sediment settling and various chemical and biological processes. This thesis tackles three interconnected aspects of wetland physics. The first is wetland circulation, which is one of the most important design parameters when constructing wetlands for water quality improvement because it regulates the residence time distribution, and thus the removal efficiency of the system. Field work demonstrates that wetland circulation changes from laterally well mixed during low flows to short-circuiting during storms, which in combination with a reduced nominal residence time undermines the wetland treatment performance. The second important physical mechanism is thermal mediation, i.e. the temperature modification of the water that flows through the wetland. This change in water temperature is especially important in littoral wetlands, where it can alter the intrusion depth in the downstream lake. Numerical analysis in conjunction with field observations shows that littoral wetlands located in small or forested watersheds can raise the water temperature of the lake inflow during summer enough to create surface inflows when a plunging inflow would otherwise exist. Consequently, more land-borne nutrients and chemicals enter the epilimnion, where they can enhance eutrophication and the risk of human exposure. The third and last physical mechanism considered in this thesis is the exchange flow generated between littoral wetlands and lakes. Field experiments show that during summer and fall, when river flows are low, buoyancy- and wind-driven exchange flows dominate the wetland circulation and flushing dynamics. More importantly, they can enhance the flushing by as much as a factor of ten, thus dramatically impairing the wetland's potential for removal and thermal mediation.
Essays on debt markets
This thesis consists of three chapters on debt markets. In chapter 1, I consider the interaction between domestic banking and growth in a DSGE model of sovereign default in order to address (i) the joint existence of sovereign debt and international reserves, and (ii) the occurrence of twin (domestic banking and sovereign default) crises. In chapter 2, joint with Hui Chen and Jun Yang, we build a structural model to explain corporate debt maturity dynamics over the business cycle and their implications for the term structure of credit spreads. In chapter 3, joint with Juan Passadore, we study debt policy of emerging economies accounting for credit and liquidity risk.
A quantitative analysis and assessment of the performance of image quality metrics
Image quality assessment addresses the distortion levels and the perceptual quality of a restored or corrupted image. A plethora of metrics has been developed to that end. The usual measure of success for an image quality metric is its ability to agree with the opinions of human subjects, often represented by the mean opinion score. Despite the promising performance of some image quality metrics in predicting the mean opinion score, several problems are still unaddressed. This thesis focuses on analyzing and assessing the performance of image quality metrics. To that end, this work proposes an objective assessment criterion and considers three indicators related to the metrics: (i) robustness to local distortions; (ii) consistency in their values; and (iii) sensitivity to distortion parameters. In addition, the implementation procedures for the proposed indicators are presented. The thesis then analyzes and assesses several image quality metrics using the developed indicators for images corrupted with Gaussian noise. This work uses both widely-used public image datasets and self-designed controlled cases to measure the performance of IQMs. The results indicate that some image quality metrics are prone to poor performance depending on the number of features. In addition, the work shows that the consistency in IQMs' values depends on the distortion level. Finally, the results highlight the sensitivity of different metrics to the Gaussian noise parameter. The objective methodology in this thesis unlocks additional insights regarding the performance of IQMs. In addition to subjective assessment, studying the properties of IQMs outlined in this framework helps in finding a metric suitable for specific applications.
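A small, self-contained illustration of the "sensitivity to distortion parameters" indicator, using PSNR as a stand-in metric on synthetic data (the thesis's actual indicators, metrics, and datasets are not reproduced here): sweep the Gaussian noise level and record how the metric responds.

```python
import numpy as np

rng = np.random.default_rng(0)

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Synthetic reference image with values in [0, 1].
ref = rng.random((128, 128))

# Sensitivity sweep: distortion parameter (noise sigma) vs. metric value.
for sigma in (0.01, 0.02, 0.05, 0.1, 0.2):
    noisy = np.clip(ref + rng.normal(0, sigma, ref.shape), 0, 1)
    print(f"sigma={sigma:5.2f}  PSNR={psnr(ref, noisy):6.2f} dB")
```

A metric whose values change monotonically and smoothly with sigma is sensitive to the distortion parameter; a metric that saturates or fluctuates over this sweep would score poorly on that indicator.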
New polymeric biomaterial interfaces for biosensor applications
To fabricate living cell-based immunological sensors, we have examined two PEO-based biomaterials that can be patterned to generate cellular array templates: poly(allylamine)-g-poly(ethylene glycol) graft-copolymer and poly(ethylene glycol) dimethacrylate hydrogel. Poly(allylamine)-g-poly(ethylene glycol) polycation graft-copolymers were designed, synthesized, and characterized in order to combine bio-functionality with patternability on charged polyelectrolyte multilayer surfaces. Polymer-on-polymer stamping (POPS) techniques were used to create micron-scale patterned regions on negatively charged multilayer surfaces via direct stamping of these graft copolymers. The long PEG side chains effectively resisted adsorption of antibodies or other proteins, and created a bio-inert area when patterned by POPS. On the other hand, desired proteins can be covalently attached to the graft copolymer by introducing proper coupling agents. Arrays of proteins were produced by either simple adsorption or coupling of proteins onto the graft-copolymer-patterned surfaces. The protein arrays were utilized as templates in fabricating cellular arrays of non-adherent B cells.
Tailoring light with photonic crystal slabs : from directional emission to topological half charges
Photonic crystal slabs are a versatile and important platform for molding the flow of light. In this thesis, we consider ways to control the emission of light from photonic crystal slab structures, specifically focusing on directional, asymmetric emission, and on emitting light with interesting topological features. First, we develop a general coupled-mode theory formalism to derive bounds on the asymmetric decay rates to the top and bottom of a photonic crystal slab, for a resonance with arbitrary in-plane wavevector. We then apply this formalism to inversion-symmetric structures, and show through numerical simulations that asymmetries of top-down decay rates exceeding 10^4 can be achieved by tuning the resonance frequency to coincide with the perfectly transmitting Fabry-Perot frequency. The emission direction can also be rapidly switched from top to bottom by tuning the wavevector or frequency. We then consider the generation of Möbius strips of light polarization, i.e. vector beams with half-integer polarization winding, from photonic crystal slabs. We show that a quadratic degeneracy formed by symmetry considerations can be split into a pair of Dirac points, which can be further split into four exceptional points. Through calculations of an analytical two-band model and numerical simulations of two-dimensional photonic crystals and photonic crystal slabs, we demonstrate the existence of isofrequency contours encircling two exceptional points, and show the half-integer polarization winding along these isofrequency contours. We further propose a realistic photonic crystal slab structure and experimental setup to verify the existence of such Möbius strips of light polarization.
Examination of corporate mental models for derivative product creation and their impact on customers
Companies in highly competitive industries often develop incremental products to meet diverse customer needs or to gain share from competitors. However, in order to help customers choose between derivatives, companies also present more detailed product specifications or features to their customers, resulting in customer confusion. We categorize customer confusion into three facets: product overlap, ambiguous needs, and information overload. This thesis presents a theory of mental models for companies facing this issue, and uses three case studies to examine it: Groceries (Trader Joe's), Wearable Devices (Fitbit), and Semiconductors (Texas Instruments). We conclude that product ambiguity is the dominant type of customer confusion in the grocery retail industry; Trader Joe's has adopted a no-sale strategy to mitigate this effect. We identify information overload as the most significant concern in wearable devices from the Fitbit case, where online user reviews supplement specification information. Finally, we find that contextual ambiguity is a major problem for customers in the semiconductor industry. Several strategies, such as customer support and enhanced web content, are identified to reduce this ambiguity. We propose three system diagrams showing how company strategies affect customer confusion regarding different levels of product knowledge and ability to acquire new knowledge. The diagrams shed light on how sales support could intervene effectively, based on the customer type and confusion type.
Comparison of data-driven analysis methods for identification of functional connectivity in fMRI
Data-driven analysis methods, such as independent component analysis (ICA) and clustering, have found a fruitful application in the analysis of functional magnetic resonance imaging (fMRI) data for identifying functionally connected brain networks. Unlike the traditional regression-based hypothesis-driven analysis methods, the principal advantage of data-driven methods is their applicability to experimental paradigms in the absence of an a priori model of brain activity. Although ICA and clustering rely on very different assumptions on the underlying distributions, they produce surprisingly similar results for signals with large variation. The main goal of this thesis is to understand the factors that contribute to the differences in the identification of functional connectivity based on ICA and a more general version of clustering, the Gaussian mixture model (GMM), and their relations. We provide a detailed empirical comparison of ICA and clustering based on GMM. We introduce a component-wise matching and comparison scheme of resulting ICA and GMM components based on their correlations. We apply this scheme to synthetic fMRI data and investigate the influence of noise and length of time course on the performance of ICA and GMM, comparing with ground truth and with each other. For real fMRI data, we propose a method of choosing a threshold to determine which of the resulting components are meaningful to compare, using the cumulative distribution function of their empirical correlations. In addition, we present an alternative to model selection for choosing the optimal total number of components for ICA and GMM using the task-related and contrast functions. For extracting task-related components, we find that GMM outperforms ICA when the total number of components is less than ten, and that the performance of ICA and GMM is almost identical for larger numbers of total components. Furthermore, we observe that about a third of the components of each model are meaningful to compare to the components of the other.
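The component-wise matching scheme can be illustrated with a short sketch; the snippet below runs scikit-learn's FastICA and GaussianMixture on random placeholder data (standing in for a voxels-by-time matrix) and pairs components by the magnitude of the correlation between their spatial maps. It is an assumption-laden toy, not the thesis pipeline.

```python
# Minimal sketch: pair ICA and GMM components by the correlation of their spatial maps.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 40))              # 5000 "voxels" x 40 "time points" (placeholder data)
n_comp = 5

ica = FastICA(n_components=n_comp, random_state=0, max_iter=500)
ica_maps = ica.fit_transform(X)                  # (voxels, components): ICA spatial maps

gmm = GaussianMixture(n_components=n_comp, random_state=0).fit(X)
gmm_maps = gmm.predict_proba(X)                  # (voxels, components): soft memberships

# Correlate every ICA map with every GMM map and match each ICA component to its best partner.
corr = np.corrcoef(ica_maps.T, gmm_maps.T)[:n_comp, n_comp:]
best_match = np.abs(corr).argmax(axis=1)
for i, j in enumerate(best_match):
    print(f"ICA component {i} <-> GMM component {j}  |r| = {abs(corr[i, j]):.2f}")
```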
Study of density fluctuations and particle transport at the edge of I-mode plasmas
The wide range of plasma parameters available on Alcator C-Mod has led to the accessibility of many regimes of operation. Since its commissioning, C-Mod has accessed the linear ohmic confinement, saturated ohmic confinement, L-Mode, and ELM-free, ELMy and Enhanced D[alpha] H-Mode regimes. Recently, another novel regime, the I-Mode, has been identified[1][2][3][4]. I-Modes feature the presence of steep H-Mode-like electron and ion temperature gradients at the edge of the plasma with L-Mode-like density profiles. The I-Mode, in contrast to the H-Mode, shows very weak degradation of energy confinement with increased input power, and routinely reaches H98 > 1 while operating at low edge collisionalities ... making it a good candidate for reactor-relevant tokamaks. Also relevant for reactors, this regime can be sustained in steady state for more than ~15 energy confinement times without the need for ELMs to regulate particle and impurity confinement. Changes in edge density, temperature and magnetic field fluctuations accompany the L-Mode to I-Mode transition, with a reduction of fluctuations in the 50-150 kHz range as well as the appearance of a Weakly Coherent Mode (WCM) in the 200-300 kHz range, analogous to the Quasi-Coherent Mode (QCM) characteristic of the Enhanced D[alpha] H-Mode. Previous work[4] has established a connection between the midrange fluctuation suppression and a reduction in the effective thermal diffusivity, χ_eff, in the pedestal region. The mechanism in I-Mode for maintaining sufficient particle transport to avoid impurity accumulation and instabilities has been unclear. The O-mode reflectometry system has been extensively used for the characterization and detection of the I-Mode and the WCM, in part enhanced by upgrades to the system which enabled the baseband detection of density fluctuations at an array of cutoff locations at the edge of the plasma[5][6][7]. Using a novel model, the autopower signals of reflectometry channels detecting the density fluctuations have been decomposed into a broadband component and a WCM component. The latter is then used to estimate the intensity of the WCM. In parallel, the particle transport across the LCFS in I-Mode plasmas has been estimated using a volume-integrated particle transport model, where ionization source measurements are acquired using D[alpha] profiles measured near the outboard midplane. This model takes into account the anisotropic ionization source density around the periphery of the plasma by introducing an asymmetry factor, [sigma], which is then estimated using a study of I-Mode to H-Mode transitions. The results imply that measurements at the outboard midplane overestimate the surface-averaged influx. Finally, a comparison has been made between the particle flux across the LCFS of the I-Mode and the intensity of the WCM, which shows a generally positive correlation between the two. This is supporting evidence that the WCM is, in fact, responsible for maintaining particle and impurity transport across the edge of the I-Mode energy transport barrier.
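The decomposition of a channel's autopower spectrum into a broadband background plus a WCM peak can be caricatured with a simple least-squares fit; the Lorentzian-plus-Gaussian form, the frequency axis, and every number below are placeholders invented for illustration and are not the model developed in the thesis.

```python
# Schematic sketch: fit a broadband background plus a coherent-mode peak to a
# synthetic autopower spectrum and use the fitted peak area as a proxy for WCM intensity.
import numpy as np
from scipy.optimize import curve_fit

f = np.linspace(50.0, 400.0, 400)                        # kHz (placeholder frequency axis)

def spectrum(f, a, f_c, peak, f0, width):
    broadband = a / (1.0 + (f / f_c) ** 2)               # Lorentzian-like background (assumed form)
    wcm = peak * np.exp(-0.5 * ((f - f0) / width) ** 2)  # WCM modeled as a Gaussian peak
    return broadband + wcm

rng = np.random.default_rng(0)
synthetic = spectrum(f, 10.0, 80.0, 4.0, 250.0, 25.0) + 0.2 * rng.standard_normal(f.size)

popt, _ = curve_fit(spectrum, f, synthetic, p0=[5.0, 100.0, 1.0, 240.0, 30.0])
wcm_intensity = popt[2] * popt[4] * np.sqrt(2.0 * np.pi)  # area under the fitted peak
print(f"estimated WCM intensity (arb. units): {wcm_intensity:.2f}")
```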
Hochschild homology / cohomology of preprojective algebras of ADET quivers
Preprojective algebras [Pi]q of quivers Q were introduced by Gelfand and Ponomarev in 1979 in order to provide a model for quiver representations (in the special case of finite Dynkin quivers). They showed that in the Dynkin case, the preprojective algebra decomposes as the direct sum of all indecomposable representations of the quiver with multiplicity 1. Since then, preprojective algebras have found many other important applications, for example to Kleinian singularities. In this thesis, I computed the Hochschild homology/cohomology of [Pi]q over C for quivers of type ADET, together with the cup product and, more generally, the calculus structure. It turns out that the Hochschild cohomology also has a Batalin-Vilkovisky structure. I also computed the calculus structure for the centrally extended preprojective algebra.
Adaptive error estimation in linearized ocean general circulation models
Data assimilation methods, such as the Kalman filter, are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. In this study we address the problem of estimating model and measurement error statistics from observations. We start by testing the Myers and Tapley (1976, MT) method of adaptive error estimation with low-dimensional models. We then apply the MT method in the North Pacific (5°-60° N, 132°-252° E) to TOPEX/POSEIDON sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The MT method, closely related to the maximum likelihood methods of Belanger (1974) and Dee (1995), is shown to be sensitive to the initial guess for the error statistics and the type of observations. It does not provide information about the uncertainty of the estimates, nor does it indicate which structures of the error statistics can be estimated and which cannot. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). The CMA is both a powerful diagnostic tool for addressing theoretical questions and an efficient estimator for real data assimilation studies. It can be extended to estimate other statistics of the errors, such as trends and annual cycles. Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. After removal of trends and annual cycles, the low-frequency/wavenumber (periods > 2 months, wavelengths > 16°) TOPEX/POSEIDON sea level anomaly variance is of the order of 6 cm². The GCM explains about 40% of that variance. By covariance matching, it is estimated that 60% of the GCM-TOPEX/POSEIDON residual variance is consistent with the reduced-state linear model. The CMA is then applied to TOPEX/POSEIDON sea level anomaly data and a linearization of a global GFDL GCM. The linearization, done in Fukumori et al. (1999), uses two vertical modes, the barotropic and the first baroclinic. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-TOPEX/POSEIDON residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible.
However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
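The essence of covariance matching, choosing a few error parameters so that a linear combination of assumed covariance structures reproduces the sample covariance of model-data residuals, can be shown with a toy least-squares fit; the two structure matrices and the weights below are placeholders for illustration, not the parameterization used in the thesis.

```python
# Schematic sketch of the covariance-matching idea: express the expected residual
# covariance as a weighted sum of known structure matrices and fit the weights.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = np.arange(n)
B_meas = np.eye(n)                                          # measurement-noise structure (assumed)
B_model = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)  # smooth model-error structure (assumed)

true_w = np.array([2.0, 4.0])                               # "true" weights used only to simulate data
true_cov = true_w[0] * B_meas + true_w[1] * B_model
residuals = rng.multivariate_normal(np.zeros(n), true_cov, size=2000)
S = np.cov(residuals, rowvar=False)                         # sample covariance of model-data residuals

# Match S to w0*B_meas + w1*B_model in a least-squares sense by vectorizing the matrices.
A = np.column_stack([B_meas.ravel(), B_model.ravel()])
w, *_ = np.linalg.lstsq(A, S.ravel(), rcond=None)
print("estimated weights:", w.round(2), " true weights:", true_w)
```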
Genetic noise in the cyanobacterial circadian oscillator
Clocks are generally considered quintessential examples of accurate and precise devices. Biological clocks, however, are continually subjected to intracellular and extracellular fluctuations that might reduce the fidelity of this periodic timer. One fundamental limitation that might set an upper bound on the fidelity is the stochastic nature of gene expression, which creates a noisy intracellular environment. Circadian rhythms, driven by oscillators which provide cells with an internal clock that controls the gene expression program, have been observed in a wide range of organisms from cyanobacteria to mammals. To explore the impact of stochastic expression fluctuations on the performance of the circadian clock, it is necessary to monitor single cells, since population experiments will average out these fluctuations. The cyanobacterium Synechococcus elongatus PCC7942 is an excellent candidate for this study since its core circadian oscillator is well explored. We therefore measured, in single cells, the expression fluctuations of a fluorescent reporter driven by the cyanobacterial circadian oscillator. Repeated microscopic imaging of individual cells and their progeny revealed a robust circadian rhythm, and experiments with cells lacking the proposed central clock proteins confirm the crucial role they play in Synechococcus. Experiments conducted by microscopy and flow cytometry establish that the majority of genetic noise in Synechococcus arises from fluctuations uncorrelated between multiple genes (and therefore does not originate with a global clock noise).
Stress-engineering of nanopatterned membranes to produce three-dimensional structures
Microdevice fabrication is done on the surface of polished flat semiconductor substrates with a series of material depositions, etches and lithography steps. These processes are inherently planar and well suited for the fabrication of billions of micrometer-thick transistors over 300 mm diameter substrates, but impractical for building vertically. This thesis presents a method of building three-dimensionally (3D) with existing planar fabrication technology: fabricate on a thin membrane, and then fold the membrane into a 3D structure. Material stresses patterned on a membrane will cause controlled bending. A simple demonstration is the bilayer, in which a stressed metal is deposited on a stress-free membrane. One challenge with this approach is to achieve very small fold radii for tight 3D packing. The solution presented here is helium ion implantation into the membrane, which creates a large localized stress that is capable of bending a 100 nm thick membrane around a 1 [mu]m radius without fracturing it. The energy and dose of the helium ions control the direction and angle of the fold, which is explained within a theoretical framework, and shown to agree with experiment. One application of stress-folding is a chemical sensor. Built as a 3D micro-switch, the stress that develops in a reactive polymer bends the switch closed. Results show that it operates with negligible power consumption and selectively responds to a target analyte with more than a million-fold electrical resistance change. Other applications discussed include a 3D inductor and non-periodic artificial dielectrics made by membrane folding combined with new 3D optical patterning and magnetic self-alignment techniques. These practical advances open the way to designing a variety of 3D devices which may have broad applications in computing, communications and detection.
Canonical correlation of shipping forward curves
The behavior and interrelations between the main shipping forward curves are analyzed using multivariate statistics after removing the volatility distortions dictated by the Samuelson hypothesis. Principal Components Analysis and Canonical Correlation Analysis were used to demonstrate how the task of explaining the various shipping forward curves can be simplified substantially and how very high correlations can be achieved between shipping forward curves. The conditions under which correlations are higher are discussed, as well as the various applications of these results using case studies. Applications include trading from a hedge fund perspective, cross hedging any physical exposure in illiquid markets, and portfolio optimization. Conditioning as a tool is also examined to demonstrate how more reliable correlation results can be obtained for cross hedging or other purposes, and how the best trading opportunities can be unveiled conditional on recently observed data. Tanker valuations are carried out using the adjusted forward curves with the RAFL ship valuation model. The results are very close to transaction prices for relatively modern vessels, while deviations in older ships are explained with regard to phase-out regulations and other factors. The ship value volatility, and consequently the valuations of typical options, are substantial and increase as a percentage of the ship value with age. These results have to be considered seriously in shipping transactions that include optionalities, which are very common.
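A compact sketch of the two multivariate tools mentioned above, applied to simulated curves rather than actual forward-freight data, is given below; the factor structure, tenors, and sample size are placeholders chosen only to show the mechanics.

```python
# Minimal sketch: PCA on one simulated forward curve, then CCA between two curves.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_days, n_tenors = 500, 6
common = rng.standard_normal((n_days, 1))                          # shared "market level" factor
curve_a = common + 0.3 * rng.standard_normal((n_days, n_tenors))   # e.g. one route's curve (placeholder)
curve_b = common + 0.3 * rng.standard_normal((n_days, n_tenors))   # e.g. another route's curve (placeholder)

# PCA: how few factors explain one curve's co-movement across tenors?
print("curve A variance explained:",
      PCA(n_components=2).fit(curve_a).explained_variance_ratio_.round(3))

# CCA: how correlated are the two curves along their best-aligned directions?
cca = CCA(n_components=1).fit(curve_a, curve_b)
u, v = cca.transform(curve_a, curve_b)
print("first canonical correlation:", round(np.corrcoef(u[:, 0], v[:, 0])[0, 1], 3))
```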
Disease marketing and patient coping : a research study
BACKGROUND: There is a high prevalence of disease marketing actions in the United States that are targeted towards patients with chronic illness. However, no study has assessed the direct effects of these marketing actions on patient coping attitudes and behaviors. OBJECTIVES: This study aims to investigate whether the mere presence of disease marketing impacts patient coping and, if so, how it affects patients' coping attitudes and behaviors. METHODS: We conducted a controlled experiment using online questionnaires to assess the disease perceptions, coping decisions and disease disclosure behaviors of 108 subjects. The subjects were divided into two groups, where the experimental group (N = 55) was shown marketing actions associated with a fictitious disease called Karlsen's Disease while the control group (N = 53) was not shown any marketing actions. The subjects were then asked a series of questions related to health-related coping behaviors and non-health-related social behaviors. T-tests and chi-square analyses were used to analyze the behavioral differences between the experimental (high-marketing) and control (no-marketing) groups. RESULTS: Subjects in the high-marketing group were overall significantly more willing to draft a will than subjects in the no-marketing group (t(106) = 2.64, p = 0.01); high-marketing group subjects were overall significantly more likely to wear a medical ID bracelet than no-marketing group subjects (χ²(1, N = 108) = 3.71, p = 0.05); among subjects who were willing to request a menu accommodation at a dinner party, those in the high-marketing group were significantly more likely to disclose their disease to the party host (χ²(1, N = 90) = 4.65, p = 0.03); subjects in the high-marketing group were also significantly more likely to anticipate greater understanding from the party host towards their menu accommodation request. When controlled for gender, women in the high-marketing group were more likely to join a patient support group (t(61) = 1.75, p = 0.09), and less likely to ask family and friends to shave their heads in a show of solidarity (t(18) = -1.97, p = 0.07), than women in the no-marketing group; men in the high-marketing group were more likely than men in the no-marketing group to disclose their health condition to the dinner party host (χ²(1, N = 47) = 3.61, p = 0.06). Finally, among subjects with at least a 4-year college degree, those in the high-marketing group were more willing than those in the no-marketing group to wear a face mask to protect themselves from airborne pathogens in crowded public places (t(61) = 1.79, p = 0.08). CONCLUSIONS: Based on our results, the presence of disease marketing is anticipated to have a generally positive impact on patient coping attitudes and behaviors. Chronically ill patients exposed to disease marketing actions are expected to anticipate less stigma from others, have increased willingness to disclose their illness and adopt health-seeking behaviors. Disease marketing is also expected to have a differential impact on patients based on their gender and level of education. Follow-up studies using real patients with chronic illness should be carried out to confirm the findings from this study.
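For readers unfamiliar with the statistics reported above, the sketch below runs the same two kinds of tests (an independent-samples t-test and a chi-square test on a 2x2 contingency table) on made-up data with the study's group sizes; the outcome values and counts are placeholders, not the study's dataset.

```python
# Minimal sketch of the two test types used in the analysis, on fabricated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Willingness rating (continuous outcome): independent two-sample t-test.
high_mkt = rng.normal(5.2, 1.5, 55)       # hypothetical ratings, high-marketing group (N = 55)
no_mkt = rng.normal(4.4, 1.5, 53)         # hypothetical ratings, no-marketing group (N = 53)
t, p = stats.ttest_ind(high_mkt, no_mkt)
print(f"t({len(high_mkt) + len(no_mkt) - 2}) = {t:.2f}, p = {p:.3f}")

# Yes/no behavior (categorical outcome): chi-square test on a 2x2 table.
table = np.array([[30, 25],               # high-marketing: yes / no (placeholder counts)
                  [20, 33]])              # no-marketing:   yes / no (placeholder counts)
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.3f}")
```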
Cooperative autonomous tracking and prosecution of targets using range-only sensors
Autonomous platforms and systems are becoming ever more prevalent. They have become smaller and cheaper, have longer endurance, and are now more capable than ever of processing large amounts of information. Despite these significant technological advances, there is still a level of public distrust of autonomous systems. In marine and underwater vehicles, autonomy is particularly important because communications to and from those vehicles are limited, whether due to the length of the mission, the distance from their human operators, the sheer number of vehicles being used, or the data transfer rate available from a remote operator to an underwater vehicle through acoustics. The premise of this research is to use the MOOS-IvP code architecture, developed at MIT, to promote and advance the collective knowledge of marine vehicle autonomy through a project called Hunter-Prey. In this scenario, two or more surface vehicles attempt to cooperatively track an evading underwater target using range-only sensors, and ultimately maneuver into position for a "kill" using a simulated depth charge. This scenario will be distributed to the public through academic institutions and interested parties, who will submit code for the vehicles to compete against one another. The goal of this project is to create and foster an open-source environment where parties can compete and cooperate toward a common goal: the advancement of marine vehicle autonomy. In this paper, the Hunter-Prey scenario is developed, a nominal solution is created, and the parameters for the scenario are analyzed using regression testing through simulation and statistical analysis.
Predictive modeling of combustion processes
Recently, there has been increasing interest in improving the efficiency and lowering the emissions of operating combustors, e.g. internal combustion (IC) engines and gas turbines. Different fuels, additives, etc. are used in these combustors to try to find the optimal operating conditions and fuel combination which give the best results. This process is ad hoc and costly, and the expertise gained on one system cannot easily be transferred to other situations. To improve this process, a more fundamental understanding of the chemistry and physical processes is required. Fundamental constants like the rate coefficients of elementary reactions are readily transferable, enabling us to use results from one set of experiments or calculations in a different situation. In our group we have taken this approach and developed the software Reaction Mechanism Generator (RMG), which generates chemical mechanisms for oxidation and pyrolysis of a given fuel under a set of user-defined physical conditions. RMG uses group additivity values to generate the thermochemistry of molecules and has a database of rate coefficients of elementary reactions. These two sets of data are used to generate chemical kinetic mechanisms in a systematic manner. The reaction mechanisms generated by RMG are purely predictive, and elementary rate coefficients from any reliable source can be added to the RMG database to improve the quality of its predictions. The goal of my thesis was twofold: first, to extend the capabilities and database of RMG, and second, to release it as open-source software for the chemical kinetics community to use.
Design by grammar : algorithmic design in an architectural context
An experimental study was performed to explore the practical applicability of the rule based design method of shape grammars. The shape grammar method is used for the analysis and synthesis of the hayat house type in a particular context. In the analysis part, the shape grammar method is used to extract basic compositional principles of the hayat house. In the synthesis part, first the evolution of a new hayat house prototype is illustrated. An algorithmic prototype transformation is considered. This transformation is achieved in two ways: by changing the values assigned to the variables that define the component objects of the form, and by replacing the vocabulary elements of the form with new ones. Then, the application of the rule based design method for housing pattern generation is explored. The design of a housing complex is illustrated using this method.
Essays in microeconomic theory
Three essays are presented which explore how strategic decision-making on a micro level translates into macro effects. Careful attention is paid to how asymmetric information and free-riding exert strong influences on the behavior of individuals. In the first chapter, a simple model of open-source software development is presented. It is found that either too little development or redundant development effort can occur. While any redundant research effort grows slowly with the size of the community, projects for which user valuations are sufficiently extreme, such as solutions to the Year 2000 Computer Problem (Y2K), will result in significant waste relative to a traditional closed-source environment. Correlations between value and cost are shown to resolve the empirical puzzle as to why some extremely useful and fairly simple software does not get written while more complex software sometimes does. It is shown that a modular design can improve or worsen the performance of an open-source community. In the second chapter, an industry is considered in which new firms require time to learn whether they have the "right stuff" to grow in size and profitability in the long run. The critical input market (that for skilled labor) is imperfectly competitive. By extending the literature on nonstationary dynamic bargaining, analysis is performed on a set of intertemporal externalities exerted by future parties on today's parties, and vice versa. The results suggest why, even if firms are able to write detailed contingent contracts with their current employees, inefficient levels of firm entry will generally exist. The theory also sheds some light on the continuing debate over the contribution of small firms to economic growth. In the third and final chapter, players in a war of attrition care about the identity of the winner, even when they lose. In particular, a three-player war of attrition is considered. Two "team" players enjoy a fraction of their valuation when their partner wins. The remaining, "solo" player benefits only by winning the war. Imposing team symmetry, the solo player drops out more quickly than either team player. The incentive to avoid fighting costs by free riding on a teammate is outweighed by a strategic commitment effect. Team players thus continue to fight even when they have no chance of winning a subsequent two-player subgame with the solo player. Examining limiting results, when the "caring coefficient" between the team players is small, a selection result obtains: the solo player drops out immediately, allowing the team players to then compete in a standard two-player war of attrition.
Metropolitan governance and local land use planning in Boston, Denver, and Portland
Metropolitan areas across the U.S. are characterized by sprawling development which consumes more open space than necessary, leads to the inefficient use of energy and water, increases social inequality, and causes a variety of other negative externalities. One way to prevent this type of development is to promote coordinated land use planning at the metropolitan scale. Metropolitan coordination is a challenge, however, in a country where most land use decisions are made at the local level and most states have not encouraged regional planning. This dissertation examines several different models of metropolitan coordination - or what I call metropolitan governance - and asks how they compare in terms of their relative effectiveness. Given the growing interest in voluntary forms of governance, I explore whether regional planning agencies without authority are as effective at influencing local land use planning as regional planning agencies with greater authority. My research focuses on regional planning agencies in Boston, Denver, and Portland because each agency has a different level of authority over land use planning and a different level of control over certain financial tools. My hypothesis is that regional planning agencies with more tools at their disposal (such as state-mandated planning authority and the power to allocate transportation improvement funds) will be more successful at influencing local land use planning so that it meets regional goals. I find that agencies with financial and regulatory incentives are better able to engage local stakeholders and influence local land use planning.
Evaluating Leadership in Energy and Environmental Design for Neighborhood Developments through existing models of green urbanism
The U.S. Green Building Council, the Congress for the New Urbanism, and the Natural Resources Defense Council are currently developing a rating system aimed at evaluating the environmental sustainability of new neighborhood developments. The system, known as LEED-ND (Leadership in Energy and Environmental Design for Neighborhood Developments), will be the first comprehensive set of planning and design standards that has the potential for widespread adoption by the development industry. In the absence of a set of standards like these, planners and developers have traditionally looked to older communities that exhibit well-regarded environmental design as models. Because LEED-ND has the potential to supplant these examples as a model for guiding future environmental planning and design endeavors, the extent to which LEED-ND captures the values manifested in earlier models should be evaluated. This thesis applies the LEED-ND standards retroactively to three existing communities that the planning and development professions have held up as good examples of environmentally sensitive design. Rather than using the new rating system to evaluate the developments, the developments themselves are used to evaluate LEED-ND and the degree to which it reflects the goals of traditional ecological planning. While the case studies each score high enough to be considered "LEED Certified" (on a modified version of the LEED-ND standards), they all follow a pattern of poor performance on several credits related to smart growth and New Urbanist design ideals. These points indicate areas in which the environmental values of the planning profession have changed over time, and how these values may manifest themselves in the physical design of the built environment.
Supply chain design and site selection for the expansion of international manufacturing capacity
The research conducted for this thesis was performed at "Company X", a U.S.-based engineered goods manufacturer. This project focused on Company X's overall manufacturing strategy, with an emphasis on how global expansion of manufacturing can allow the company to achieve greater international sales growth. Company X's current strategy for supplying non-U.S. markets has largely relied on U.S. manufacturing and assembly, followed by exporting of finished goods. Due to a desire to increase international sales and a need to address tariff and non-tariff barriers in certain key markets, Company X must now evaluate opportunities for in-country manufacturing and assembly in its target markets. This project seeks to evaluate the high-level financial and operational risks of expanding Company X's current manufacturing operations through the use of three types of analysis: 1) A single-site cost analysis of material and inventory flow to an international site; 2) A global manufacturing capacity plan to serve regional markets; and, 3) An evaluation of qualitative risk factors affecting potential site selection. The single-site model involves the development of a simplified cost model. This model demonstrates the cost-competitiveness of each supply chain design alternative for serving a single international site, including the sensitivity of the model to changes in key cost drivers. The global model builds on the results of the single-site model and evaluates the opportunities for international sites to serve both in-country and regional demand for the top markets Company X is targeting.
Functional role of TM poroelasticity in cochlear mechanics
The tectorial membrane (TM) is an extracellular matrix that overlies the mechanically sensitive hair bundles of sensory receptor cells in the inner ear. Based on this strategic position, it has long been accepted that the TM plays a critical role in the stimulation of sensory hair cells. Early measurements demonstrated elastic properties of the TM and suggested that the TM is resonant. More recent measurements have shown that longitudinal coupling of the TM generates traveling waves that contribute to cochlear tuning. Here we show the importance of (1) viscosity in controlling the spread of excitation in TM traveling waves, as well as the importance of matrix porosity in determining (2) the viscosity of genetically modified TMs, and (3) local interactions with hair bundles. To understand the longitudinal spread of mechanical excitation via TM traveling waves, we develop chemical manipulations that systematically and reversibly alter TM stiffness and viscosity. Increasing TM viscosity or decreasing stiffness reduces longitudinal spread of mechanical excitation, thereby coupling a smaller range of best frequencies, which would sharpen tuning. In contrast, increasing viscous loss or decreasing stiffness would tend to broaden tuning in resonance based TM models. Thus, TM wave and resonance mechanisms are fundamentally different in the way they control frequency selectivity. To understand the molecular origin of TM viscosity, we investigate traveling waves in genetically modified TMs. We show that nanoscale pores of TectaY1870C/+ TMs are significantly larger than those of Tectb -/- TMs. The larger pore size reduces shear viscosity, thereby reducing traveling wave speed and increasing spread of excitation. These results demonstrate the previously unrecognized importance of TM porosity in cochlear tuning. To understand how TM porosity affects the local interaction between the TM and hair cells, we apply oscillatory forces to the TM with spherical probe tips. The effective stiffness of the TM is small at low frequencies where the porous matrix and surrounding fluid can move independently. By contrast, the effective stiffness of the TM is large at high frequencies, where these two phases are entrained by viscosity to move together. Interestingly, the transition frequency is in the audio frequency range only for hair bundle sized tips. Furthermore, the transition region is characterized by increased phase lead between the stimulus force and applied displacement that may play an essential role in the stability of micromechanical feedback paths and ultimately the sensitivity of hearing. In conclusion, these results show that traveling wave properties and local interactions with the hair bundles depend critically on TM porosity, thus fundamentally changing the way we think about molecular mechanisms underlying cochlear frequency selectivity and sensitivity.
Towards high performing hospital enterprise architectures : elevating hospitals to lean enterprise thinking
This research is motivated by the National Academy of Engineering and the Institute of Medicine's joint call for research in healthcare, promoting the application of principles, tools, and research from engineering disciplines, and complex systems in particular. In 2005, US healthcare expenditure represented 16% of GDP, with hospitals representing the largest source of expenditure, as is the case in the United Kingdom. Consequently, the strategies and operations developed and implemented by hospitals have a significant impact on healthcare. Today, it would be hard to find a hospital that is not implementing a Lean initiative or that is not familiar with its concepts. However, more often than not, the approach has been narrowly focused at the process level and inside individual service units like an emergency department. This research seeks to elevate traditionally narrow hospital definitions of lean and explore the broader concepts of lean enterprise principles and Enterprise Architecture (EA) while enhancing our knowledge of hospitals' socio-technical complexity and enriching an emerging EA Framework (EAF) developed at the Massachusetts Institute of Technology (MIT). Following an extensive longitudinal multidisciplinary literature review, a number of expert interviews, and preliminary empirical findings, an exploratory inductive and deductive hybrid study was designed to collect and concurrently analyze both qualitative and quantitative empirical data from multiple hospital settings over two main phases: * The first phase consisted of recorded interviews with the Chief Executive Officers of seven leading Massachusetts hospitals, who also provided sensitive internal strategy and operations documents. We explored how hospitals currently measure their performance and how their explicit and implicit practices may be improved using lean enterprise principles. * The second phase comprised two in-depth case studies of large leading multidisciplinary hospitals, one located in the US and the other in the United Kingdom, and included a total of 13 embedded units of analysis. Multiple sources of evidence were collected, including electronic medical records, 54 interviews, observation, and internal documents. Findings were categorized and sorted as phenomena of interest consistently emerged from the data, and enriched both the EAF and our understanding of hospitals' EA in particular. In both in-depth hospital cases we found that the EA consisted of multiple internal architectural configurations and, in particular, that those with an enriched understanding of EA had made decisions which improved not only their local performance but also their interactions with other service units upstream and downstream. Conversely, worse-performing configurations demonstrated a limited understanding of their hospital's EA. We conclude that hospital performance can be improved through an enriched understanding of hospital EA. Furthermore, considering all hospitals included in this study, we propose general and specific recommendations, as well as diagnostic questions, performance dimensions, and metrics, to assist senior hospital leaders in architecting and managing their enterprise.
Boycott Toolkit : collaborative research for collective economic action
Many modern social movements advocate boycotts as a mechanism to pursue social change. However, these campaigns are often broad in scope and limited to committed activists as potential adherents. This thesis describes a web-based platform to organize highly targeted boycotts, perform collaborative research, and disseminate information through social networks. The approach differs from current boycott lists by allowing for community contributed content and by linking specific geographic contexts with potential individual actions. To better understand the needs of a real-world boycott campaign, the author traveled to Israel and the West Bank to meet with human rights advocates, international aid workers, journalists and activists. This field work suggested an appropriate structure in which a better boycott could be developed. After fully developing a tool that addressed these needs and constraints, the tool was broadened to demonstrate wider applications. The Boycott Toolkit was deployed to an international network of activists with seven campaigns that follow several major ongoing boycotts of today. These focused on a diverse set of issues: immigrant rights, environmental justice, marriage equality, reactionary media, and the ongoing Israel-Palestine conflict. The project was released to media attention, and a user survey indicated an appreciation for the careful differentiation between targets, revealing an enthusiastic, though small, set of active contributors.
Hedge funds and private equity versus real assets
The current world economy confronts investors with many challenges, especially investors managing institutional portfolios. Global GDP growth has slowed, and the performance of traditional assets - equities and bonds - alone is often not able to satisfy the various risk and return objectives that institutional investors seek in their portfolios. Amid this challenging investment environment, investors around the world are seeking new investment strategies to lessen their reliance on those traditional asset classes. Consequently, alternative investments continue to garner greater attention from investors as an effective method to diversify their portfolios and to potentially increase overall returns and mitigate risk. However, the term "alternative investments" encompasses a broad range of investment concepts and there is no generally accepted standard definition. A major focus of this thesis is to compare real estate and real assets with hedge funds and private equity, the four most prevalent sub-classes within alternative investments. Specifically, we address the question of whether, or to what extent, real assets including real estate can improve the performance of an institutional investment portfolio, in particular in comparison with private equity and hedge funds. Additionally, we analyze the effect of diversifying globally compared to domestically. We first develop a common ground regarding alternative investments and their characteristics. Then, we focus primarily on traditional mean-variance optimization but also consider risk parity as the allocation criterion to explore the optimal mixed-asset allocation strategies as a function of the investor's expected return target. Additionally, we compare the resulting allocations with institutional investors' current average allocations. The findings clearly indicate that adding alternative asset classes generally offers attractive diversification opportunities to a portfolio consisting of only traditional asset classes - stocks and bonds. We find that real assets and the private equity and hedge fund types of alternative assets both enhance the portfolio, and the aggregated optimal share of these alternative investments is much higher than current industry practice. However, the role of the various types of alternative investments varies widely within a portfolio, in particular as a function of the investor's risk/return appetite.
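The mean-variance allocation step can be sketched in a few lines; the expected returns, covariance matrix, and return target below are illustrative placeholders, not the estimates used in the thesis.

```python
# Minimal mean-variance sketch: minimize portfolio variance for a given return target.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.07, 0.03, 0.06, 0.09])                  # expected returns (placeholders)
cov = np.array([[0.0400, 0.0020, 0.0100, 0.0200],
                [0.0020, 0.0025, 0.0010, 0.0015],
                [0.0100, 0.0010, 0.0225, 0.0080],
                [0.0200, 0.0015, 0.0080, 0.0625]])       # covariance matrix (placeholder)
target = 0.06                                            # investor's expected-return target

def variance(w):
    return w @ cov @ w

constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},      # fully invested
               {"type": "eq", "fun": lambda w: w @ mu - target})    # hit the return target
bounds = [(0.0, 1.0)] * 4                                # long-only allocation
result = minimize(variance, x0=np.full(4, 0.25), bounds=bounds, constraints=constraints)
print("optimal weights [stocks, bonds, real assets, PE/HF]:", result.x.round(3))
```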
Pyrolytic graphite production : automation of material placement
This research examines the process and challenges associated with the addition of an autonomous transfer robot to a manufacturing line at AvCarb Material Solutions for use in the production of pyrolytic graphite. Development of the system included the design and fabrication of an end-effector, selection of a SCARA robotic arm, and incorporation of a vision system. The arm and the end-effector were tested to see if material would shift during transfer. The entire system was tested for repeatability and transfer time. The results of the tests indicated that the transfer system would successfully meet specifications, with high process capability given by a Cpk of 1.47.
Self-tuning one-hundred watt wireless power transfer system
This thesis presents a new method of controlling wireless power transfer suitable for highly resonant magnetically coupled systems. An application of this system is unattended autonomous operation such as recharging of autonomous underwater vehicles or underwater sensor networks. Special attention is given to maximizing power transfer even when there may be spatial variations in transfer distance, which shifts the resonance peak frequency and hence requires automated control. An automated system comprised of a 100 watt switching power amplifier coupled to a frequency controller is designed and implemented. The desired operating frequency is determined by quantification of the real-time AC power supplied to the resonant transmitter. The control system is preset to select operation at either of two selectable modes inherent to the resonant structure. The implemented system can operate underwater, requires only DC voltage inputs and operates over a range of distances while self-tuning to peak power transfer.
Power processing and active protection for photovoltaic energy extraction
Solar photovoltaic power generation is a promising clean and renewable energy technology that can draw upon the planet's most abundant power source - the sun. However, relatively high levelized cost of energy (LCOE), the ratio of the total cost of ownership to the total energy extracted over the lifetime of the generation system, has limited the grid penetration of solar power. Mismatch loss remains an important issue to address in PV systems, and a solar power system can lose as much as 30% of its energy generation capability over a year due to mismatch. Maximum power point tracking (MPPT) using power electronics converters can increase the overall solar energy extraction efficiency and thus reduce the LCOE. Many power electronics solutions have been proposed at the module and submodule levels, which only partially address the mismatch problem. However, scaling the existing solutions to finer optimization granularity has been cost-prohibitive. In the first part of this thesis, a new cell-level strategy, termed diffusion charge redistribution (DCR), is proposed to fully recover mismatch loss. The proposed technique processes power by leveraging the intrinsic solar cell capacitance rather than relying on externally added intermediate energy storage in order to drastically reduce the cost of MPPT while enabling the finest optimization granularity. Moreover, strings balanced by this technique exhibit power versus current curves that are convex, which simplifies the required MPPT algorithm. Cell-level power balancing may also ease the testing and binning criteria during manufacturing, which leads to additional cost savings. Differential power processing (DPP) is a key concept to further improve energy efficiency by minimizing the amount of power conversion. In the second part of this thesis, the concept of differential power processing is introduced to the proposed cell-level power balancing technique by rethinking the string-level power electronics architecture. This enhancement can improve the overall efficiency of DCR by more than 3.5% while permitting the use of a slower DCR switching frequency. It can also be applied to many other cascaded converter architectures to reduce insertion loss. In particular, the proposed differential DCR (dDCR) architecture simultaneously achieves maximum power point tracking without any external passive components at the cell level, and maintains differential power processing with zero insertion loss. This is accomplished by decoupling the MPPT functional block from the DPP functional block. The new power optimization aims not only to maximize energy extraction from each solar cell but also to minimize the amount of processed power. The new multi-variable optimization space for the dDCR topology is evaluated and shown to be convex, which simplifies the required optimization algorithm. The inverter represents a large part of the overall cost and is often the most failure-prone component in a photovoltaic power system. In order to improve the cost and reliability of a grid-tie inverter, switched-capacitor techniques are adopted to reduce the required capacitance and rated voltage of the dc-link capacitor. The proposed switched-capacitor energy buffer can improve capacitor energy utilization by more than four times for a system with a 10% peak-to-peak ripple specification, and enable the use of film or ceramic capacitors to prolong the system lifetime to over a hundred years.
The third part of this thesis explores the SC energy buffer design space and examines tradeoffs regarding circuit topology, switching configuration, and control complexity. Practical applications require control schemes capable of handling source and load transients. A two-step control methodology that mitigates undesirable transient responses is proposed and demonstrated in simulation. Finally, dc power system architectures have attracted interest as a means for achieving high overall efficiency and facilitating integration of renewable and distributed energy sources, such as a photovoltaic system. However, to enable widespread adoption of dc systems, the reliability of fault protection and interruption capability is essential. A new dc breaker topology, called the series-connected Z-source circuit breaker, is introduced to minimize the reflected fault current drawn from a source while retaining a common return ground path. Analogous in some respects to an ac thermal-magnetic breaker, the proposed Z-source breaker can be designed for considerations affecting both rate of fault current rise and absolute fault current level. The proposed manual tripping mechanism also enables protection against both instantaneous large surges in current and longer-term over-current conditions.
Fast scanning two-photon microscope
Fast scanning two-photon microscopy, coupled with the use of light-activated ion channels, provides the basis for fast imaging and stimulation in the characterization of in vivo neural networks. A two-photon microscope capable of fast scanning using acousto-optic deflectors was designed and implemented. The software controller was expanded to handle random-access scanning in three dimensions, so that algorithms that identify neurons from images acquired with the two-photon microscope can be developed. Finally, the localization of the optogenetic Channelrhodopsin-2 channel to the neuron cell body was tested using a ChR2-MBD construct.
Molecular systems analysis of a cis-encoded epigenetic switch
An ability to control the degree of heterogeneity in cellular phenotypes may be important for cell populations to survive uncertain and ever-changing environments or make cell-fate decisions in response to external stimuli. Cells may control the degree of gene expression heterogeneity, and ultimately levels of phenotypic heterogeneity, by modulating promoter switching dynamics. In this thesis, I investigated various mechanisms by which heterogeneity in the expression of FLO11 in S. cerevisiae could be generated and controlled. First, we show that two copies of the FLO11 locus in S. cerevisiae switch between a silenced and competent promoter state in a random and independent fashion, implying that the molecular event leading to the transition occurs in cis. Through further quantification of the effect of trans regulators on both the slow epigenetic transitions between a silenced and competent promoter state and the fast promoter transitions associated with conventional regulation of FLO11, we found that different classes of regulators affect epigenetic, conventional, or both forms of regulation. Distributing kinetic control of epigenetic silencing and conventional gene activation offers cells flexibility in shaping the distribution of gene expression and phenotype within a population. Next, we demonstrate how multiple molecular events occurring at a gene's promoter could lead to an overall slow step in cis. At the FLO11 promoter, we show that at least two pathways that recruit histone deacetylases to the promoter, and in vivo association between the region -1.2 kb from the ATG start site of the FLO11 ORF and the core promoter region, are all required for a stable silenced state. To generate bimodal gene expression, the activator Msn1p forms an alternate looped conformation, where the core promoter associates with the non-coding RNA PWR1's promoter and terminator regions, located at -2.1 kb and -3.0 kb from the ATG start site of the FLO11 ORF respectively. Formation of the active looped conformation is required for Msn1p's ability to stabilize the competent state without destabilizing the silenced state and generate a bimodal response. Our results support a model where multiple stochastic steps at the promoter are required to transition between the silenced and active states, leading to an overall slow step in cis. Finally, preliminary investigations of heterozygous diploids revealed possible transvection occurring at FLO11, where a silenced allele of FLO11 appeared to transfer silencing factors to a desilenced FLO11 allele on the homologous chromosome. These observations suggest a new mechanism through which heterogeneity in FLO11 expression could be further controlled, in addition to the molecular events at the FLO11 promoter we elucidated previously.
Visualizing adversaries : transparent pooling approaches for decision support in cybersecurity
Using coevolutionary algorithms to find solutions to problems is a powerful search technique, but once solutions are identified it can be difficult for a decision maker to select a solution to deploy. ESTABLO runs multiple competitive coevolutionary algorithm variants independently, in parallel, and then combines their test and solution results at the final generation into a compendium. From there, it re-evaluates each solution, according to three different measurements, on every test as well as on a set of unseen tests. For a decision maker, it finally identifies top solutions using various metrics and visualizes them in the context of other solutions. However, it can be difficult to decide which coevolutionary algorithms to run individually or to use in ESTABLO. A coevolutionary variant, POOLING, was therefore created using this same principle of combining multiple variants. POOLING runs competitive coevolutionary algorithm variants, combines their solutions after every generation, and seeds the next generation with the top solutions found. ESTABLO (with POOLING as one of its variants) is demonstrated on multiple cybersecurity-related problems. We found that using ESTABLO was beneficial for most problems, as different variants dominated in different scenarios. We also found that POOLING was able to consistently produce individuals that performed well against adversaries and in the context of all of their peers.
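The pooling idea can be caricatured with a toy loop; the "variants", fitness function, and mutation step below are stand-ins invented for illustration and bear no relation to the actual ESTABLO/POOLING implementation.

```python
# Schematic sketch of pooling: evolve several variant populations, merge their
# solutions each generation, and reseed every variant with the pooled elite.
import random

random.seed(0)

def fitness(solution, tests):
    """Score a candidate solution against a set of adversarial tests (toy objective)."""
    return sum(1 for t in tests if solution > t)

def evolve_one_generation(population, tests):
    """Placeholder variant step: mutate each solution and keep the better of parent/child."""
    next_pop = []
    for s in population:
        child = s + random.uniform(-0.1, 0.1)
        next_pop.append(max((s, child), key=lambda x: fitness(x, tests)))
    return next_pop

variants = [[random.random() for _ in range(10)] for _ in range(3)]   # three variant populations
tests = [random.random() for _ in range(20)]                          # shared adversarial tests

for generation in range(5):
    variants = [evolve_one_generation(pop, tests) for pop in variants]
    pooled = sorted((s for pop in variants for s in pop),
                    key=lambda s: fitness(s, tests), reverse=True)
    elite = pooled[:10]
    variants = [list(elite) for _ in variants]        # seed each variant with the pooled top solutions

print("best pooled fitness:", fitness(elite[0], tests))
```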
Classroom web application facilitating peer feedback & discussion
Peer-to-peer interactions, either through discussion forums or the peer review process, provide students with essential articulation skills as they reflect and respond to the ideas of others. Unfortunately, many students lack confidence in the value of their thoughts and feedback, or students experience difficulty maintaining interest in a peer's thoughts and ideas, which results in a lack of motivation to participate or in comments that are overly superficial, flattering, or brief. The interactive application NORA emphasizes that no one reviews alone by allowing the users to write comments and then combine, like, and rearrange them on a large canvas as they analyze a piece of work, examine a topic, or provide feedback. Thus, NORA facilitates the peer review and discussion process, addressing the challenges faced by students. NORA's visually oriented interface is novel in how it presents content to the students in smaller pieces, allows several threads of comments to be seen at the same time, and provides for easy interaction between users as they write or combine comments, guiding them through specific learning goals chosen by the instructor. Targeted at college students, the application was tested in two different classes, Rhetoric and Communication and Spanish I, with different classroom activities that are typically done orally with extensive class discussion. In both classes, students analyzed the subject matter and reviewed the medium as they responded to each other's comments and to the guidelines provided by the professor. The peer-to-peer interaction allowed users to build upon each other's comments, and promoted accurate, thorough, and relevant feedback in an engaging manner. NORA was seen to encourage more interaction, draw out quieter and shyer students, and boost the number of thoughtful, analytical responses.
Numerical implementations of holographic duality via the fluid/gravity correspondence
The fluid/gravity correspondence describes a map from relativistic fluid dynamics to general relativity in an anti-de Sitter (AdS) background in one more dimension. This is a specific example of a more general principle known as holographic duality, in which a quantum field theory (QFT) defined on the boundary is dual to a gravitational theory in the bulk. Since we can regard hydrodynamics as a low-energy description of many QFTs, the fluid/gravity correspondence lets us probe holographic duality for QFTs at low energy. In this thesis, we discuss holographic duality, hydrodynamic theory and turbulence, numerical implementations of hydrodynamics, black branes in AdS, the fluid/gravity correspondence, and numerical tests of the fluid/gravity correspondence.
Romantic regressions : an analysis of behavior in online dating systems
Online personal advertisements have shed their stigma as matchmakers for the awkward to claim a prominent role in the social lives of millions of people. Web sites for online dating allow users to post lengthy personal ads, including text and photos; search the database of users for potential romantic partners; and contact other users through a private messaging system. This work begins with psychological and sociological perspectives on online dating and discusses the various types of online dating Web sites. Next, it presents an analysis of user behavior on one site in particular, which has more than 57,000 active users from the United States and Canada. A demographic description of the population is given, and then 250,000 messages exchanged by the active users over an eight-month period are analyzed. An examination of which characteristics are "bounding" finds that life course attributes such as marital status and whether one wants children are most likely to be the same across the two users in a dyadic interaction. To understand which characteristics are important to users in deciding whom to contact, regression models show the relative strength of a variety of attributes in predicting how many messages a user with those attributes will receive. By far the strongest predictor of messages received is the number of messages sent. For men, age, educational level, and self-rated physical attractiveness are the next most important qualities. For women, they are not being overweight, self-rated physical attractiveness, and having a photo. Finally, a discussion of the design implications of these findings and other design issues follows the results.
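As a hedged illustration of the kind of count regression described above, the sketch below fits a Poisson model of messages received on several user attributes. The column names are hypothetical placeholders, and the Poisson GLM is one reasonable choice for count data; it is not necessarily the specification used in this work.

```python
# Minimal sketch: count regression of messages received on user attributes.
# Column names are hypothetical; the Poisson family is an illustrative choice.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_message_model(users: pd.DataFrame):
    """users: one row per user, with columns such as messages_received,
    messages_sent, age, education_level, attractiveness, has_photo."""
    model = smf.glm(
        "messages_received ~ messages_sent + age + education_level"
        " + attractiveness + has_photo",
        data=users,
        family=sm.families.Poisson(),
    )
    return model.fit()  # fitted coefficients indicate each attribute's relative strength
```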
Distributed mode estimation through constraint decomposition
Large-scale autonomous systems such as modern ships or spacecraft require reliable monitoring capabilities. One of the main challenges in large-scale system monitoring is the difficulty of reliably and efficiently troubleshooting component failure and deviant behavior. Diagnosing large-scale systems is difficult because of the fast increase in combinatorial complexity. Hence, efficient problem encoding and knowledge propagation between time steps is crucial. Moreover, concentrating all the diagnosis processing power in one machine is risky, as it creates a potential critical failure point. Therefore, we want to distribute the online estimation procedure. We introduce here a model-based method that performs robust, online mode estimation of complex, hardware or software systems in a distributed manner. Prior work introduced the concept of probabilistic hierarchical constraint automata (PHCA) to compactly model both complex software and hardware behavior. Our method, inspired by this previous work, translates the PHCA model to a constraint representation. This approach handles a more precise initial state description, scales to larger systems, and allows online belief state updates. Additionally, a tree-clustering of the dual constraint graph associated with the multi-step trellis diagram representation of the system makes the search distributable. Our search algorithm enumerates the optimal solutions of a hard-constraint satisfaction problem in a best-first order by passing local constraints and conflicts between neighboring sub-problems of the decomposed global problem. The solutions computed online determine the most likely trajectories in the state space of the system. Unlike prior work on distributed constraint solving, we use optimal hard constraint satisfaction problems to increase encoding compactness. We present and demonstrate this approach on a simple example and an electric power-distribution plant model taken from a naval research project involving a large number of modules. We measure the overhead caused by distributing mode estimation and analyze the practicality of our approach.
Standardized system for ballistic missile guidance data analysis
This thesis had the goals of standardizing and automating guidance data analysis at The Charles Stark Draper Laboratory, Inc. (Draper). Standardization was accomplished via the first machine-understandable catalog of Portable Engineering Testing Stations (PETS) variables, stored in a comprehensive MySQL database. Automation was achieved via a MATLAB program designed to be flexible for a wide variety of inputs and outputs. The standardization required time-consuming interface analysis and a tedious programming effort, but has proved successful and is already streamlining the data collection and analysis process for another program at Draper.
Optimizing the distribution network of perishable products to Small Format Stores
FoodCo is a leading foods company that has reputed brands and global operations with revenues in excess of USD 5Bn. Although FoodCo's sales to Small Format Stores (SFS) customers are a small part of the overall sales, it is a fast growing segment where FoodCo sees its future. However, distribution to the SFS channel is a challenge - FoodCo needs to ship refrigerated and frozen products to over 40,000 stores through multiple distributors. Furthermore, such stores are characterized by low sales velocity relative to traditional retailers. The transactional nature of FoodCo's supply chain relationship with channel partners creates challenges for FoodCo in influencing key decisions in the supply chain. To tackle the problem, the authors reviewed the literature and interviewed experts and practitioners to understand best practices in Consumer Packaged Goods (CPG) companies across the world serving SFS. Although there were few direct parallels, collaboration was found to be a practice that successful companies employed. The authors also analyzed data including store sales, orders to FoodCo, promotions, supply chain costs, etc. They created a quantitative model which suggested that the fees paid out to distributors for their full service are not proportional to the costs. They also concluded that FoodCo's lack of visibility into sell-through demand made it subject to a strong bullwhip effect, leading to large amounts of inventory and shrinkage. Further, they identified that store sales were scattered geographically and that direct shipments to high-selling stores were not possible. Based on the analysis, the authors recommend that FoodCo start collaborating with its channel partners. First, FoodCo could communicate the value of collaboration to its channel partners in order to gain their support. Then, FoodCo and the retailers can share their demand plans with each other, fostering collaboration and elevating the manufacturer-retailer relationship to a strategic level. Further, FoodCo could build scale by consolidating volumes through a single re-distributor for channels where sales volumes are very low.
Calculation of the axial charge of a heavy nucleon in Lattice QCD
In this thesis, we aim to calculate the non-renormalized axial charge gA of a heavy nucleon made of quarks at the physical mass of the strange quark. We present the framework of Lattice QCD, which makes the calculation of such observables attainable from first principles. The data used for the estimation of gA were obtained on a 24³ x 64 hypercubic lattice with lattice spacing a ≈ 0.12 fm and pion mass mπ = 0.450 GeV. Three different source-sink separations were used, tsink = [12a, 14a, 16a]. For the signal at each source-sink separation we perform a correlated χ² fit and obtain the following values for gA: 0.551, 0.564 and 0.556. The unrenormalized value of gA is extracted by taking the limit tsink → ∞ and is found to be gA = 0.558. We discuss how the accuracy of this result is compromised by the small number of tsink values, by excited-state contamination and by the increase of statistical noise with time.
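As a rough illustration of the plateau-fit step described above, here is a minimal Python sketch of a correlated χ² fit of a constant to a correlator ratio at a single source-sink separation. The data points and covariance matrix below are fabricated placeholders, not the thesis data; only the fitting procedure itself is the point.

```python
# Hedged sketch: correlated chi^2 fit of a constant ("plateau") to a ratio of
# three-point to two-point correlators at one source-sink separation.
# The ratio values and covariance are illustrative placeholders.
import numpy as np

def fit_plateau(ratio, cov):
    """Best-fit constant c minimizing (ratio - c)^T C^{-1} (ratio - c)."""
    cinv = np.linalg.inv(cov)
    ones = np.ones_like(ratio)
    # Closed-form weighted least squares for a one-parameter (constant) model.
    c = (ones @ cinv @ ratio) / (ones @ cinv @ ones)
    chi2 = (ratio - c) @ cinv @ (ratio - c)
    return c, chi2

ratio = np.array([0.549, 0.553, 0.552, 0.550])        # plateau timeslices (toy numbers)
cov = 1e-5 * (np.eye(4) + 0.3 * np.ones((4, 4)))      # correlated errors (toy numbers)
gA, chi2 = fit_plateau(ratio, cov)
print(f"plateau fit: gA = {gA:.3f}, chi^2 = {chi2:.2f} (ndof = 3)")
```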
Assessing land conservation strategies : the case of the Florida Everglades
South Florida's Everglades is home to 67 threatened and endangered species. By 2100 it is estimated that sea level rise will inundate over 20% of existing conservation lands. Species will be dislocated and will migrate to new land. Simultaneously, more than 500,000 people are moving to the region annually. The new populations are subdividing and developing rural lands. By 2100, it is estimated that over 60% of rural land will be urbanized. In this thesis, I use Geographic Information Systems to project the location of urban land, conservation land and inundated land in South Florida over the next 50 years. I assess fee simple purchase and conservation easements as potential methods of conveying land protection. I conclude that none of the current methods of conservation have the capacity to manage the large-scale land protection that will be critical in the coming years if we are to protect our species from the emergent and significant stressors of climate change and urbanization. I conclude that a major federal initiative based on purchasing deed restrictions and a new agency that specializes in monitoring will be necessary to quickly create a large, adaptive ecological reserve network.
Inhibitory synapses are repeatedly assembled and removed at persistent sites in vivo
Structural plasticity, the rewiring of synaptic connections, occurs not only during development, but is prevalent in the adult brain and likely represents the physical correlate of learning and memory. Removal or addition of excitatory and inhibitory synaptic inputs onto a neuron can affect their relative influence on excitation in specific dendritic segments, and ultimately regulate neuronal firing. However, the structural dynamics of excitatory and inhibitory synapses in vivo, and their relation to each other, is not well understood. To gain insight into synaptic remodeling in the adult brain in vivo, we used dual- and triple-color two-photon imaging to track the dynamics of all inhibitory and excitatory synapses onto a given neuron in the cerebral cortex at different timescales. By studying synaptic changes over 4-day or 24-hour intervals we were able to determine that inhibitory synapses are remarkably dynamic in vivo. We found that inhibitory synapses occur not only on the dendritic shaft; a significant fraction is also present on dendritic spines, alongside an excitatory synapse. Inhibitory synapses on these dually innervated spines are remarkably dynamic, in stark contrast to the stability of excitatory synapses on the same spines. Many of the inhibitory synapses on dendritic spines repeatedly disappear and reappear in the same location. These reversible structural dynamics indicate a fundamentally new role for inhibitory synaptic remodeling: flexible, input-specific modulation of stable excitatory connections. To determine whether synapse dynamics are regulated by experience-dependent plasticity, we performed monocular deprivation, finding that an ocular dominance shift reduces inhibitory synaptic lifetime and increases recurrence. To investigate the molecular mechanism of rapid inhibitory synapse appearance and removal, I am currently testing molecular interventions that influence the clustering of gephyrin, a scaffolding molecule that anchors inhibitory receptors at postsynaptic sites.
Crashworthiness analysis of ultralight metal structures
In the design of lightweight crashworthy metal structures, thin-walled prismatic components have been widely used in aircraft, high speed trains, fast ships, and automobiles. Two new types of such components are proposed, both of which consist of a thin-walled member and an ultralight metal core such as an aluminum honeycomb or a closed-cell aluminum foam. The first type is the thin-walled member filled with the ultralight metal core, while the second type is a double-walled member with the ultralight metal core sandwiched between the two walls. The purpose of this research is to study the crushing behavior of the ultralight metal core and to determine the crashworthiness and the weight saving of structures composed of the two types of reinforced components, each of which may undergo axial crushing, bending, or twisting. Numerical expressions are developed to predict the crushing behavior of the new types of components. The first task of the research is to study the crushing behavior of closed-cell aluminum foams. A new model of a truncated cube, which captures the basic folding mechanism of an array of cells, is developed. The model consists of a system of collapsing cruciform and pyramidal sections. Theoretical analysis is based on energy consideration in conjunction with the minimum postulate in plasticity. The assumed kinematic model for the crushing mechanism of truncated cube cells gives good agreement with the deformation mechanism obtained from the numerical simulation. The analytical formulation for the crushing resistance of the truncated cube cell is shown to correlate very accurately with the numerical results. Closed form solutions for the crushing resistance of closed-cell aluminum foams in terms of relative density are developed. The formulas are compared with the experimental results and give excellent agreement. The double-walled sandwich columns appear to have the highest specific crushing resistance. It is found that during progressive crushing, core-face plate debonding occurs only at the corner portion of the column, while the web portion remains intact and dissipates most of the work. In the case of filled columns, a significant increase of the mean crushing force is also obtained by filling the thin-walled columns with aluminum foam. It is found that the increase of the mean crushing force of a foam-filled column has a linear dependence on the foam compressive resistance and cross-sectional area of the column, with a proportionality constant equal to 1.8. The proposed solution is well correlated with the experimental data for a wide range of column geometries, materials, and foam strengths. The ultralight metal filler also provides a higher bending resistance by retarding inward fold formation at the compression flange. In the case of aluminum foam filling, the presence of the foam filler changes the crushing mode from a single stationary fold to multiple propagating folds. The progressive crushing prevents the drop in load carrying capacity due to sectional collapse. This phenomenon is captured by both experiment and numerical simulation. Partially foam-filled beams offer nearly as high a bending resistance as fully filled beams. The concept of the effective foam length is developed. Two distinct crushing states are observed in the torsional deformation of filled thin-walled bars, namely the initial torsional resistance and the stabilized torsional crushing mechanisms. The ultralight metal filler provides a stabilizing effect on the torsional crushing process.
The inward fold collapse of the column wall is restricted due to the filler, and the plastic resisting mechanism is increased through the formation of outward-diagonal shear bands. The structural weight saving is assessed through a concept of specific energy absorption, defined as the work absorbed divided by the total component weight. The filled and double-walled sandwich members give higher specific energy absorption than the empty thin-walled member. A 40 - 100 % weight saving can be achieved by the double-walled sandwich components, while 25 - 70 % weight saving can be achieved by filled thin-walled components. It is proven that the proposed components are attractive structural elements for weight-efficient crashworthy design.
Learning from the unique public realm of the long distance train
The Long Distance American Train is a unique, meaningful, and somewhat mysterious site. These ephemeral, mobile societies exhibit remarkable qualities, including intensive interactions with strangers, conviviality amongst diverse groups, imagination and reflection, and immersion in the American landscape. I conducted participant observation research while riding seven long distance (over 700 mile) trains in December 2014 - January 2015 to discover how, why and for whom long distance train travel is unique and meaningful, and whether there were lessons that could be learned regarding how to design and manage the train and non-train public spaces. In addition to my ethnographic research, I analyzed the changing symbolism of the train. My research indicates that the long distance train exhibits qualities similar to Foucault's concept of heterotopia, and that though it is in many ways private and limited, it also deserves consideration as a unique, national public space. The train is also an avatar of modernity, and was crucial in enacting and making possible the American social and spatial landscape inherited from the late 19th century. It is thus an important place to reflect on these conditions and imagine a new path. I suggest that the train is able to provide these unique experiences because it exhibits eight spatio-temporal qualities: Functionality, Accessibility, Visual Connection to Landscape, Human-Scale Design, Grounded Placelessness, (De)Regulation, Duration, and Autonomy. Amtrak can build upon its unique platform by improving the train through more fully realizing these conditions. Abstracted, these principles can also be applied to other, non-train public spaces. In comparing the partially successful public realm of the train to the status quo of public space design in the United States, I unearth several additional principles that demand rethinking. Finally, I suggest that Amtrak and the U.S. Government can better take advantage of this underutilized platform by enacting a series of changes to the great benefit of many passengers.
Policies to manage electronics waste : an analysis of US and EU regulatory initiatives
Policies to address the environmental challenges associated with the disposal of electronics waste are being developed in the US and the EU. This paper offers a standard critique of those policies and also analyzes them in terms of the likelihood they will induce innovation in response to their requirements. The US proposal relaxes the hazardous waste handling requirements for cathode ray tubes and mercury containing equipment, with the intent that more of them will be recycled. However, the rule does not contain any requirement that recycling occur, and the economic incentives to do so are minimal. Technological innovation and diffusion of current technology are both unlikely responses to the proposed rule. In addition, the rule does not apply to many users and types of electronics equipment, thereby only addressing a very small portion of the overall electronics waste issue. The rule fails to consider other materials found in electronics equipment and issues regarding recyclability, recycled content, secondary markets, and materials substitution. Two of the EU proposed directives are much more comprehensive in their coverage of electronics waste. They require certain recycling targets to be met and mandate the elimination of some hazardous substances from electronics equipment. Diffusion and incremental innovation are likely responses, with perhaps radical innovation only as the targets become more stringent in future years. The directives require additional clarification regarding the types of electronics equipment covered, financing mechanisms, the structure of the recycling targets, the granting of exemptions, and the development of secondary markets. Portions of the two EU proposed directives may be challenged under World Trade Organization rules. An analysis shows that provisions in the directives could face some difficulty in obtaining exemptions. A growing disconnect between global trade rules and the making of environmental policy is examined. Finally, one additional proposed EU policy that would require manufacturers of electronics equipment to consider the life cycle environmental impacts of the equipment during its design and manufacture is analyzed. At present, innovation is an unlikely response to this directive, though the inclusion of specific performance requirements could substantially improve it. Suggested modifications are offered for each of the policies examined.
An imperative extension to alloy and a compiler for its execution
This thesis presents an extension of the Alloy specification language with the standard imperative programming constructs, allowing for the natural specification of dynamic systems. Using this extension, programmers can express stateful behavior directly, mixing declarative and imperative styles as desired. A relational semantics for the new imperative constructs will ensure that specifications written using the extension are translatable into the original Alloy language, allowing their analysis using the existing Alloy Analyzer. The thesis also presents a compiler from the extended Alloy language to Prolog so that specifications may be efficiently executed. While the Alloy Analyzer's SAT-based analysis engine is incredibly fast in exploring a wide search tree, Prolog's unification-based strategy has the ability to delve very deeply into highly constrained search trees. Many specifications of dynamic systems have this property, making Prolog a perfect engine for executing them. This combination of a language extension and a compiler for its execution represents an end-to-end solution for programming. The Alloy Analyzer allows the programmer to check properties of a high-level specification of the desired behavior, and the Prolog-based compiler allows the execution of that specification; if the compiled program is not fast enough, the programmer may refine the specification to make it faster, and the Alloy Analyzer will check that the refinement step has not introduced errors.
The role of large-scale government-supported research institutions in development : lessons from Taiwan's Industrial Technology Research Institute (ITRI) for developing countries
This thesis seeks to examine the extent of the role that the Industrial Technology Research Institute (ITRI) played in Taiwan's high-technological development and whether developing countries of today can promote such development by creating similar institutional arrangements. Literature on innovation systems was reviewed, particularly national innovation systems and the role of R&D institutions within these. Taiwan's recent economic success, deemed attributable to economic and institutional reforms in recent decades, was also studied. In-depth analysis was carried out of its leading high-technological research institute, ITRI, which bridges the gap between industry and academia. Although the case of Taiwan is sometimes presented as a unique example of industrial success of an SME-based state, this thesis argues that this success was possible because the research and development process had a large institute at its core. One way of creating such a research scale is by merging existing institutes, a process that would result in more efficient use of capital and human resources. The case of high-technological development in Pakistan is briefly assessed in order to gauge how its existing institutional structure could be amended to allow such changes to be made. The study concludes with the following three main points: (i) scale is an important factor: Taiwan's SME-based industry was able to succeed because of a large research institute at its core; (ii) in developing countries, governments decide which form of high-technology to pursue and when; thus, timing and choice of sector are important; and (iii) political leadership was seen to be important in the case of Taiwan's development in high-technology, and can play a key role in developing countries of today.
Applications of three-manifold Floer Homology
In this thesis we give an exposition of some of the topological preliminaries necessary to understand 3-manifold Floer Homology constructed by Peter Kronheimer and Tomasz Mrowka in [16], along with some properties of this theory, calculations for specific manifolds, and applications to 3-manifold topology.
Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions
Planning, control, perception, and learning for multi-robot systems present significant challenges. Transition dynamics of the robots may be stochastic, making it difficult to select the best action each robot should take at a given time. The observation model, a function of the robots' sensors, may be noisy or partial, meaning that deterministic knowledge of the team's state is often impossible to attain. Robots designed for real-world applications require careful consideration of such sources of uncertainty. This thesis contributes a framework for multi-robot planning in continuous spaces with partial observability. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems. This thesis extends the Dec-POMDP framework to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP), taking advantage of high-level representations that are natural for multi-robot problems. Dec-POSMDPs allow asynchronous decision-making, which is crucial in multi-robot domains. This thesis also presents algorithms for solving Dec-POSMDPs, which are more scalable than previous methods due to the use of closed-loop macro-actions in planning. The proposed framework's performance is evaluated in a constrained multi-robot package delivery domain, showing its ability to provide high-quality solutions for large problems. Due to the probabilistic nature of state transitions and observations, robots operate in belief space, the space of probability distributions over all of their possible states. This thesis also contributes a hardware platform called Measurable Augmented Reality for Prototyping Cyber-Physical Systems (MAR-CPS). MAR-CPS allows real-time visualization of the belief space in laboratory settings.
Tissue oxymetry using magnetic resonance spectroscopy
A noninvasive method for in vivo measurement of tissue oxygen concentration has been developed. Several techniques currently in use suffer limitations that prevent their practical clinical application. Our approach uses the paramagnetism of molecular oxygen as the basis for noninvasive tissue oxymetry: because molecular oxygen is paramagnetic, magnetic resonance spectroscopy (MRS) can be used to measure tissue oxygenation. Chemical shifts of brain metabolites and water move downfield with increased amounts of oxygen. Chemical shifts were linearly dependent on the fraction of inspired oxygen (FiO₂), and the slope is approximately 0.0003 ppm per percent change of oxygen. The slope was not significantly different between brain metabolites and water. Furthermore, the slope agreed with simple theoretical predictions using Henry's law and the magnetic susceptibility of molecular oxygen. Changes in brain oxygenation were confirmed using gradient-echo BOLD measurements of changes in R2* as a function of FiO₂ in the same animals. The results demonstrate the promising potential of this technique. The implementation of this method in stroke and tumor models is discussed.
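For concreteness, the reported slope implies a simple conversion between a measured shift change and the corresponding change in inspired oxygen; a minimal Python sketch follows, with the example shift value chosen purely for illustration.

```python
# Hedged sketch: convert a measured chemical-shift change into an estimated
# change in inspired oxygen fraction using the ~0.0003 ppm per percent-O2
# slope reported above. The example shift value is illustrative only.
SLOPE_PPM_PER_PERCENT_O2 = 0.0003

def delta_fio2_percent(delta_shift_ppm: float) -> float:
    """Estimated change in FiO2 (percentage points of O2) for a downfield shift."""
    return delta_shift_ppm / SLOPE_PPM_PER_PERCENT_O2

print(delta_fio2_percent(0.006))  # a 0.006 ppm shift -> roughly a 20-point FiO2 change
```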
Execution cost optimization for hierarchical planning in the now
For robots to effectively interact with the real world, they will need to perform complex tasks over long time horizons. This is a daunting challenge, but the human ability to routinely solve these problems leads us to believe that there is underlying structure we can leverage to find solutions. Recent advances using hierarchical planning [19] have been able to solve these problems by breaking a single long-horizon problem into several short-horizon problems. While this approach is able to effectively solve real-world robotics planning problems, it makes no effort to account for the execution cost of an abstract plan and often arrives at poor-quality plans. In this thesis, we analyze situations that lead to execution cost inefficiencies in hierarchical planners. We argue that standard optimization techniques from flat planning or search are likely to be ineffective in addressing these issues. We outline an algorithm, RCHPN, that improves a hierarchical plan by considering peephole optimizations during execution. We frame the underlying question as one of evaluating the resource needs of an abstract operator and propose a general way to approach estimating them. We introduce the marsupial logistics domain to study the effectiveness of this approach. We present experiments on large problem instances from marsupial logistics and observe up to a 30% reduction in execution cost compared with a standard hierarchical planner.
The challenge for liquidity in small stock exchanges and trading portals : the case of the Belgian Stock Exchange
The world-wide consolidation in the electronic trading industry has provided evidence that small exchanges and trading portals need to deliver more than sophisticated technology, streaming quotes and market data. In order to deliver value and survive, they need to provide liquidity. Noteworthy among the most recent industry challenges is the dismal performance of exchanges like the Belgian Stock Exchange that finally caved in to the inevitable merger with the London Stock Exchange. The Italian exchange took similar action and so did a number of other small exchanges in the European Union. This development has exacerbated the debate over the need for small stock exchanges and portals to exist unless they can provide both superior technology and liquidity. This paper proposes to examine the performance of the Belgian stock exchange and a select group of portals trading Belgian equities through the metric of liquidity access for fostering trade execution and capital flows. Illiquidity and the dislocation of a number of securities traded on the Belgian exchange are examined using transaction costs and the price impact of trading (as opposed to just asset prices) to explain such lack of liquidity.
Lean-burn characteristics of a gasoline engine enriched with hydrogen from a plasmatron fuel reformer
If a small amount of hydrogen is added to a gasoline-fueled spark ignition engine, the lean limit of the engine can be extended. Lean running engines are inherently more efficient, and have the potential for significantly lower NOx emissions. Hydrogen addition reduces the combustion variability. In this engine concept supplemental hydrogen is generated on-board the vehicle by diverting a small fraction of the gasoline to a plasmatron where it is partially oxidized into a stream containing hydrogen, carbon monoxide, nitrogen, and carbon dioxide. It is then mixed in the intake port with the main fuel/air charge to provide hydrogen-enhanced lean operation. A series of experiments was performed to study the feasibility of this engine concept. Since the plasmatron is still under development, the final composition of the plasmatron gas is not yet known. Therefore, two different bottled gases were used to simulate the plasmatron output. An ideal plasmatron gas (H2, CO, and N2) was used to represent the output of the theoretically best plasmatron. In addition, a typical plasmatron gas (H2, CO, N2, and CO2) was used to represent the current output of the plasmatron. A series of hydrogen-only addition experiments was also performed to quantify the impact of the non-hydrogen components in the plasmatron gas. Various amounts of plasmatron gas were used, ranging from the equivalent of 10%-30% of the gasoline being converted in the plasmatron. At each of these fractions a sweep of the relative air/fuel ratio was performed, starting at stoichiometric and slowly increasing lambda until the engine began to misfire. At each operating point data were collected to quantify efficiency, emissions, and combustion stability. All of the data were compared to a baseline case of the engine operating stoichiometrically on gasoline only. It was found that the peak net indicated fuel conversion efficiency of the system increased 12% over the baseline case. In addition, at this peak efficiency point the engine-out NOx emissions decreased by 94% (165 ppm vs. 2800 ppm) while the hydrocarbon emissions decreased by 6% (2210 ppm vs. 2350 ppm). NOx emissions reductions of 99% were possible, although they occurred at slightly lower overall efficiency points. In the analysis, the relative air/fuel ratio was found to be an inadequate measure of mixture dilution. Two new dilution parameters were defined. The Volumetric Dilution Parameter, VDP, represents the heating value per unit volume of the air/fuel mixture. Pumping work reductions due to dilution correlate with VDP. The Thermal Dilution Parameter, TDP, represents the heating value per unit heat capacity of the fuel/air mixture. Combustion and emissions parameters correlate with TDP.
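One plausible formalization of the two dilution parameters defined verbally above, written with symbols chosen here for illustration (m_f for the fuel mass per cycle, Q_LHV for its lower heating value, V_mix for the mixture volume, and m_i, c_p,i for the mass and specific heat of each mixture species), is:

```latex
% Hedged formalization of the dilution parameters (notation assumed here):
% VDP = heating value per unit volume of the air/fuel mixture,
% TDP = heating value per unit heat capacity of the fuel/air mixture.
\mathrm{VDP} = \frac{m_f\, Q_{\mathrm{LHV}}}{V_{\mathrm{mix}}},
\qquad
\mathrm{TDP} = \frac{m_f\, Q_{\mathrm{LHV}}}{\sum_i m_i\, c_{p,i}}
```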
Transport of proteins, biopharmaceuticals and small pharmaceutical compounds into normal and injured cartilage
Traumatic joint injury can induce acute damage to cartilage and surrounding joint tissues accompanied by an inflammatory response, which can significantly increase the risk of developing osteoarthritis. The mechanism by which joint injury results in disease development is not fully understood. However, chondrocyte metabolism is greatly affected by the transport properties of cartilage extracellular matrix, which determine the accessibility and the concentrations of various proteins and therapeutic agents to cells and cell receptors. Using in vitro models of mechanical injury to cartilage, we have characterized the uptake and binding of proteins and biopharmaceuticals in normal articular cartilage and have compared the results to those in cartilage subjected to mechanical injury and pro-inflammatory cytokines. We studied equilibrium partitioning and non-equilibrium transport into cartilage of Pf-pep, a 760 Da positively charged peptide inhibitor of the proprotein convertase PACE4. Competitive binding measurements revealed negligible binding to sites in the matrix. The uptake of Pf-pep depended on GAG charge density, consistent with predictions of Donnan equilibrium. The diffusivity of Pf-pep was measured to be ~1 × 10⁻⁶ cm²/s, close to other similarly sized non-binding solutes. These results suggest that small positively charged therapeutics will have a higher concentration within cartilage than in the surrounding synovial fluid, a desired property for local delivery; however, such therapeutics may rapidly diffuse out of cartilage unless there is additional specific binding to intratissue substrates that can maintain an enhanced intratissue concentration. We have also examined the effect of mechanical injury and the inflammatory cytokine TNFα on the uptake of an anti-IL-6 antibody Fab fragment (48 kDa). Anti-IL-6 Fab was able to penetrate into cartilage, though final equilibrium uptake would likely occur only after 6-10 days within 1 mm thick explant disks. Uptake of anti-IL-6 Fab was significantly increased following mechanical injury of the cartilage in vitro. A further increase in uptake was caused by TNFα treatment combined with mechanical injury. The increase in uptake was accompanied by GAG loss from the tissue, suggesting that there can be greater accessibility of large solutes into cartilage after direct mechanical injury or inflammatory cytokine treatment, and that the increase in uptake was related to the severity of matrix damage and loss. We also studied the binding and uptake of TNFα in articular cartilage and observed significant binding of TNFα to matrix sites. Binding was stronger for the monomeric form of TNFα compared to the trimeric form. Binding of TNFα was not disrupted by pre-treatment of the tissue with trypsin, indicating that the intra-tissue binding sites were not removed by trypsin-induced proteolysis of the matrix. These results suggest that matrix binding as well as monomer-trimer conversion of TNFα both play crucial roles in regulating the accessibility of TNFα to cell receptors. The results of this thesis are significant in that they suggest that injurious mechanical loading and inflammatory cytokines applied to cartilage can affect transport processes within the tissue. The resulting altered transport, in turn, can influence the accessibility of pro-inflammatory cytokines and of anti-catabolic drugs designed to treat the pathogenesis of OA.
The self-aware city
This thesis explores the idea of real-time urban space management. As increasing amounts of real-time information about the city, specifically the location of people and resources, become available, it becomes necessary to explore how different strategies of distributing real-time location information can be used as urban design tools for more sustainable resource allocation. I focus on the study of street parking, a system that clearly has a market situation with demand and supply but, due to a lack of information, is poorly managed today. I argue that the equilibrium state of the parking market in popular areas, like many other urban space markets, is one of frequent excess demand. The important challenges are therefore allocation optimization and queuing management. I propose five different strategies of using real-time location information to reduce search times and analyze the system through computer simulations and logic. Borrowing ideas from Game Theory, I try to illustrate how collaborative behavior between drivers could yield the most efficient results from both the individual and the group point of view. Lastly, I outline some challenges that the use of real-time information systems introduces to the realm of urban design in general.
High-resolution temporal records of magmatism, sedimentation, and faulting at evolving plate boundaries
This dissertation uses high-precision U-Pb zircon geochronology to document the spatial and temporal distribution of magmatism, deformation, and sedimentation during Paleogene ridge-trench interaction in western Washington. Chapter 1 creates a regional stratigraphy for nonmarine sedimentary and volcanic rocks throughout central and western Washington and demonstrates that the depositional history of these rocks is consistent with accretion of overthickened oceanic crust (Siletzia terrane) and passage of a triple junction. Chapter 2 establishes the volcanic stratigraphy of northern Siletzia and shows that it is consistent with its origin as an accreted oceanic plateau, possibly developed above a long-lived Yellowstone hot spot. Chapter 3 quantifies magma emplacement rates in a large granitoid intrusive complex (Golden Horn batholith) that was emplaced during Paleogene ridge-trench interaction. Parts of this batholith were constructed at the highest rate ever documented in a large granitoid intrusive complex (~0.0125 km³/a). This high emplacement rate may be related to its unique tectonic setting. The second tectonic setting examined is the rift-to-drift transition in the Newfoundland-Iberia rift. This rift is considered the type example of a magma-poor rifted margin, and both margins consist of broad areas of exhumed subcontinental lithospheric mantle. Chapter 5 documents time-transgressive magmatism from east to west across both margins and suggests that the mantle was exhumed during a single period of detachment faulting.
Optimization and visualization of strategies for platforms, complements, and services
This thesis probes the causal elements of product platform strategies and the effects of platform strategy on a firm. Platform strategies may be driven by internal or external forces, and the lifecycle of a firm and of a platform strategy evolve over time in response to both the needs of the firm and the changes in the external environment. This external environment may consist of a "platform ecology," in which the platform strategies of firms affect one another. These effects may be positive, buoying revenues, or negative, eliminating markets and appropriating value. The thesis assumes that a company whose strategy is to produce complements or services for another firm's platform may be said to have a platform strategy, and further assumes that a company with a modular platform strategy built primarily for its own internal use may also be said to have a platform strategy. Finally, this thesis will demonstrate example visualization techniques that make the nature of such platform strategies more apparent. This thesis asks and tries to answer a few key questions: ** What comprises the elements of a platform strategy? ** What kinds of companies adopt these strategies? ** What circumstances drive adoption? ** What outcomes can be expected? ** What happens to such a strategy over time? The thesis asserts and attempts to prove these hypotheses: ** Platform Strategies of one firm can influence those of many other firms, by direct effect on the other firms, or by simple economic benefit example. ** Return on Investment (ROI) is influenced by these strategies. ** Beyond ROI and thus Profit fluctuations, company survival, in an evolutionary Darwinian sense, may depend on these strategic choices.
Spatial laws of urban micro-agglomerations.
Intercity studies have shown that a city's characteristics, ranging from infrastructure to crime, scale as a power of its population. These studies, however, have not been extended to the intra-city scale, leaving open the question of how urban characteristics are distributed within a city. Here we study the spatial organization of one important urban characteristic: its amenities, such as restaurants, cafes, and libraries. We use a dataset summarizing the position of more than 1.2 million amenities disaggregated into 74 distinct categories and covering 47 U.S. cities to show that: (i) the spatial distribution of amenities within a city is characterized by dense agglomerations of amenities (which we call micro-clusters), (ii) unlike in the intercity case, size is a poor predictor of the amenities of each type that locate in each micro-cluster, and (iii) the number of amenities of each type in a micro-cluster is better predicted using information on the collocation of amenities observed across all micro-clusters than using the micro-cluster's size. Finally, we use these findings to create a recommendation algorithm that suggests amenities that are missing in a micro-cluster and can inform the efforts of developers and planners looking to construct and regulate the development of new and existing neighborhoods.
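As a rough illustration of prediction from collocation rather than size, the Python sketch below predicts the count of one amenity type in each micro-cluster from the counts of all other types, with weights fit across micro-clusters by least squares. The scheme and matrix layout are assumptions for illustration, not the algorithm used in the thesis.

```python
# Hedged sketch: collocation-based prediction of amenity counts per micro-cluster.
import numpy as np

def predict_counts(counts: np.ndarray, target_type: int) -> np.ndarray:
    """counts: (n_clusters, n_types) matrix of amenity counts per micro-cluster.
    Predict counts of target_type in every cluster from the remaining types."""
    other = np.delete(counts, target_type, axis=1)     # drop the target column
    y = counts[:, target_type]
    X = np.c_[np.ones(len(other)), other]              # intercept + other-type counts
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # fit collocation weights
    return X @ coef                                    # predicted counts per cluster
```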
Generating single-domain antibodies against fibronectin splice variants
Here, I describe the process of generating single-domain antibodies that bind to splice variants of fibronectin containing EIIIA or EIIIB. Alpacas were immunized with either purified antigen cocktails or ECM from tumor samples, and antibody libraries were generated. Panning with these libraries, I selected VHHs that bind to EIIIA and EIIIB. Since these splice variants are upregulated in tumor angiogenesis and are rarely seen elsewhere in adult tissues, antibodies targeting EIIIA or EIIIB may be of use for imaging tumors and metastases.
Statistical models in medical image analysis
Computational tools for medical image analysis help clinicians diagnose, treat, monitor changes, and plan and execute procedures more safely and effectively. Two fundamental problems in analyzing medical imagery are registration, which brings two or more datasets into correspondence, and segmentation, which localizes the anatomical structures in an image. The noise and artifacts present in the scans, combined with the complexity and variability of patient anatomy, limit the effectiveness of simple image processing routines. Statistical models provide application-specific context to the problem by incorporating information derived from a training set consisting of instances of the problem along with the solution. In this thesis, we explore the benefits of statistical models for medical image registration and segmentation. We present a technique for computing the rigid registration of pairs of medical images of the same patient. The method models the expected joint intensity distribution of two images when correctly aligned. The registration of a novel set of images is performed by maximizing the log likelihood of the transformation, given the joint intensity model. Results aligning SPGR and dual-echo magnetic resonance scans demonstrate sub-voxel accuracy and a large region of convergence. A novel segmentation method is presented that incorporates prior statistical models of intensity, local curvature, and global shape to direct the segmentation toward a likely outcome. Existing segmentation algorithms generally fit into one of the following three categories: boundary localization, voxel classification, and atlas matching, each with different strengths and weaknesses. Our algorithm unifies these approaches. A higher dimensional surface is evolved based on local and global priors such that the zero level set converges on the object boundary. Results segmenting images of the corpus callosum, knee, and spine illustrate the strength and diversity of this approach.
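Schematically, and in notation chosen here for illustration (i1 and i2 for the two images, Omega for the voxel domain, T for the candidate rigid transformation, and p for the joint intensity model learned from correctly aligned training pairs), the registration step described above amounts to:

```latex
% Registration as maximum-likelihood alignment under a learned joint
% intensity model p; the notation (i_1, i_2, \Omega, T) is assumed here.
\hat{T} \;=\; \arg\max_{T} \sum_{x \in \Omega} \log p\bigl(i_1(x),\, i_2(T(x))\bigr)
```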
Design of interactive maps for ocean dynamics data
Comprehensive spatiotemporal modeling and forecasting systems for ocean dynamics necessitate robust and efficient data delivery and visualization techniques. The multi-disciplinary simulation, estimation, and assimilation systems group at MIT (MSEAS) focuses on capturing and predicting diverse ocean dynamics, including physics, acoustics, and biology on varied scales, thereby developing new methods for multi-resolution ocean prediction and analysis, including data generation and assimilation. The group has primarily used non-interactive ocean plots to visualize its simulated and measured data. Although these maps and sections allow for analysis of ocean physics and the underlying numerical schemes, more interactive maps provide more user control over depicted data, allowing easier study and pattern identification on multiple scales. Integrating static and geospatial data in dynamic visualization creates a heightened viewpoint for analysis, enhances ocean monitoring and prediction, and contributes to building scientific knowledge. This thesis focuses on explaining the motivation behind and the methodologies applied in designing these interactive maps.
Molecular modeling of hydrate-clathrates via ab initio, cell potential, and dynamic methods
High level ab initio quantum mechanical calculations were used to determine the intermolecular potential energy surface between argon and water, corrected for many-body interactions, to predict monovariant and invariant phase equilibria for the argon hydrate and mixed methane-argon hydrate systems. A consistent set of reference parameters for the van der Waals and Platteeuw model, ... and ..., was developed for Structure II hydrates and is not dependent on any fitted parameters. Our previous methane-water ab initio energy surface has been recast onto a site-site potential model that predicts guest occupancy experiments with improved accuracy compared to previous studies. This methane-water potential is verified via ab initio many-body calculations and thus should be generally applicable to dense methane-water systems. New reference parameters, ... and ..., for Structure I hydrates using the van der Waals and Platteeuw model were also determined. Equilibrium predictions for the mixed hydrate of argon and methane were made with an average absolute deviation of 3.4%. These accurate predictions of the mixed hydrate system provide an independent test of the accuracy of the intermolecular potentials.
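For reference, the standard form of the van der Waals and Platteeuw expression underlying these predictions is given below in notation assumed here: nu_i is the number of cavities of type i per water molecule, f_j the fugacity of guest j, and C_ij the (spherically averaged) Langmuir constant, which in this framework would be obtained by integrating the guest-water potential over the cavity.

```latex
% Standard van der Waals-Platteeuw chemical-potential difference between the
% empty hydrate lattice (beta) and the occupied hydrate (H), together with a
% spherically averaged Langmuir constant; notation chosen for illustration.
\Delta\mu_w^{\beta - H} \;=\; RT \sum_{i} \nu_i \ln\!\Bigl(1 + \sum_{j} C_{ij}\, f_j\Bigr),
\qquad
C_{ij} \;=\; \frac{4\pi}{k_B T} \int_0^{R_i} \exp\!\left(-\frac{w_{ij}(r)}{k_B T}\right) r^2\, dr
```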
Diagrams of affine permutations and their labellings
We study affine permutation diagrams and their labellings with positive integers. Balanced labellings of a Rothe diagram of a finite permutation were defined by Fomin-Greene-Reiner-Shimozono, and we extend this notion to affine permutations. The balanced labellings give a natural encoding of the reduced decompositions of affine permutations. We show that the sum of weight monomials of the column-strict balanced labellings is the affine Stanley symmetric function which plays an important role in the geometry of the affine Grassmannian. Furthermore, we define set-valued balanced labellings in which the labels are sets of positive integers, and we investigate the relations between set-valued balanced labellings and nilHecke words in the nilHecke algebra. A signed generating function of column-strict set-valued balanced labellings is shown to coincide with the affine stable Grothendieck polynomial which is related to the K-theory of the affine Grassmannian. Moreover, for finite permutations, we show that the usual Grothendieck polynomial of Lascoux-Schützenberger can be obtained by flagged column-strict set-valued balanced labellings. Using the theory of balanced labellings, we give a necessary and sufficient condition for a diagram to be a permutation diagram. An affine diagram is an affine permutation diagram if and only if it is North-West and admits a special content map. We also characterize and enumerate the patterns of permutation diagrams.
Automated trading informed by event driven data
Models of stock price prediction have traditionally used technical indicators alone to generate trading signals. In this paper, we build trading strategies by applying machine-learning techniques to both technical analysis indicators and market sentiment data. The resulting prediction models can be employed as an artificial trader used to trade on any given stock exchange. The performance of the model is evaluated using the S&P 500 index.
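As a hedged illustration of the general approach, and not the actual features, data, or model used in this work, the Python sketch below combines two common technical indicators with a sentiment score in a next-day direction classifier; the column names, indicator choices, and classifier are all assumptions.

```python
# Minimal sketch: technical indicators + a sentiment feature in a classifier
# for next-day direction. Feature set and model are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def build_features(prices: pd.Series, sentiment: pd.Series) -> pd.DataFrame:
    df = pd.DataFrame(index=prices.index)
    df["ret_1d"] = prices.pct_change()                                  # daily return
    df["ma_ratio"] = prices.rolling(5).mean() / prices.rolling(20).mean()
    delta = prices.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["rsi"] = 100 - 100 / (1 + gain / (loss + 1e-9))                  # 14-day RSI
    df["sentiment"] = sentiment                                          # external score
    df["target"] = (prices.shift(-1) > prices).astype(int)               # next-day up/down
    return df.dropna()

def train_and_signal(df: pd.DataFrame) -> pd.Series:
    features = ["ret_1d", "ma_ratio", "rsi", "sentiment"]
    split = int(0.7 * len(df))                                           # time-ordered split
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(df[features].iloc[:split], df["target"].iloc[:split])
    # 1 = long, 0 = flat, evaluated out of sample on the remaining 30%.
    return pd.Series(clf.predict(df[features].iloc[split:]),
                     index=df.index[split:], name="signal")
```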
Interactions of cadmium, zinc, and phosphorus in marine Synechococcus : field uptake, physiological and proteomic studies
A combination of uptake field studies on natural phytoplankton assemblages and laboratory proteomic and physiological experiments on cyanobacterial isolates were conducted to investigate the interactions of cadmium (Cd), zinc (Zn), and phosphorus (P) in marine Synechococcus. Enriched stable isotope field uptake studies of ¹¹⁰Cd in the Costa Rica upwelling dome, a Synechococcus feature, showed that uptake of Cd occurs in waters shallower than 40 m, correlates positively with chlorophyll a concentrations, and is roughly equivalent to the calculated upwelling flux of cadmium inside the dome. In laboratory experiments, Synechococcus WH5701 cells exposed to low picomolar quantities of free Cd under Zn deficiency show growth rates during exponential phase similar to those of treatments with no added Cd, but show differences in the relative abundances of many proteins involved in carbon and sulfur metabolism, suggesting a large metabolic impact. During stationary phase, chronic Cd exposure in this coastal isolate causes an increase in relative chlorophyll a fluorescence and faster mortality rates. The interactions of acute Cd exposure at low picomolar levels with Zn and phosphate (PO₄³⁻) were investigated in Synechococcus WH8102, an open ocean isolate. The presence of Zn appears vital to the response of the organism to different PO₄³⁻ concentrations. Comparisons with literature transcriptome analyses of PO₄³⁻ stress show similar increases in the relative abundance of PO₄³⁻ stress response proteins, including a PO₄³⁻-binding protein and a Zn-requiring alkaline phosphatase. A bacterial metallothionein, a Zn-associated protein, appears to be correlated with proteins present under low PO₄³⁻ conditions. Together, these experiments suggest that the interactions of Cd and Zn can affect Synechococcus and play a role in the acquisition of PO₄³⁻.
Framework for feminist technology intervention
This thesis describes a feminist framework for technological interventions. I first define the problem by contrasting studies from psychology with research from other social sciences to determine that the primary reason for the gender imbalance in technological spaces is based in hostile work environments and not in the fact that women are disinterested as recent psychological research claims. This lack of diversity affects how technology products are shaped and how consumers interact with these artefacts. I outline a techno-feminist approach to intervention by looking at legislative and technological interventions into tech workspaces. Because this thesis is concerned with creating a framework for interventions rather than an individual technology, I describe different collaboration and production models typical to contemporary technology. These models are Web 2.0, open source software production, and collaborative platforms for distributing physical technology objects. In order to find out how to build a technological framework for making technology spaces more equitable for women, I created two projects. The first one is a Web 2.0 platform that provides data about gender and the technology workspace as well as instructions for visualizing it. The second one is a collaboration on a feminist technology for the workplace. The conclusion of the thesis is a description of future work based on these two projects.
Evaluation of the economic feasibility of core-shell baroplastic polymers and a comparison to traditional thermoplastic elastomers
Baroplastic materials are pressure miscible systems that can be molded by the application of pressure at low/room temperature. They have the potential to replace traditional thermoplastic elastomers in many applications. To quantitatively determine the competitiveness of baroplastic materials in current markets, a detailed cost model was developed. Embedded in the cost model is a polymer flow model that predicts processing times as a function of processing pressure. The raw material cost of baroplastics was roughly estimated to input into the cost model. The results of the cost model show that baroplastics have a significant economic advantage over thermoplastic elastomers due, mostly, to the greatly reduced cycle times associated with processing baroplastic materials. Recommendations for future work include developing a more refined estimate of the raw material price of baroplastics as well as investigating the costs of more specific applications.
Architecture sandwiched : tuning anisotropy through variable thickness and heterogeneous laminar assemblies
Much of architecture's earliest material palettes and construction methods are often referred to today as legacy materials - those primarily consisting of various types of stone and masonry construction. While these materials are often conceptually thought of as solid, monolithic, and even homogeneous, in actuality they rely on logics of assembly more akin to contemporary sandwich structures: laminar assemblies typically composed of two or more stressed skins and either a solid or cellular core that binds them together. While it is still common to use ancient materials in contemporary architecture, the construction methods and techniques used several hundred years ago are no longer appropriate for today's buildings. This thesis, however, argues for a newfound relevance of their influence on contemporary and even future material selections and methods. Specifically, this thesis explores the potential of composite sandwiches varying in thickness and material in search of architectural possibilities whose structural, formal, and aesthetic implications result from tuning multiple influences. Variable thickness is used here as a strategy for enabling a range of architectural and tectonic conditions, all within the same heterogeneous but integrated laminar assemblies. While most commercial composite sandwich products are of uniform thickness in section, this thesis suggests a method for constructing sandwiched elements with variable thickness, primarily through a process of infill and backfill using expanding urethane foam, which creates the so-called "core" of the sandwich between two skins. This investigation works through a series of small-scale prototypes, each of which focuses on a particular tectonic, spatial, or structural condition. These mock-ups are meant to serve as didactic artifacts, providing feedback with which to incorporate and speculate upon larger architectural propositions through drawing and representation. The end result is a set of architectural proposals that suggest the beginnings of new design methodologies.
PROTOTOUCH : a system for prototyping ubiquitous computing environments mediated by touch
Computers as we know them are fading into the background. However, interaction modalities developed for "foreground" computer systems are beginning to seep into the day-to-day interactions that people have with each other and with the objects around them. The Prototouch system allows user interface designers to quickly prototype systems of ubiquitous computing devices that use the touch- and gesture-based interactions popularized by the recent explosion of multitouch-enabled smartphones, enabling the user to act as container, token, and tool for the manipulation and transportation of abstract digital information between these devices. Two versions of the system, using different network topologies, have been created, and several example applications built on the system have been developed.
Seismic and magnetic constraints on the structure of upper oceanic crust at fast and slow spreading ridges
The upper ocean crust contains a comprehensive record of the shallow geological processes active along the world's mid-ocean ridge system. This thesis examines the magnetic and seismic structure of the upper crust at two contrasting ridges, the fast spreading East Pacific Rise (EPR) and the slow spreading Mid-Atlantic Ridge (MAR), to build a more complete understanding of the roles of volcanic emplacement, tectonic disruption, and hydrothermal alteration in the near-ridge environment. A technique that inverts potential field measurements directly from an uneven observation track is developed and applied to near-bottom magnetic data from the spreading segments north of the Kane transform on the MAR. It is concluded that the central anomaly magnetization high marks the locus of focused volcanic emplacement. A cyclic faulting model is proposed to explain the oscillatory magnetization pattern associated with discrete blocks of crust being transported out of the rift valley between intensely altered fault zones. Seismic waveform and amplitude analyses of the magma sill along the EPR reveal it to be a thin (<100 m) body of partial melt. These characteristics have important implications for melt availability and transport within the cycle of eruption and replenishment. A genetic algorithm-based seismic waveform inversion technique is developed and applied to on- and near-axis multichannel data from 17°20'S on the EPR and the spreading segment south of the Oceanographer transform (MAR) to map and compare for the first time the detailed velocity structure of the upper crust at two different spreading rates. Combined with conventionally processed seismic profiles, the results show that, while the final extrusive thickness is comparable at all spreading ridges (300-500 m), the style of thickening may vary. While a thin (<100 m) extrusive carapace quadruples in thickness within 1-4 km of the EPR crest, the extrusive section at the MAR achieves its final thickness within the inner valley. Both show evidence for a narrow zone of volcanic emplacement. Vigorous hydrothermalism at the EPR may produce a more rapid increase in basement velocities relative to the MAR. Rapid modification of the extrusive/dike transition at both ridges indicates that hydrothermalism is enhanced in this interval. Along-axis transport of lavas may thicken the extrusive pile at slow spreading segment ends, strengthening the magnetic highs generated by lava chemistry.
A consumer guide to the benefits and obstacles of transitioning to the hydrogen fuel cell
Hydrogen fuel cells are a much-discussed technology, often represented as promising virtually unlimited amounts of non-polluting power by chemically reacting hydrogen, the most abundant element in the universe, with oxygen without combustion. Our analysis indicates that fuel cells are indeed a promising technology still under development. It also concludes that there are considerable problems to overcome before a widespread transition to hydrogen fuel cells occurs, including cost, infrastructure, performance, and, most importantly, generation of the hydrogen fuel itself. The infrastructure and hydrogen generation hurdles are large enough to require significant government intervention before renewable hydrogen resources displace fossil fuels. We believe that the transition to renewable hydrogen fuel sources and fuel cells is inevitable given diminishing, non-renewable fossil fuel reserves. We further believe that we are rapidly approaching the date by which fundamental energy policy changes must be made to enable a hydrogen economy. Disappointingly, there is little evidence that the U.S. government is prepared to make this decision in a timely manner.
The politics of proximity : local redistribution in developed democracies
Over the last few decades, countries across the European Economic Area (EEA) have granted local governments considerable discretion over social policy. This project examines the consequences of these reforms. Drawing on unique data from over 28,000 European local governments, it demonstrates that decentralization has not been accompanied by declining levels of provision, as predicted by extant theories, but rather by significant expansion in the scale and scope of redistributive activity. Explaining this puzzle, the dissertation argues that local government behavior is shaped by the 'politics of proximity', which provides clear incentives for incumbents to invest in redistributive policy for electoral gain. These hypotheses are tested across five empirical chapters, each of which leverages micro-level data, natural experiments, and speech evidence to explore this emerging form of redistributive politics.