Air pollution and early deaths in the United States : attribution of PM₂.₅ exposure to emissions species, time, location and sector
Combustion emissions constitute the largest source of anthropogenic emissions in the US. They degrade air quality and harm human health by contributing to the formation of fine particulate matter (PM2.5). Previous work computed the population PM2.5 exposure and number of early deaths caused by emissions from six major sectors: electric power generation, industry, commercial and residential activities, road transportation, marine transportation and rail transportation. In the present work we go beyond aggregate sectors and attribute exposure and early deaths to sectors, emissions species, time of emission, and location of emission. This enables determination of the emissions reductions that would have the greatest benefit by sector, species, time and location. We apply a long-term adjoint sensitivity analysis with population exposure to PM2.5 in the contiguous US as the objective function, and calculate the four-dimensional sensitivities (in time and space) of PM2.5 exposure with respect to each emissions species. Epidemiological evidence is used to relate increased population exposure to premature mortalities. This is the first regional application of the adjoint sensitivity analysis method to characterize long-term air pollution exposure. (A global-scale application has been undertaken related to intercontinental pollution.) We find that for the electric power generation sector 75% of the attributable PM2.5 exposure is due to SO2 emissions, and 80% of the annual impacts are attributed to emissions from April to September. This suggests that burning low-sulfur coal has the greatest benefit in the summer. In the road transportation sector, 29% of PM2.5 exposure is due to NOx emissions and 33% to ammonia (NH3), which is a result of emissions after-treatment technologies. We estimate that the benefit of reducing NH3 emissions from road transportation is ~20 times that of NOx per unit mass. 75% of the road transportation ammonia impacts occur during the months October to March. We rank the states based on their contribution to the overall combustion emissions-attributable PM2.5 exposure in the US, and calculate that California contributes 12%, Pennsylvania 7% and Ohio 5.8%. We publicly release the sensitivity matrices computed, noting their potential use as a rapid air quality policy assessment tool.
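The released sensitivity matrices support a simple first-order use: the change in population exposure from any emissions scenario is approximated by contracting the sensitivities with the emissions perturbation over species, month, and location. A minimal sketch of that contraction, in which the array names, shapes, units, and the linear mortality coefficient are illustrative assumptions rather than the released data format:

```python
import numpy as np

# Stand-ins for the released sensitivity matrices: d(population exposure)/d(emissions),
# indexed by (species, month, grid cell); shapes and units are assumed for illustration.
n_species, n_months, n_cells = 5, 12, 10_000
sens = np.random.rand(n_species, n_months, n_cells)

# An emissions perturbation on the same index set, e.g. a 10% cut everywhere.
delta_emis = -0.10 * np.random.rand(n_species, n_months, n_cells)

# First-order change in annual population-weighted PM2.5 exposure.
delta_exposure = np.tensordot(sens, delta_emis, axes=3)

# A linear concentration-response coefficient (early deaths per unit population
# exposure) is assumed here as a placeholder for the epidemiological relationship.
beta = 1.0e-4
delta_deaths = beta * delta_exposure
print(delta_exposure, delta_deaths)
```

Used this way, the matrices act as the rapid policy-assessment tool the abstract describes: no new chemical-transport simulation is needed to screen a proposed reduction by sector, species, month, or state.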
Staging construction
In a reality at once distant and imminent, the Lost Languages and Other Voices exhibit features stories of stone, tree, and jig. Suspended between a zero-waste utopia where out-of-commission buildings are efficiently stripped for parts, pulverized, and recast into new buildings and a preserved world where the size of climate-controlled wunderkammers gets ever larger, these material narratives pull one into perspectives vastly distinct from one's own. At times longer-lived, other times more slowly developed, and oftentimes involving subtle sensibilities, the tales of these matter characters underscore the point that mass can neither be created nor destroyed, although it may be rearranged in space, or its associations may be changed in form. This thesis proposes falsework as a support structure for architectural transformations that renders un-building far more kindred to unfurling than to demolishing. Designed as a process governed by both material and notional instructions, falsework selectively subtracts and reconfigures parts of built form to reveal indeterminate spaces that had always been (possible) there, thereby enabling reflective, mournful, or prospective activities. "Staging" refers to both the performance itself and the act of setting the stage for what comes next, prioritizing the procedure of construction over or adjacent to its resulting artifacts. This expanded notion of "construction" challenges the supremacy of architectural objects as well as the obsession with their creation and relative indifference towards their life and ultimate demise. In a world filled with perpetually moving matters, falsework keeps possibilities open, for things to collapse or for an eventual repair.
Mapping the core regulatory circuitry of embryonic stem cells
Embryonic stem (ES) cells are of tremendous biological interest because they have the capacity, termed pluripotency, to generate any cell type of the adult organism. Our lab is interested in understanding the genetic circuitry that governs pluripotency. For my thesis work I have contributed to a team effort to deduce the transcriptional regulatory circuitry of ES cells. This collaborative effort first sought to define the genes that are regulated by the key pluripotency regulators, Oct4, Sox2 and Nanog. We then determined the genes targeted by the Polycomb Repressive Complex in ES cells. These datasets allowed us to define the core transcriptional regulatory circuitry for these cells and demonstrated that pluripotency is mediated through the repression of developmental regulators. Finally, an effort to understand how Wnt signaling modifies this circuitry led to the discovery that the Wnt signaling component Tcf3 is a core component of the transcriptional regulatory circuitry and serves to repress the pluripotency regulators, contributing to the balance between pluripotency and differentiation.
An advanced driver warning framework incorporating educational warnings
Car accidents are a serious problem. The measures currently being taken are not very successful in preventing accidents. To reduce the number of accidents, driver support and warning systems are built. Part of their solution is the use of education, in the form of educational warning systems. However, issuing warnings might distract the driver from the driving task exactly when the stress level is high and immediate action is required. This work concentrates on educational warning systems in the framework of cars and driving. It proposes an innovative design that is demonstrated via a prototype of an educational warning system. One of the main objectives of the research presented here is to test whether delaying warnings and feedback (to prevent stress and distraction) improves the learning ability and the performance of drivers using them. Are delayed (educational) warnings superior to immediate warnings? Using the 300M IT Edition, an experiment to test the effects of delayed feedback on the learning process in two driving tasks was carried out. The findings showed significant evidence of better overall performance, a marginally significant improvement in task understanding, and some indication, although not significant, of faster and stronger improvement in task performance for the delayed feedback group. The main impact of the work is some evidence that delayed warnings in driver learning tasks are superior. More importantly, there is no evidence that they are inferior, which makes them preferable to immediate feedback that may distract the driver from the driving task.
Delivery of biomolecules into mammalian cells using anthrax toxin
The intracellular delivery of biomolecules into mammalian cells is a major challenge due to the plasma membrane, which acts as a barrier between the extracellular environment and intracellular components. Recently, a non-toxic delivery platform derived from anthrax lethal toxin has been developed to overcome this challenge for the delivery of biomolecules into the cytosol of mammalian cells. The PA/LFN delivery platform has been used to deliver over 30 known biomolecules of diverse sequences, structures, and functionalities. Collectively, these translocation studies have helped to elucidate the translocation mechanism and to probe intracellular biological processes. In this thesis, the PA/LFN delivery platform was used to analyze the delivery of assorted biomolecules through the PA pore. A facile, modular ligation strategy using sortase A was developed for the conjugation of biomolecules to LFN. The biomolecules for this analysis included antibody mimic proteins with defined sizes and secondary structures, mirror image peptides and proteins, polypeptides containing non-canonical amino acids or small molecule drugs, and cyclic peptides. Our translocation analyses have led to guidelines for translocation as well as insight into design parameters for the efficient delivery of new cargos. The PA/LFN delivery platform has also been used to translocate bioactive cargos for the disruption of intracellular protein-protein interactions (PPI). The translocation efficiency and bioactivity of a tandem monobody to Bcr-Abl, an affibody to hRaf-1, and a mirror image peptide to MDM2 were analyzed. Efficient translocation and disruption of the intended PPI in each case indicated that the delivery platform could be used to deliver bioactive cargos into cells for therapeutic utility. As an application of this technology, the PA/LFN delivery platform was employed to analyze the intracellular stability of mixed chirality proteins. One major factor that governs a protein's stability is the N-end rule, which states that the N-terminal residue of a protein impacts its intracellular stability through the ubiquitin (Ub)/proteasome system. Utilizing the PA/LFN delivery platform, the stability of proteins containing one N-terminal D-amino acid was analyzed. In contrast to N-terminal L-amino acids, each N-terminal D-amino acid abrogates protein degradation by the N-end rule pathway.
Free Research Extension to the iCampus-MIT Online Assessment Tool
MIT is currently developing a web-based service for the large-scale assessment of student writing (iMOAT.net). This service contains a database of useful data, particularly texts of student essays that should be available for research and collaboration purposes. In this thesis, I propose a high level design for an interface to the iMOAT system called FREiMOAT that will control access to this research data. This information has the potential to be used by both independent researchers as well as current users of the iMOAT system for self-evaluation and collaboration purposes. Current users of the iMOAT system (administrators at a number of schools around the country) have requested the ability to view each other's materials so they might improve upon their own assessments (e.g., SMALL UNIVERSITY would like to see how STATE COLLEGE is able to use the service on larger bodies of students). Independent researchers, on the other hand, might want access to the site for purposes of determining if students from different states perform differently on the same assessments. This interface is responsible for two main tasks: access control and maintaining data anonymity.
Percolation and homogenization theories for heterogeneous materials
Most materials produced by Nature and by human beings are heterogeneous. They contain domains of different states, structures, compositions, or material phases. How these different domains are distributed in space, or in other words, how they connect to one another, determines their macroscopic properties to a large degree, making the simple rule-of-mixtures ineffective in most cases. This thesis studies the macroscopic effective diffusion, diffusional creep, and elastic properties of heterogeneous grain boundary networks and composite solids, both theoretically and numerically, and explores the microstructure-property correlations focusing on the effects of microstructural connectivity (topology). We have found that the effects of connectivity can be effectively captured by a percolation threshold, a case-specific volume fraction at which the macroscopic effective property undergoes a critical transition, and a set of critical scaling exponents, which also reflect the universality class that the property belongs to. Using these percolation quantities together with the generalized effective medium theory, we are able to directly predict the effective diffusivity and effective diffusional creep viscosity of heterogeneous grain boundary networks to a fairly accurate degree. Diffusion in composite solids exhibits different percolation threshold and scaling behaviors due to interconnectivity at both edges and corners. Continuum elasticity suffers from this complexity as well, in addition to the complicating factor that each phase is always characterized by several independent elastic constants. These issues are each addressed in detail. In addition to studying all the above properties for a random distribution of grain boundaries or phases, we have also studied the effects of correlations in spatial distributions.
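For orientation, the percolation quantities named here are conventionally combined in two standard forms: a power-law scaling of the effective property near the threshold, and the generalized effective medium (GEM) interpolation that uses the same threshold and exponents over the full composition range. The symbols below are generic placeholders; the thesis's case-specific thresholds and exponents would take their place.

```latex
% Critical scaling of an effective transport property K_eff near the threshold p_c,
% with s and t the critical exponents below and above the transition:
K_{\mathrm{eff}} \sim (p_c - p)^{-s} \quad (p < p_c), \qquad
K_{\mathrm{eff}} \sim (p - p_c)^{t} \quad (p > p_c)

% Generalized effective medium (McLachlan-type) interpolation between a low-property
% phase K_l (volume fraction 1 - p) and a high-property phase K_h (volume fraction p):
(1 - p)\,\frac{K_l^{1/s} - K_{\mathrm{eff}}^{1/s}}{K_l^{1/s} + A\,K_{\mathrm{eff}}^{1/s}}
\;+\; p\,\frac{K_h^{1/t} - K_{\mathrm{eff}}^{1/t}}{K_h^{1/t} + A\,K_{\mathrm{eff}}^{1/t}} = 0,
\qquad A = \frac{1 - p_c}{p_c}
```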
Multi-stakeholder contribution to biotechnology environmental assessment
Environmental assessments, such as those for biotechnology applications, are typically conducted by small groups of expert assessors, but scholars and practitioners are increasingly interested in involving diverse stakeholders. In addition to other reasons for broader involvement, researchers have proposed that stakeholders could substantively aid assessment by (1) contributing system knowledge; (2) applying diverse conceptual models; (3) helping available knowledge keep pace with assessment needs; and (4) contributing based on their values, as do narrow expert assessors. Hypothesizing that these types of contribution, suggested theoretically or observed in single workshops, represent key sources of stakeholder contribution across processes, this study examines contribution in several diverse participant processes: an Environmental Protection Agency (EPA) workshop on testing schemes for some engineered microbes compared with another EPA office's testing requirements for other engineered microbes; an MIT-Wilson Center workshop series on synthetic biology environmental assessment research needs; and the Food and Drug Administration's engineered salmon environmental assessment along with diverse stakeholder comments and critiques. The study also identifies practical considerations for enabling multi-stakeholder contribution and applies lessons to broader societal processes. The study analyzes process documents, conversations with conveners and participants, and participant observation. It also reviews knowledge about biological processes representing important areas for assessment and research, discussing complexities of knowledge production and use for assessment. Stakeholders contributed in each of the four hypothesized ways across the cases, suggesting that diverse involvement could regularly contribute positively to assessment. Stakeholders also (5) challenged standard assessment approaches, challenges that could aid assessment as well. Practical considerations for enabling diverse participant contribution emerge from the cases: Process continuity over time; credible expectations of authority or influence in decision-making; and balance between predefined structure and flexibility and between technical tasks and enabling non-technical input may be key. Work developing approaches in these areas is needed, including on incorporating nontechnical inputs, on processes encompassing later assessment stages, on integrating diverse participant processes with governance, and on diverse involvement in other aspects of technology development and execution. Better and increased stakeholder involvement could, through substantive content and incorporation of values, enable science, technology development, and decision-making best to serve society.
Analysis of U.S. transportation research & development
Infrastructure systems are central to quality of life and economic competitiveness in nations worldwide, but daunting challenges stand in the way of providing systems capable of delivering needed infrastructure services. In the United States, the transportation system, which is widely considered to be the nation's largest infrastructure system, provides a case study of the complex investment, design, and operations-related problems of infrastructure service provision. An effective and efficient research and development (R&D) system is needed to support the search for solutions to these problems; the nation is served by a large and well-developed transportation R&D system, but given the magnitude of outstanding needs for new technologies, systems, and policies and the persistence of resource shortfalls, it is appropriate to re-examine all aspects of the transportation R&D enterprise in search of strategies for improving its performance. This thesis identifies and analyzes factors that influence the performance of the transportation R&D system and how it can respond to emerging infrastructure challenges. It first discusses categories and characteristics of infrastructure and seeks to place analysis of infrastructure systems, like the transportation system, in a broader socio-economic and environmental context. The thesis then outlines the basic composition of the transportation research and development system and explores the policy environment and critical issues that influence both transportation R&D challenges and the behavior of the system in response to those challenges. Data on transportation R&D expenditures, including longitudinal data for the sector as well as limited cross-sectoral comparisons to place it in context, are presented. Finally, an examination of issues (like coordination and integration) related to the structure of the R&D system is included to frame the prior analysis of expenditures within a broader range of potential strategies for improving the efficiency and effectiveness of the transportation R&D system.
The Chinatown stories : investigating water (in)justice through transmedia urban design in the L.A. River
Presenting a case study of the L.A. River, this thesis analyzes the L.A. River revitalization master plans from 1996 and the subsequent efforts by public and private entities to create "an equitable, natural river." It demonstrates that the current urban design framework neglects to take a fine-grained approach to distinctive river stretches and communities, lacks clear water justice objectives, and fails to adequately represent local stakeholders, and thus lacks the ability to actualize their vision. This thesis argues that the discipline and practice of urban design can use transmedia storytelling as a tool for power- and knowledge-sharing between urban designers and community members to achieve water justice objectives in the L.A. River. The thesis proposes a transmedia urban design method that incorporates transmedia stories and transmedia community engagement to inform the development of a just urban design.
A hair bundle proteomics approach to discovering actin regulatory proteins in inner ear stereocilia
Because there is little knowledge in the areas of stereocilia development, maintenance, and function in the hearing system, I decided to pursue a proteomics-based approach to discover proteins that play a role in stereocilia function. I employed a modified "twist-off" technique to isolate hair bundle proteins, and I developed a method to purify proteins and to process them for analysis using multi-dimensional protein identification technology (MudPIT). The MudPIT analysis yielded a substantial list of proteins. I verified the presence of 21 out of 34 (62%) existing proteins known to be present in stereocilia. This provided strong evidence that my proteomics approach was efficient in identifying hair bundle proteins. Next, I selected three proteins and localized them to murine cochlear stereocilia. StarD10, a putative phospholipid binding protein, was detectable along the shaft of stereocilia. Nebulin, a putative F-actin regulator, was located toward the base of stereocilia. Finally, twinfilin 2, a putative modulator of actin polymerization, was found at the tips of stereocilia. To probe the function of twinfilin 2, I localized the protein predominantly to the tips of shorter stereocilia, where it is up-regulated during the final phase of elongation. When overexpressed, twinfilin 2 causes a shortening of microvilli in LLC-PK1/CL4 cells and in native cochlear stereocilia. The main result of this thesis was determining the sub-cellular localization of three interesting proteins and functionally characterizing one protein. My thesis also confirmed the proteomics screen I developed as an efficient method for identifying proteins in stereocilia.
A lower crustal perspective on the stabilization and reactivation of continental lithosphere in the western Canadian shield
New geochronological, thermochronological, geological and isotopic data from an extensive (> 20,000 km²) exposure of high-pressure granulites (0.8 to > 1.5 GPa, > 750 °C) in the East Lake Athabasca region of the Snowbird tectonic zone provide important constraints on the stabilization, reactivation and exhumation of continental lithosphere in the western Canadian Shield. The exhumed lower crust of this craton comprises several disparate domains that preserve a complex record of tectonic, magmatic and metamorphic processes from formation to exhumation. U-Pb zircon geochronology documents two episodes of metamorphic zircon growth at 2.55 Ga and 1.9 Ga, linked with two high-pressure granulite facies assemblages preserved in Chipman domain mafic granulites. The intervening 650 m.y. of relative quiescence implies a period of lithospheric stability during which the granulites continued to reside in the deep crust. Disruption of the stable Archean craton at 1.9 Ga broadly coincides with the assembly of the Laurentian supercontinent. The correlation of 1.9 Ga mafic magmatism and metamorphism in the Chipman domain with contemporaneous mafic magmatism along > 1200 km strike-length of the Snowbird tectonic zone indicates that regional asthenospheric upwelling was an important aspect of this reactivation event.
Photonic quantum computers and communication systems
Quantum information processors have been proposed to solve classically intractable or unsolvable problems in computing, sensing, and secure communication. There has been growing interest in photonic implementations of quantum processors as they offer relatively long coherence lengths, precise state manipulation, and efficient measurement. In this thesis, we first present experimental techniques to generate on-chip, photonic quantum processors and then discuss protocols for fast and secure quantum communication. In particular, we describe how to combine the outputs of multiple stochastic single-photon sources using a photonic integrated circuit to generate an efficient source of single photons. We then show designs for silicon-based quantum photonic processors that can be programmed to implement a large class of existing quantum algorithms and can lead to quicker testing of new algorithms than was previously possible. We will then present the integration of large numbers of high-efficiency, low-timing-jitter single-photon detectors onto a silicon photonic integrated circuit. To conclude, we will present a quantum key distribution protocol that uses the robust temporal degree of freedom of entangled photons to enable fast, secure key exchange, as well as experimental results for implementing key distribution protocols using silicon photonic integrated circuits.
Narrative tactics for making other worlds possible
Be they childhood games of make-believe, sophisticated literary projects, or political inventions (a "Great America"), authors have taken advantage of a world-building imagination, creating their own worlds and theorizing about what they were doing. From the 1960s onwards, fictional worlds were studied from a philosophical point of view, using "possible worlds" theory and modal logic, which consider the ontological status of fictional worlds, the nature of their functioning, and their relationship with the actual world. These ideas have been combined with literary theory, setting the foundation for the study of imaginary worlds. Architects and urbanists have used facets of world-building arguably for as long as the disciplines have existed. Though modernity launched a highly conscious tradition of imagining worlds in literature and creative culture, it also stained imagination and dreaming with a connotation of frivolity and a wastefulness that was antithetical to modern projects of utility and rationality. In the latter half of the twentieth century there was an increase in the number of architects exploring the irrational and imaginative in defiance of the reign of rationalism. A chasm tore through the discipline: grounded and rational practitioners on one side and imaginative inventors of form, indulgently entrapped in their fantasies, on the other. World-builders have developed robust methods for producing visions for futures, pasts, and other worlds. A study of world-building and narrative methods and their possible application to architectural and urban design has remained largely unaddressed. This thesis proposes methods for design and tests these methods through a case study. The case study is the city of Boston in the year 2100, being changed by many factors, not least of which are the effects of sea level rise. A story has been authored, the world surrounding that story has been structured, and designs within that world have been represented. This thesis seeks to combine methods from storytelling, world-building, and scenario planning in order to allow imaginative explorations of, and design for, speculative environments, in response to, and preparation for, challenging situations. And, in the end, it seeks to provide tools to tell better stories and see better worlds.
Feasibility studies for quantum computation with spectral hole burning media
In this thesis I consider a scheme for quantum computation in which quantum bits (qubits) are stored in individual spectral holes of an inhomogeneously broadened medium, such as a cryogenically cooled crystal of Pr:Y2SiO5. Qubits are transferred between spectral holes by virtue of mutual coupling of the atoms to a single quantized cavity mode, which allows for easy implementation of two-bit gate operations. I show that laser-induced adiabatic passage can be used to transfer an arbitrary symmetric ground state coherence between two many-atom spectral holes. However, it is not clear how to construct entangled states of qubits which are represented by many atoms, and therefore we require that each spectral hole contain only a single atom. The many-atom coherence transfer is still useful for constructing N-photon Fock states in the cavity. The coherence transfer is susceptible to spontaneous emission and cavity decay; the latter is the dominant decay channel for Pr:YSO. I have shown that the coherence transfer can proceed in a cavity dark state which is invulnerable to cavity decay, at the cost of becoming especially susceptible to spontaneous emission, and vice versa for coherence transfer with an atomic dark state. We can achieve the strong atom-cavity coupling necessary for coherence transfer by using extremely high-finesse optical resonators and by reducing the cavity mode volume. The latter is achieved by either reducing the total cavity volume as with a microcavity, or by tightly focusing the mode to a small active volume as with a near-concentric cavity. I consider how the presence of multiple degenerate cavity modes affects the two-atom coherence transfer, and find that the transfer is only exact when both atoms couple to the same mode. For the prototype Pr:YSO material, using a tightly focused mode in a centimeter-length cavity, we can couple as many as 400 qubits with a ground state coherence lifetime of about 1 s, which would allow us to apply as many as 20 sequential gate operations.
Unprocessed rice husk ash as a partial replacement of cement for low-cost concrete
Cement is a very valuable commodity as it can be used to construct structurally sound buildings and infrastructure. However, in many developing countries cement is expensive due to the unavailability of local resources to produce enough cement in-country to meet the demand for this material, and therefore it has to be imported. In rice-producing countries rice husk ash, a material naturally high in silica, can be used as a supplementary cementitious material and can substitute a portion of Portland cement in concrete without sacrificing the compressive strength. This study investigates the use of Cambodian rice husk ash in 10, 20 and 30% replacements of Portland cement by mass in mortar, without optimization of the ash by controlled burning. Five ashes collected from different sources in Cambodia were assessed for their suitability for use in rural Cambodian construction via compression strength testing of 2" (50 mm) mortar cubes. A 20% replacement of unprocessed Cambodian rice husk ash was deemed appropriate for use in small-scale, rural structural applications. Low-tech methods of grinding the ash were also investigated and were found to drastically increase the compressive strength of RHA-cement mortars in comparison to mortars made with unground RHA.
Network architecture for a chemical sensor network
A real-time continuous chemical sensor network can obtain detailed data to analyze the dynamic behavior of water systems such as a lake. The behaviors of interest to us in Upper Mystic Lake are the effects of stratification on methane water chemistry and the results of water mixing between layers. To monitor the water chemistry, a network of three buoys is populated with various sensors. This paper covers the design and implementation of the network architecture for transmitting sensor data between buoys and a shore station. The buoys' sensors and construction are also described.
Robust, risk-sensitive, and data-driven control of Markov Decision Processes
Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, there are two major barriers that still hinder the applicability of MDPs to many more practical decision-making problems: * The decision maker is often lacking a reliable MDP model. Since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty. * The structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and we develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature.
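The rectangular worst-case formulation mentioned at the end has a compact dynamic-programming characterization; a sketch, assuming a discounted finite MDP with reward r, discount factor γ, and (s,a)-rectangular uncertainty sets P(s,a):

```latex
% Robust Bellman equation: the controller maximizes while nature picks the
% worst transition kernel from the uncertainty set of each state-action pair.
V^{*}(s) \;=\; \max_{a \in A} \; \min_{p \,\in\, \mathcal{P}(s,a)}
\Big[\, r(s,a) \;+\; \gamma \sum_{s'} p(s' \mid s, a)\, V^{*}(s') \,\Big]
```

The max-min structure is exactly the zero-sum game between the controller and nature referred to in the abstract.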
Analysis of ultra-narrow ferromagnetic domain walls
A mathematical analysis of ultra-narrow ferromagnetic domain walls was undertaken, with graphical plots coded in the programming language TrueBASIC. An intrinsic inter-atomic potential stemming from the breakdown of the continuum approximation of matter is calculated and its contribution to the coercive force of hard materials is depicted. The interaction of a very narrow domain wall with a similarly narrow planar defect is analyzed. Time-dependent motion of such walls is modeled for various external driving forces and in different combinations of material parameters. This work was completed in parallel with a study of narrow crystallographic magnetic discontinuities known as twin boundaries, and was designed to gain an intuition into the control of high-anisotropy magnetic recording devices. The equations developed here would be particularly useful as a basis for approaching the calculations of the stability of high-density storage media.
Reducibility and computational lower bounds for problems with planted sparse structure
Recently, research in unsupervised learning has gravitated towards exploring statistical-computational gaps induced by sparsity. A line of work initiated by Berthet and Rigollet has aimed to explain these gaps through reductions to conjecturally hard problems from complexity theory. However, the delicate nature of average-case reductions has limited the development of techniques and often led to weaker hardness results that only apply to algorithms that are robust to different noise distributions or that do not need to know the parameters of the problem. We introduce several new techniques to give a web of average-case reductions showing strong computational lower bounds based on the planted clique conjecture for planted independent set, planted dense subgraph, biclustering, sparse rank-1 submatrix, sparse PCA and the subgraph stochastic block model. Our results demonstrate that, despite the delicate nature of average-case reductions, using natural problems as intermediates can often be beneficial, as is the case in worst-case complexity. Our main technical contribution is to introduce a set of techniques for average-case reductions that: (1) maintain the level of signal in an instance of a problem; (2) alter its planted structure; and (3) map two initial high-dimensional distributions simultaneously to two target distributions approximately under total variation. We also give algorithms matching our lower bounds and identify the information-theoretic limits of the models we consider.
Management of a high mix production system with interdependent demands : modeling of stochastic demands and the concept of virtual profit as a decomposition tool
An optimized framework for the inventory control of a high mix production system has been designed in order to guarantee the optimal mix of items in stock in the presence of correlated demands. The Virtual Profit concept was developed to measure the criticality of an item in the presence of correlated demands. The introduction of the Virtual Profit in the optimization problem allowed the problem to be decomposed and the optimal control parameters to be computed separately. Demands were modeled based on the stochastic properties of the historical demand so that simulations could be performed using statistically generated orders. The simulations provided a validation of the proposed technique, showing that, with the same size of inventory, considering the Virtual Profits instead of the real profits improves the quality of the solution, especially when low levels of inventory are kept.
Using mass media to bring engineering principles to young audiences to inspire interest and pursuit of future engineering or technologically based careers
In the progression of this thesis document, an idea for an episode of an educational and interactive television show has been explored and developed. The direction of this episode will fit into the aforementioned educational television show format (which will be further described and discussed in subsequent sections of this document). For our particular episode, the focus audience and main target demographic are young, middle-school-aged girls. The theme of the show has to do with cooking a familiar and typically well-liked (by children) food by using an alternative energy source and engineering design principles. In this show, it is our goal that both the players and the viewing audience learn about the engineering concepts involved with basic optics and solar energy. In our investigation, a theme for the episode has been developed, and a sample solution has been worked out and tested. Based on the results of this trial run, suggestions and conclusions have been made regarding the future directions for this project.
Best practices for venture philanthropy collaborations between disease-focused foundations and for-profit life science companies
The history of private philanthropy in the US has been dominated by family foundations with arms-length philanthropy practices that largely existed in separation from commercial enterprise and business operations. This paper looks at emerging organizational and funding models being used in a wide range of disease areas in which philanthropy has shifted towards a more "venture-oriented" model, sometimes referred to as disease foundation venture philanthropy (DFVP), as practiced by disease-focused foundations (DFFs). More specifically, this research seeks to understand how these models map onto the range of translational challenges confronted by those engaged in bringing ideas from the bench to the bedside, and it explores our current understanding of DFVP best practices. It concludes by raising questions and addressing issues designed to assist those who seek to set up successful collaborations between DFFs and industry partners.
Electric vehicle technology in Kathmandu, Nepal : a closer look at development
Electric vehicle (EV) development in the Kathmandu Valley began in 1993 as a response to the urgency of a severe air pollution situation. The dynamics of government intervention, non-governmental organization advocacy, international donor support, and private sector involvement all shaped EV implementation in various ways. Its success led other South and East Asian cities to view it as a model for implementing EVs to alleviate air pollution. Yet despite a promising beginning and intensive proliferation, the EV industry was failing only six years after its inception. What went wrong with a development that seemed to have all the makings of success? This thesis outlines the EV development trajectory and examines the principal factors that impeded progress. Interviews with over 30 individuals in the electric vehicle industry, government agencies, NGOs, and international donor organizations provided me with first-hand accounts of the puzzles of EV development. Also, my research in published and unpublished documents, local press coverage, and an EV news server added rich material for analysis. The most entrenched barriers to the implementation of the EV industry have been the disparate interests and goals of stakeholders, in particular the resistance and hostility of fossil-fuel interests, and deficiencies in human resources and support networks. Analysis of these impediments yields lessons on how EV advocates can overcome these obstacles. Lessons learned in this thesis are that EV advocates must build a coalition of supportive actors, seek governmental commitment for EV-supportive policies, work to align the disparate economic goals of private actors, and develop a capacity for training and education.
Parallel implementations of dynamic traffic assignment models and algorithms for dynamic shortest path problems
This thesis aims at the development of faster Dynamic Traffic Assignment (DTA) models to meet the computational efficiency required by real world applications. A DTA model can be decomposed into several sub-models, of which the most time-consuming ones are the dynamic network loading model and the user's route choice model. We apply parallel computing technology to the dynamic network loading model to achieve faster implementations. To the best of our knowledge, these are the first parallel implementations of macroscopic DTA models. Two loading algorithms are studied: the iterative loading algorithm and the chronological loading algorithm. For the iterative loading algorithm, two parallelization strategies are implemented: decomposition by network topology and by time. For the chronological loading algorithm, the network topology decomposition strategy is implemented. Computational tests are carried out in a distributed-memory environment. Satisfactory speedups are achieved. We design efficient shortest path algorithms to speed up the user's route choice model. We first present a framework for static shortest path algorithms, which prioritize nodes with optimal distance labels in the scan eligible list. Then we apply the framework in dynamic FIFO, strict FIFO, and static networks. Computational tests show significant speedups. We proceed to present two other shortest path algorithms: Algorithm Delta and Algorithm Hierarchy. We also provide evaluations of the algorithms.
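As a point of reference for the framework described above (prioritizing nodes whose distance labels are already optimal in the scan eligible list), here is a minimal label-setting shortest-path sketch for a static network; the heap-based realization is an illustrative assumption and does not reproduce the thesis's dynamic FIFO variants or Algorithms Delta and Hierarchy.

```python
import heapq

def label_setting_shortest_paths(graph, source):
    """Label-setting shortest paths on a static network.

    graph: dict mapping node -> list of (neighbor, nonnegative arc cost) pairs.
    Returns a dict of optimal distance labels for nodes reachable from source.
    """
    dist = {source: 0.0}
    scan_eligible = [(0.0, source)]          # tentative labels, keyed by distance
    finalized = set()                        # nodes whose labels are provably optimal
    while scan_eligible:
        d, u = heapq.heappop(scan_eligible)  # the minimum tentative label is optimal
        if u in finalized:
            continue
        finalized.add(u)
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(scan_eligible, (nd, v))
    return dist

# Example on a four-node network.
network = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0), ("d", 4.0)], "c": [("d", 1.0)], "d": []}
print(label_setting_shortest_paths(network, "a"))  # {'a': 0.0, 'b': 2.0, 'c': 3.0, 'd': 4.0}
```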
Investigation of relaxor ferroelectrics
This thesis develops phonon-polariton based THz spectroscopy and uses this technique to make the first THz frequency dielectric measurements of a relaxor ferroelectric crystal, in particular KTa0.982Nb0.018O3 (KTN 1.8). THz spectroscopy has emerged as an important probe for a wide variety of systems with the development of pulsed THz radiation sources and time-domain detection methods. Four factors motivate the use of phonon-polaritons generated in an ionic crystal (typically LiNbO3 or LiTaO3) via impulsive stimulated Raman scattering as a THz source for spectroscopy: (1) the versatility of phonon-polariton waveform shaping and detection, (2) the ability to use the ionic crystal as a compact, integrated spectroscopic platform, (3) the high THz refractive index of the host material, which facilitates coupling of THz radiation into high-dielectric samples, and (4) the potential to generate large amplitude polariton fields for nonlinear THz spectroscopy. Here we demonstrate both reflection and transmission implementations of THz spectrometers based on grating interferometric measurement of the phase and amplitude of a phonon-polariton waveform before and after interaction with a sample.
Overcoming barriers to participation in training : lessons from the home health care workers of 1199/SEIU, New York's Health and Human Services Union
This thesis explores the barriers to participation in the 1199 Home Care Industry Bill Michelson Education Fund (Home Care Education Fund). The Home Care Education Fund is structured as a Taft-Hartley, joint labor-management training fund to provide skills upgrading opportunities to unionized home care workers. It is the only such fund in the United States devoted exclusively to home care workers. Home care is a growing sector of the health care industry, and home attendants and home health aides are projected to be among the fastest-growing occupations in the following decade, according to the Bureau of Labor Statistics. Home care workers are also some of the most economically disadvantaged workers in the health care sector, earning poverty-level wages and, with the exception of 1199/SEIU members, lacking health insurance and pension benefits. Three sets of stakeholder groups were interviewed for this thesis: home care workers, who participated in a series of focus group meetings and personal interviews; home care agency employers; and Home Care Education Fund and ETJSP staff members. A written survey instrument was administered to home care agency employers regarding their staffing levels and training benefits to supplement personal interviews. Each group articulated a coherent set of barriers facing home care workers, with unique challenges facing the agency employers and Education Fund staff in meeting the workers' needs. It is argued that shared interests bind these groups together and that a considerable overlap exists between the provision of quality medical care, welfare and job training policies. Further, there is an urgent need to support a frontline, marginalized workforce that is caring for thousands of disabled and elderly clients on a daily basis. The ultimate goal of this thesis is to identify those key barriers that prevent participation in the Home Care Education Fund so that staff and trustees may work together to tailor their services to meet workers' unique needs. It concludes with supporting recommendations for workforce development policy.
A room temperature optomechanical squeezer
Decades of advancement in technologies pertaining to interferometric measurements have made it possible for us to make the first ever direct observation of gravitational waves (GWs). These GWs, emitted from violent events in the distant universe, bring us crucial information about the nature of matter and gravity. In order for us to be able to detect GWs from even farther or weaker sources, we must further reduce the noise in our detectors. One of the noise sources that currently limits GW detectors comes from the fundamental nature of measurement itself. When a certain measurement reaches very high precision, the Heisenberg uncertainty principle comes into play. In GW detectors, this uncertainty manifests itself in the quantum nature of the light. Due to its quantum nature, light (or the electromagnetic field) has an uncertain amplitude and phase.
Investigation of electromagnetic welding
We propose several methodologies to study and optimize the electromagnetic process for Electromagnetic Forming (EMF) and Welding (EMW), thereby lowering the necessary process energy by up to a factor of three and lengthening the lifetime of EMW compression coils. We present a new theoretical approach to calculate a so-called critical kinetic energy to achieve a proper EMW joint, which is related to the volume of the accelerated mass and the Vickers hardness of the material. Using this novel approach, welding windows for several materials are presented. Studying the circuit theory, the current discharge pulse can be optimized to the needs of the EMW process when opting for a critically damped RLC circuit. We present MultiSIM and MATLAB models that prove the proposed optimization and reflect the experimental EMW setup and parameters. Using the models, unknown parameters, such as machine inductance and resistance, can be extrapolated for EMF and EMW machinery. Furthermore, the MATLAB model can calculate the optimal gap between the outer and inner workpiece for the outer workpiece to reach the maximum velocity at impact. Good correlation was found with the high-speed videography used to study the EMF process in further detail, measuring velocities between 50 m/s and 100 m/s. Studying the mechanical properties of the outer workpiece, we propose an EMF-EMW setup that would decrease the strength of the outer workpiece by introducing a controlled amount of wrinkles through an EMF step with a mandrel inside the outer workpiece, followed by a lower critical energy EMW step.
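The critically damped discharge condition invoked above follows directly from the series RLC relations; a sketch with generic symbols (not the measured machine parameters):

```latex
% Series RLC discharge: damping coefficient and undamped natural frequency
\alpha = \frac{R}{2L}, \qquad \omega_0 = \frac{1}{\sqrt{LC}}

% Critical damping occurs when \alpha = \omega_0, i.e. when the total circuit
% resistance satisfies
R_{\mathrm{crit}} = 2\sqrt{\frac{L}{C}}
```

Choosing the discharge circuit so that R approaches this value gives the fastest current rise without oscillatory ringing, which is what the proposed pulse optimization exploits.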
Phase transformations and microstructural design of lithiated metal anodes for lithium-ion rechargeable batteries
There has been great recent interest in lithium storage at the anode of Li-ion rechargeable batteries by alloying with metals such as Al, Sn, and Sb, or metalloids such as Si, as an alternative to the intercalation of graphite. This is due to the intrinsically high gravimetric and volumetric energy densities of this type of anode, which can be over an order of magnitude higher than that of graphite. However, the Achilles' heel of these Li-Me alloys has been their poor cyclability, attributed to mechanical failure resulting from the large volume changes accompanying alloying. Me-oxides, explored as candidates for anode materials because of their higher cyclability relative to pure Me, suffer from the problem of first cycle irreversibility. In both these types of systems, much experimental and empirical data have been provided in the literature on a largely comparative basis (i.e. investigations comparing the anode behavior of some new material with older candidates). It is the belief of the author that, in order to successfully proceed with the development of better anode materials, and the subsequent design and production of batteries with better intrinsic energy densities, a fundamental understanding of the relationship between the science and engineering of anode materials must be achieved, via a systematic and quantitative investigation of a variety of materials under a number of experimental conditions. In this thesis, the effects of composition and processing on the microstructure and subsequent electrochemical behavior of anodes for Li-ion rechargeable batteries were investigated, using a number of approaches.
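The order-of-magnitude comparison with graphite can be made concrete with the standard theoretical gravimetric capacity relation; the worked numbers below use textbook values and are an illustration rather than data from the thesis:

```latex
% Theoretical gravimetric capacity of a host material (mAh per gram of host),
% with n the moles of Li stored per formula unit, F Faraday's constant
% (96485 C/mol), and M the molar mass of the host in g/mol:
Q = \frac{n\,F}{3.6\,M}

% Graphite, LiC6 (n = 1, M = 72.07 g/mol):        Q \approx 372 \ \mathrm{mAh/g}
% Silicon, Li_{3.75}Si (n = 3.75, M = 28.09):     Q \approx 3580 \ \mathrm{mAh/g}
```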
Modulation of innate immune signaling pathways by the intracellular pathogen Toxoplasma gondii
Toxoplasma gondii, an obligate intracellular protozoan parasite, is one of the most successful eukaryotic pathogens. It can infect virtually any warm-blooded animal, including humans, in whom it can cause serious disease. Its success is likely due to its ability to modulate host immune responses and host innate immune signaling pathways, allowing it to establish a chronic infection with few symptoms in its hosts, which favors transmission to new hosts. Here, we report that Toxoplasma activates NF-κB and inhibits STAT1 signaling pathways to promote both its own survival and the survival of its host. We identified GRA15, a novel Toxoplasma secreted factor that activates the host cell NF-κB pathway. GRA15 is polymorphic between Toxoplasma strains and only active in the type II clonal lineage. GRA15 expression increases host pro-inflammatory cytokine production in vivo, thereby helping the host to control parasite growth. Conversely, Toxoplasma infection dampens the activation of other immune responses by inhibiting IFN-γ and STAT1 signaling. All of the Toxoplasma strains that we have tested directly inhibit the activity of STAT1, the transcription factor through which IFN-γ signals. We found that infection does not inhibit STAT1 phosphorylation, dimerization, nuclear translocation, or DNA binding. Instead, Toxoplasma must act even farther downstream, perhaps by inhibiting the recruitment of co-activators or RNA polymerase. Infection actually increased the association of STAT1 with DNA, which has been shown previously to be associated with decreased STAT1 transcriptional activity. The Toxoplasma effector that inhibits STAT1 remains unknown, but our results suggest that it is not secreted into the host cell upon invasion but must interface with its cellular target after the parasitophorous vacuole is formed. A deeper knowledge of how and why Toxoplasma modulates these processes will help us to understand more about the basic signaling pathways themselves and to discover clues on how to better treat Toxoplasma infections in humans.
Building representations from natural language
In this thesis, I describe a system I built that produces instantiated representations from descriptions embedded in natural language. For example, in the sentence 'The girl walked to the table', my system produces a description of movement along a path (the girl moves on a path to the table), instantiating a general purpose trajectory representation that models movement along a path. I demonstrate that descriptions found by my system enable the imagining of an entire inner world, transforming sentences into three-dimensional graphical descriptions of action. By building action descriptions from ordinary language, I illustrate the gains we can make by exploiting the connection between language and thought. I assert that a small set of simple representations should be able to provide powerful coverage of human expression through natural language. In particular, I examine the sorts of representations that are common in the Wall Street Journal from the Penn Treebank, providing a counterpoint for the many other sorts of analyses of the Penn Treebank in other work. Then, I turn to recognized experts in provoking our imaginations with words, using my system to examine the work of four great authors to uncover commonalities and differences in their styles from the perspective of the way they make representational choices in their work.
Energy consumption and smart growth in Massachusetts : does smart growth make a difference?
With the environmental crisis involving climate change fast approaching, all potential mitigation techniques must be explored and implemented. A key approach comes from the power towns and cities have to influence land use and building standards in their jurisdiction. This thesis uses a scenario planning approach to explore the energy implications of four potential futures for the town of Littleton, MA. Four scenario storylines (Business as Usual (BAU), Baby Steps (high residential density, no mixing), Mixed Use Village (higher residential density, mixed uses), and Thoroughly Green (similar to Mixed Use Village with added green building requirements)) were used to frame the potential outcomes. Typical development typologies from nearby Massachusetts towns served as proxies for the scenarios. Using an elasticity method based on the density, diversity and design of the typologies, the reduction in Vehicle Miles Traveled (VMT) for each alternative scenario as compared to the BAU scenario was calculated and used to determine the reduction in gasoline usage and CO2 emissions. Local and regional average electricity and heating values were used to calculate the home energy consumption for each scenario.
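A minimal sketch of the elasticity calculation described above, in which percentage changes in the typologies' density, diversity, and design measures relative to BAU are weighted by VMT elasticities and converted to gasoline and CO2 savings; the elasticity values, fuel economy, and emission factor below are illustrative placeholders, not the thesis's inputs.

```python
# Placeholder VMT elasticities with respect to the "D" variables (illustrative only).
ELASTICITIES = {"density": -0.04, "diversity": -0.09, "design": -0.12}

def scenario_savings(bau, scenario, bau_vmt, mpg=22.0, kg_co2_per_gal=8.9):
    """Estimate annual VMT, gasoline, and CO2 reductions of a scenario relative to BAU.

    bau, scenario: dicts of the D-variable measures (same keys as ELASTICITIES).
    bau_vmt: annual vehicle miles traveled under the BAU scenario.
    """
    pct_vmt_change = sum(
        ELASTICITIES[k] * (scenario[k] - bau[k]) / bau[k] for k in ELASTICITIES
    )
    vmt_saved = -pct_vmt_change * bau_vmt          # positive when VMT falls
    gallons_saved = vmt_saved / mpg
    co2_saved_kg = gallons_saved * kg_co2_per_gal
    return vmt_saved, gallons_saved, co2_saved_kg

# Example with made-up per-typology measures.
bau = {"density": 1.5, "diversity": 0.2, "design": 30.0}
mixed_use_village = {"density": 6.0, "diversity": 0.5, "design": 60.0}
print(scenario_savings(bau, mixed_use_village, bau_vmt=50_000_000))
```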
Rescuing endangered knowledge : a systems approach
This research involves the identification and definition of "Endangered Knowledge" and outlines a tool that a firm can use to identify, capture, and reutilize endangered knowledge. Endangered knowledge (EK) is valuable knowledge firms acquire during product development that has a high potential to be erased from a firm's memory. Two primary factors contribute to endangered knowledge. First, the firm does not believe the knowledge has future value, or does not take the time to correctly assess the value of the knowledge. Product development teams are usually under a great deal of time and financial pressure, and once a particular piece of knowledge has been acquired and applied to a specific process, it is quickly discarded. Second, an individual in a firm may realize that a piece of knowledge could have value to their team or another team in the future, but have no system in place that will enable them to effectively store and communicate that knowledge. In both cases, the knowledge is lost, ultimately costing the firm time and money to replace the lost learning. This paper can be broken up into four sections. The first section includes an introduction to endangered knowledge and provides two case studies where different product development teams wasted time and money because they were unable to access knowledge acquired by other members in their firm. The second section defines the terminology (knowledge vs. information, learning vs. teaching, transfer vs. transform) and highlights knowledge management (KM) initiatives in existence today. The third section outlines five essential steps a knowledge management system must address in order to be effective. The final section introduces a new methodology product development teams can use to capture and reuse, or "rescue," endangered knowledge.
Computer model for acoustic propagation around conical seamounts
This paper demonstrates a technique for computing the long-range sound pressure field around a penetrable conical seamount. The pressure field is generated by a harmonic point source. The seamount is positioned in a vertically stratified ocean. It is modeled as an outgrowth of the sediment layer covering the ocean bottom. First, the seamount is decomposed into superposed rings of diameters increasing with depth. Thus the problem reduces to a cylindrically layered system. Then, the method of normal modes is used to compute the sound pressure field in each layer. In order to maintain numerical stability, the Direct Global Matrix approach is used. The radial eigenfunctions are expressed as functions of normalized Hankel and Bessel functions, and the linear system that arises is organized in an unconditionally stable matrix. The results show a perturbation zone behind the seamount. It is bounded by two lines going from the source and tangent to the ring that is at the depth of the source. The values of the sound pressure inside the perturbation zone can be higher or lower than the values outside of it, according to the dimensions of the seamount.
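For context, the normal-mode machinery referenced here reduces, in a purely stratified (range-independent) waveguide, to the familiar modal sum below; in the cylindrically layered seamount problem the single outgoing Hankel function is replaced by matched combinations of normalized Hankel and Bessel functions in each ring. The form shown is the standard textbook expansion, quoted for orientation with generic symbols.

```latex
% Normal-mode expansion of the pressure field from a harmonic point source at
% depth z_s in a stratified waveguide, with \Psi_m the mode depth functions and
% k_{rm} the horizontal wavenumbers:
p(r, z) \;\approx\; \frac{i}{4\,\rho(z_s)} \sum_{m=1}^{M}
\Psi_m(z_s)\,\Psi_m(z)\, H_0^{(1)}(k_{rm}\, r)
```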
Point by point gain calibration of the DMTPC
Since 1975 a growing body of astronomical evidence has given increasing credibility to the existence of dark matter. Once a simple proposition by Fritz Zwicky to explain discrepancies in the virial motion of galaxy clusters, dark matter can now explain galactic rotation curves, the hierarchical structure of the universe, and the gravitational lensing of the bullet cluster. Nonetheless, the exact particulate nature of dark matter remains a mystery. The Dark Matter Time Projection Chamber (DMTPC) is a directional detection experiment that will be able to measure the energy, length, and direction of nuclear recoil tracks induced by incoming weakly interacting massive particles (WIMPs). In this thesis I analyze nonuniformities in the CCD images used to record the nuclear recoils. I then identify the source of the nonuniformities and describe a method for calibrating the CCD images. The method improves the energy resolution by decreasing the uncertainty by a factor of 4 for the bottom time projection chamber (TPC) and a factor of 3 for the top TPC.
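A minimal sketch of the kind of point-by-point (per-pixel) gain correction described here, in which a unity-mean gain map built from uniformly illuminated frames divides each dark-subtracted science image; the frame handling below is an illustrative assumption, not the DMTPC pipeline.

```python
import numpy as np

def build_gain_map(flat_frames, dark_frames):
    """Construct a normalized per-pixel gain map from flat-field and dark exposures."""
    dark = np.median(np.stack(dark_frames), axis=0)
    flat = np.median(np.stack(flat_frames), axis=0) - dark
    return flat / flat.mean()                      # unity-mean gain map

def calibrate(image, gain_map, dark):
    """Apply dark subtraction and point-by-point gain correction to a CCD image."""
    return (image - dark) / gain_map

# Example with small synthetic frames: a smooth gain gradient imposed on flat fields.
rng = np.random.default_rng(0)
gradient = np.linspace(0.9, 1.1, 16).reshape(4, 4)
darks = [rng.normal(100.0, 1.0, (4, 4)) for _ in range(5)]
flats = [rng.normal(5000.0, 50.0, (4, 4)) * gradient for _ in range(5)]
gain = build_gain_map(flats, darks)
science = rng.normal(100.0, 1.0, (4, 4)) + 300.0 * gradient
print(calibrate(science, gain, np.median(np.stack(darks), axis=0)))
```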
Development of a pointing, acquisition, and tracking system for a nanosatellite laser communications module
Launch opportunities for small satellites are rapidly growing and their technical capabilities are improving. Several commercial constellations of small satellites for Earth imaging and scientific observation are making their way onto orbit, increasing the need for high bandwidth data downlink. Obtaining regulatory licensing for current radio frequency (RF) communications systems is difficult, and state of the art nanosatellite RF systems struggle to keep up with the higher demand. Laser communications (lasercom) has the potential to achieve high bandwidth with a reduction in power and size compared to RF, while simultaneously avoiding the significant regulatory burden of RF spectrum allocation. Due to narrow beamwidths, the primary challenge of lasercom is the high-precision pointing required to align the transmitter and receiver. While lasercom has been successfully demonstrated on multiple spacecraft platforms, it has not yet been demonstrated on a scale small enough to meet the size, weight, and power constraints for nanosatellites. The Nanosatellite Optical Downlink Experiment (NODE) developed at MIT is designed to achieve a lasercom downlink of 10 to 100 Mbps within the constraints of a typical 3-U CubeSat. This thesis focuses on the development of the pointing, acquisition, and tracking system for NODE. The key to achieving a high bandwidth downlink is to bridge the gap between existing CubeSat attitude determination and control capabilities and the narrow beamwidths of lasercom. We present a two-stage pointing control system to achieve this. An uplink beacon and detector provide fine attitude feedback to enable precision pointing, and CubeSat body pointing is augmented with a fine steering mechanism. The architecture of the pointing, acquisition, and tracking system is presented, followed by the in-depth design and hardware selection. A detailed simulation of the ground tracking performance is developed, including novel on-orbit calibration algorithms to eliminate misalignment between the transmitter and receiver. A testbed is developed to characterize the selected fine steering mechanism for performance and thermal stability. The proposed system is capable of achieving at least two orders of magnitude better pointing than existing CubeSats to enable high bandwidth nanosatellite downlinks.
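A toy sketch of the two-stage pointing idea (coarse body pointing plus a fine steering stage closing the loop on a beacon measurement); all gains, noise levels, and limits below are invented for illustration and are not NODE parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 1000
body, fine = 5e-3, 0.0          # initial coarse error and fine-stage deflection [rad]
k_body, k_fine = 0.05, 0.5      # proportional gains (assumed)
fine_limit = 2e-3               # fine-stage stroke limit [rad] (assumed)
history = []
for _ in range(n_steps):
    los_error = body + fine                      # residual line-of-sight error
    sensed = los_error + rng.normal(0.0, 5e-6)   # beacon detector noise (assumed)
    body -= k_body * sensed                      # slow coarse (body) correction
    fine = float(np.clip(fine - k_fine * sensed, -fine_limit, fine_limit))
    history.append(los_error)
rms_residual = np.sqrt(np.mean(np.square(history[-200:])))
```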
Study of SACD in Portugal
The challenges brought on by the increasing complexity of electronic products, and the criticality of the materials these devices contain, present an opportunity for maximizing the economic and societal benefits derived from recovery and recycling. Small appliances and computer devices (SACD), including mobile phones, contain significant amounts of precious metals including gold and platinum, the present value of which should serve as a key economic driver for many recycling decisions. However, a detailed analysis is required to estimate the economic value that is unrealized by incomplete recovery of these and other materials, and to ascertain how such value could be reinvested to improve recovery processes. I present a dynamic product flow analysis (dPFA) for SACD throughout Portugal, a European Union member, including annual data detailing product sales and industrial-scale preprocessing data for recovery of specific materials from devices. I employ preprocessing facility and metals pricing data to identify losses, and develop an economic framework around the value of recycling including uncertainty. I show that significant economic losses occur during preprocessing (over $70M USD unrecovered in computers and mobile phones, 2006-2014) due to operations that fail to target high value materials, and characterize preprocessing operations according to material recovery and total costs. Finally, I present market level, operational, and policy recommendations aimed at capturing the unrecovered economic value identified in the Portuguese WEEE recycling system.
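A minimal arithmetic sketch of how unrecovered value in one material stream might be estimated (contained mass times the unrecovered fraction times price); the masses, recovery rates, and prices below are placeholders, not the Portuguese data.

```python
def unrecovered_value(mass_kg, content_g_per_kg, recovery_rate, price_usd_per_g):
    """Economic value left unrecovered in one material stream."""
    contained_g = mass_kg * content_g_per_kg
    return contained_g * (1.0 - recovery_rate) * price_usd_per_g

# Placeholder example: gold contained in a hypothetical collected-device flow
value_lost = unrecovered_value(mass_kg=1.0e6, content_g_per_kg=0.2,
                               recovery_rate=0.75, price_usd_per_g=40.0)
print(f"Unrecovered value: ${value_lost:,.0f}")   # $2,000,000 for these inputs
```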
The synthesis and investigation of the electronic properties of crown ether, [2]-catenane, and [2]-rotaxane architectures
Scope. The body of work described in this thesis focuses on the synthesis of donor-acceptor architectures of the pseudorotaxane, rotaxane, and catenane genres. The binding constants of thiophene and phenylene-ethynylene based crown ethers are determined via fluorescence quenching titrations, in an attempt to correlate structure with binding affinities. The insight obtained from the binding constant determinations allows for the proper choice of crown ether for the formation of [2]-catenanes and [2]-rotaxanes. The electronic properties of these complexes are then probed by electrochemical, spectroelectrochemical, and conductivity measurements. The potential of a poly([2]-catenane) as a photoconductive polymer is also investigated. The obstacles encountered in synthesizing poly([2]-rotaxanes) are also discussed, with emphasis on end-group strategies that aid in thwarting electrochemically induced dethreading. Chapter 1. This chapter provides a broad overview of the physical methods utilized in this thesis. Electrochemical methods are covered, with particular emphasis on oxidative polymerization of heteroaromatic monomers. Correlation between electrode phenomena and cyclic voltammogram waveshapes is included, in order to reinforce an understanding of the various electrochemical processes. Electroactive catenane and rotaxane supramolecules feature prominently in the literature, and a brief overview of significant examples is given. Advanced electrochemical techniques regarding in-situ conductivity measurements and spectroelectrochemistry are discussed. The elucidation of binding affinities for hosts and guests via fluorescence spectroscopy, a major theme in Chapter 2, is presented with focus on interpreting Stern-Volmer plots.
Advantages of using high speed sintering as a rapid manufacturing alternative in footwear applications
Rapid manufacturing is a family of technologies that employ additive layer deposition techniques to construct parts from computer-based design models.[2] These parts can then be used as prototypes or finished goods. One type of rapid manufacturing technology, Selective Laser Sintering, only allows for a point-by-point sintering process to construct 3D representations of CAD models. This makes for long processing periods and is ineffective for high-volume manufacturing. However, a new process called high-speed sintering uses infrared energy to 'flash' the polymer powder at multiple points, making the layer deposition process much more time efficient. In effect, each infusion of energy results in an entire layer being constructed rather than a single point. One of the first industrial applications for this technique is in performance footwear manufacturing. New Balance, a Boston-based shoe and apparel company, in collaboration with Loughborough University, has an interest in exploring the technology for low-volume parts manufacturing as well as personalized footwear. High-speed sintering has the potential to replace injection molding for specific footwear and non-footwear applications. This technology has several key advantages over injection molding, including the ability to build complex geometries that would be impossible with injection molding. Also, as the technology continues to evolve, new materials could improve the mechanical performance of finished parts. Nevertheless, as with commercializing any new technology, identifying a cost-effective implementation route is a pivotal step.
Investigation of a SACK approach for ex vivo expansion of human HSCs
Ex vivo expansion of hematopoietic stem cells (HSCs) is a long-standing challenge faced by both researchers and clinicians. To date, no robust, efficient method for the pure, ex vivo expansion of human HSCs has been demonstrated. Previous methods primarily induced the expansion of committed hematopoietic progenitor cells (HPCs), yielding even less pure populations of HSCs. This research was based on the hypothesis that, as for other adult stem cells (ASCs), the major barrier to expanding HSCs ex vivo lies in preferentially regulating the asymmetric self-renewal of HSCs without loss of their ability to produce differentiated committed HPCs. This laboratory has shown that a p53-dependent pathway specifically controls the self-renewal pattern of several types of ASCs and thereby provides an effective means for expansion of ASCs in culture. The method, which involves the use of purine metabolites to achieve suppression of asymmetric cell kinetics, is referred to as SACK. The utility of the p53-dependent pathway was investigated for directing expansion of human HSCs. To support this investigation, the proliferation of HPCs in in vitro cultures was repressed by culturing cells without hematopoietic growth factors and cytokines.
Development of a risk management system for consumables used in biopharmaceutical manufacturing
Injectable drugs, like those manufactured by the BioPharmOps group of Novartis Pharmaceuticals AG, must conform to strict guidelines for purity and potency. Recent non-conformances of critical supplied consumables have revealed potential business and patient safety risks for biotechnology manufacturers worldwide. As a result, Novartis has launched a program to enhance control systems over all consumables and their suppliers. Within this program, the author has developed a system to identify, analyze, and mitigate the various risks which may impact the business due to non-conformances in supplied consumables. The first function of the system is the identification of key risks and their potential effects according to various failure modes that have been observed during the use of the consumables in production. This is accomplished with a standardized list of possible failure modes which can be applied to all consumables. The categorization allows the relative risk of each failure mode to be compared among consumables. Secondly, the risk of contamination is evaluated using a Failure Modes and Effects Analysis (FMEA) framework. The three dimensions of the FMEA framework are the severity, likelihood, and detectability of a failure. The severity of each failure mode is assessed by analyzing the quantitative and qualitative impact that a failure might have on the purity and potency of the drug. This calculation is based on the properties of each consumable and its use in the production system. The likelihood of failure events is assessed through an analysis of the complexity of the consumable and its supply chain, and a review of the quality systems at the supplier. Detectability analysis considers the tests and inspections in place at various stages including consumable manufacturing, receiving inspection, and in-process tests during drug manufacturing which could detect a non-conformance. The total risk level is evaluated as the product of these three dimensions and a threshold is defined for requiring additional mitigations for these risks. This risk assessment method is implemented in an automated worksheet to ensure consistency among users and efficient analysis. The third outcome of the system is the recommendation of mitigations to reduce total exposure to contamination risk. Mitigations may be internal (new tests and inspections) or implemented at the supplier (improved sampling rates, enhanced general quality systems, or new controls). The recommended mitigations provide guidance for the reduction of risks to an acceptable level, and when implemented, the impact and frequency of non-conformances will be diminished. Ultimately, this reduces Novartis' exposure to potential business loss and protects patients from injury caused by contamination.
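As a minimal sketch of the scoring logic described above (risk level as the product of severity, likelihood, and detectability, compared against a threshold); the scales and threshold value are illustrative, not the worksheet actually deployed at Novartis.

```python
def risk_level(severity, likelihood, detectability):
    """FMEA-style total risk as the product of the three dimensions,
    each scored on an ordinal scale (e.g., 1-10)."""
    return severity * likelihood * detectability

def needs_mitigation(severity, likelihood, detectability, threshold=120):
    """Flag failure modes whose total risk exceeds the chosen threshold
    (the threshold of 120 is an assumed example value)."""
    return risk_level(severity, likelihood, detectability) > threshold

# Hypothetical failure mode for a supplied filter: severe, uncommon, hard to detect
print(needs_mitigation(severity=8, likelihood=3, detectability=6))  # True: 144 > 120
```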
Enabling and inhibiting urban development : a case study of Lahore Improvement Trust as a late colonial institution
This thesis examines the Lahore Improvement Trust in relation to the urban development of the city of Lahore in the mid-twentieth century. LIT was responsible for most major urban development in the city from 1936 up until 1975, when it metamorphosed into the Lahore Development Authority. However, its impact on Lahore's urban history is surprisingly under-recognized, and this may be due to the relative failure of the body itself in delivering a large part of its mandate, despite being responsible for major morphological changes in the city. The formation of LIT, like other Improvement Trusts in India, was based on a real need for planned urban development of a rapidly expanding city. This thesis argues that the structure of such a body was, however, based on conceptual frameworks that were introduced in India by numerous different British institutions, with the aim of either 'testing out' ideas or furthering a particular colonial agenda. These inherent structural beliefs were carried through numerous cycles of 'reform' before being applied to the Improvement Trust network which, this study argues, followed a strict path-dependent paradigm in a late colonial institution such as LIT. Using the annual reports of LIT, I show that this was evident in the modus operandi of the body, to the point that despite being able to implement individual projects that can be considered successful to a certain extent, it failed to develop or implement a coherent urban vision. Projects under LIT were fragmented instances in the larger urban morphology of the city, which failed to respond to its more pressing problems. Its failure to register itself as a viable body was further exacerbated by its inability to deal with issues such as the housing shortage in the city. This was particularly evident in the face of a major shock such as Partition in 1947. A huge influx of migrants from East Punjab, and riots that caused major infrastructural damage within the city, meant that the body's deficit grew exponentially beyond the event of Partition in 1947. That the Trust exhibited institutional inertia well beyond the Partition in its mode of operating explains the weak progress it made beyond that event, and its eventual dissolution into the Lahore Development Authority in 1975. Hence, while most projects implemented by the Trust were moderately successful, the lack of a holistic urban plan, a result of both structural (internal) and situational (external) problems, was where LIT failed to deliver, causing it to leave an ineffectual mark on Lahore's urban history.
Fault detection algorithms for spacecraft monitoring and environmental sensing
Constellations of hundreds of low-Earth orbiting small satellites are currently being designed and built. Operators plan to provide data and media distribution services as well as imaging and weather observations. As our society increases its dependence on satellite services for communication and navigation, there is a growing need for efficient spacecraft systems monitoring and space situational awareness to avoid service interruptions due to hazards such as space weather. Particularly for large constellations, satellites need greater autonomy to improve responsivity and reduce the load on human operators. In this thesis, we present the development of algorithms that identify unusual behavior in satellite health telemetry. Once these events have been identified, we collect and analyze them alongside space weather observations and operational environment factors. Our approach uses transient event detection and change-point event detection techniques, statistically evaluating the telemetry stream against a local norm. This approach allows us to apply our algorithms to any spacecraft platform, since there is no reliance on satellite- or component-specific parameters, and it does not require a priori knowledge about the data distribution. We apply these techniques to individual telemetry data streams on geostationary Earth orbit (GEO) communications satellites (ComSats), and compile a list of unusual events for each satellite. We are able to identify events that affect many telemetry streams at once, indicative of a spacecraft system-level event. With data from multiple satellites, we can use these methods to better determine whether external factors played a role. We compare event dates to known operational activities and to known space weather events to assess the use of event detection algorithms for spacecraft monitoring and for environmental sensing.
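A simple sketch of the kind of local-statistics detector described (a rolling z-score that flags samples far from the recent norm, without component-specific parameters); the window length, threshold, and synthetic telemetry below are assumptions, not the thesis settings.

```python
import numpy as np

def local_zscore_events(x, window=100, threshold=5.0):
    """Return indices where a sample deviates strongly from the mean and
    standard deviation of the preceding window (a simple transient detector)."""
    x = np.asarray(x, dtype=float)
    events = []
    for i in range(window, len(x)):
        ref = x[i - window:i]
        sigma = ref.std()
        if sigma > 0 and abs(x[i] - ref.mean()) > threshold * sigma:
            events.append(i)
    return events

# Synthetic telemetry stream with one injected step change at sample 500
rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(28.0, 0.05, 500), rng.normal(27.0, 0.05, 500)])
print(local_zscore_events(stream)[:3])   # flags samples right after the step
```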
Vorticity transfer through rapid area change
Extensive studies have been conducted on the use of biomimetic foils for propulsion and maneuvering of vehicles. These studies, however, mostly focus on the use of sinusoidal motion similar to bird flapping or fish swimming to generate the necessary forces. Few studies have investigated the generation of maneuvering forces by rapidly transferring vorticity into the fluid through a sudden motion, as observed in some animals. In this study a NACA 0012 foil was towed steadily at a Reynolds number of 14,000 and then rapidly accelerated in the transverse direction. Two different cases were tested: one where the area decreases and one where it increases, referred to as the vanishing foil and the emerging foil, respectively. Various angles of attack were tested, and in all cases the circulation is conserved. Particle Image Velocimetry and flow visualization were used to map out the three-dimensional vortical structure after the rapid motion. In the emerging foil experiment the flow structure is similar to that of an accelerating wing. From the vanishing foil experiment, however, we discovered a phenomenon called global separation, in which separation happens instantaneously over the entire surface of the body. This global separation allows a more effective and rapid transfer of vorticity, about one order of magnitude faster than vorticity transfer through conventional means.
Abdominal vacuum lift as an aid to diagnosing abdominal adhesions
The internal organs are designed to move freely and slide over one another during normal body movement. The abdominal organs, however, have a tendency to adhere to the abdominal cavity (peritoneum) and other abdominal organs after surgery or infection. These adhesions can cause pain, discomfort, inflammation, anxiety, depression, problems with conception, trouble eating, and decreased immune function. There are around 300,000 hospital admissions in the U.S. every year due to adhesions. Part of the problem is that there is no suitable method to diagnose adhesions. Recently there have been a number of studies which suggest that measuring visceral slides under ultrasound using exaggerated respiration may prove very promising in diagnosing adhesions non-invasively. Yet there are still weaknesses in the predictive power of these procedures. For such procedures to be successfully implemented into clinical medicine and offer non-invasive methods of diagnosing adhesions, they must first be able to offer higher predictive values. We have worked on a number of models of an external abdominal vacuum system which we believe will increase the accuracy and predictive value of measuring visceral slides under ultrasound using exaggerated respiration.
Primitive computations in phrase construction
The Minimalist Program in current linguistic theory seeks to explain linguistic structure in terms of economy principles, under the assumption that the human language faculty is a perfect system that performs only enough work to satisfy interface requirements. We consider processing costs as a property of syntactic computation and propose that these principles of economy may be met by the availability of alternative operations, each favorable in different circumstances. We characterize the basic Merge operation as a collection of three nested operations that apply to three corresponding levels of nested syntactic data types. In this framework, we provide an analysis of coordinate structure that uses a goal of minimizing processing cost to explain a number of peculiar characteristics of coordination, including the Coordination of Likes Constraint, the Coordinate Structure Constraint, and apparent case and agreement violations.
A multilayer network approach to quantifying biologically-derived systematic risk in biomedical finance
Sharply rising disease prevalence and associated healthcare costs are placing an increasingly significant economic burden on society. Biomedical research and industry have struggled to adequately address this challenge, as evidenced by the stagnation and even decline of new therapeutics development success rates. Recent work in the MIT Laboratory for Financial Engineering has explored the potential of using financial engineering in the form of biomedical "megafunds" to help tackle this problem. New methods will be needed to better assess systematic financial risks for these therapeutic project portfolios. This primarily methodological thesis seeks to explore the opportunity to leverage multilayer network models as tools to help measure this risk, specifically the biologically-derived component of risk resulting from project correlations generated through the underlying biological networks. Historical examples of coupling between drug development projects are used to motivate a framework in which project correlations emerge from a combination of indication and target similarity. This framework motivates the construction of a multilayer network model, drawing upon multiple systems biology databases for its construction and using a sample of FDA orphan designations as a representative project set. Using shortest path distance and Random Walk with Restart (RWR) relevance, indication and target similarity between projects are quantitatively evaluated. Comparing average sales correlations to the log of average RWR relevance for classes of compounds reveals notable relationships between correlation and network similarity. This relationship is shown to be stronger for the case of disease relevance (R^2 = 0.99) than for target relevance (R^2 = 0.93). A potential approach is finally described for integrating biological network similarity with financial models useful for portfolio analysis, and implications for portfolio selection are discussed through synthetic construction of hypothetical orphan drug portfolios.
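A minimal sketch of Random Walk with Restart relevance as used conceptually above, iterating p <- (1 - c) W p + c e on a column-normalized adjacency matrix; the toy network below stands in for the systems-biology networks used in the thesis.

```python
import numpy as np

def rwr_relevance(adjacency, seed, restart=0.15, tol=1e-10, max_iter=10000):
    """Random Walk with Restart relevance of every node to a seed node."""
    A = np.asarray(adjacency, dtype=float)
    W = A / A.sum(axis=0, keepdims=True)       # column-stochastic transition matrix
    e = np.zeros(A.shape[0]); e[seed] = 1.0    # restart distribution at the seed
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * W @ p + restart * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy 4-node network: relevance of each node to node 0
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
print(rwr_relevance(A, seed=0))
```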
Geographic location and geographic prediction performance benefits for infrastructureless wireless networks
The field of infrastructureless wireless networks (IWNs) is a broad and varied research area with a history of different assumption sets and methods of analysis. Much of the focus in the area of IWNs has been on connectivity and throughput/energy/delay (T/E/D) tradeoffs, which are important and valuable metrics. When specific IWN routing protocols are developed, they are often difficult to characterize analytically. In this thesis we review some of the important results in IWNs, in the process providing a comparison of wideband (power-limited) versus narrowband (interference-limited) networks. We show that the use of geographic location and geographic prediction (GL/GP) can dramatically increase the performance of IWNs. We compare past results in the context of GL/GP and develop new results in this area. We also develop the idea of throughput burden and scaling for the distribution of topology and routing information in IWNs, and we hope that this work provides a context in which further research can be performed. We primarily focus our work on wideband networks while also reviewing some narrowband results. In particular, we focus on wideband networks with non-zero processing energy at the nodes, which combines with distance-dependent transmission energy as the other main source of power consumption in the network. Often the research in this area does not take processing energy into account, but previous work shows that processing energy is an important consideration. The consideration of processing energy is the determining factor in whether a whisper to the nearest neighbor (WtNN) or characteristic hop distance routing scheme is optimal. Whisper to the nearest neighbor routing involves taking a large number of short hops, while characteristic hop distance routing is the scheme in which the optimal hop distance is based on the distance-dependent transmission energy and the processing energy, as well as the attenuation exponent. For a one-dimensional network, we use a uniform all-to-all traffic model to determine the total hop count and achievable throughput for three routing types: WtNN without GL/GP, WtNN with GL/GP, and characteristic hop distance with GL/GP. We assume a fixed-rate system and a random and uniform node distribution. The uniform all-to-all traffic model is the model where every node communicates with every other node at a specified rate. The achievable throughput is the achievable rate at which each source can send data to each of its destinations. The results we develop show that the performance difference between WtNN with and without GL/GP is minimal for one-dimensional networks. We show that the reduction in hop count of characteristic hop distance routing compared to WtNN routing is significant. Further, the achievable throughput of characteristic hop distance routing is significantly better than that of WtNN networks. We present a method to determine the link rate scaling necessary for link state distribution to maintain topology and routing information in mobile IWNs. We develop several results, the main one being the rate scaling for two-dimensional networks in which every node is mobile. We use a random chord mobility model to represent independent node movement. Our results show that in the absence of GL/GP, there is a significant network burden for maintaining topology and routing information at the network nodes.
We also derive real-world scaling estimates from the general analytic results; these show the poor scaling of networks without GL/GP. For networks of 100 to 1000 nodes, the rate scaling for maintaining topology in mobile wireless networks is on the order of hundreds of megabits to gigabits per second. It is infeasible to use such significant amounts of data rate for the sole purpose of maintaining topology and routing information, and thus some other method of maintaining this information will need to be utilized. Given the growing number of devices connected to the Internet, in the future it is likely that IWNs will become more prevalent in society. Despite the significant amount of research to date, there is still much work to be done to determine the attributes of a realistic and scalable system. In order to ensure the scalability of future systems and decrease the amount of throughput necessary for network maintenance, it will be necessary for such systems to use geographic location and geographic prediction information.
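As a numerical illustration of the trade-off described above: if each hop costs a fixed processing energy E_proc plus a distance-dependent transmission energy eps*d^alpha, the energy to carry a bit over a total distance D in hops of length d is (D/d)(E_proc + eps*d^alpha), and minimizing over d gives the characteristic hop distance. The constants below are placeholders, not values from the thesis.

```python
def characteristic_hop_distance(e_proc, eps, alpha):
    """Hop length minimizing (D/d)*(e_proc + eps*d**alpha):
    d* = (e_proc / (eps * (alpha - 1)))**(1/alpha)."""
    return (e_proc / (eps * (alpha - 1.0))) ** (1.0 / alpha)

def energy_per_bit(distance, hop, e_proc, eps, alpha):
    """Total energy to move one bit a distance `distance` using hops of length `hop`."""
    return (distance / hop) * (e_proc + eps * hop ** alpha)

# Placeholder constants: 50 nJ/bit processing, 1 pJ/bit/m^2 transmission, alpha = 2
d_star = characteristic_hop_distance(e_proc=50e-9, eps=1e-12, alpha=2.0)
print(d_star)                                             # ~224 m characteristic hop
print(energy_per_bit(1000.0, d_star, 50e-9, 1e-12, 2.0))  # energy at the optimum
```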
In vitro models for airway epithelial cell culture
This work concerns the development of a physiologically relevant model of the human airway. Various factors influence the biology of the cells: the cell model; physicochemical factors such as the properties of the cell substrate (including its stiffness), shear stress, stretch, and the air-liquid interface; and the biochemical factors in the medium. The aim of this work is to closely approximate in vivo conditions by engineering the above factors into the in vitro platform. An assay to introduce the cell substrate properties was developed in a glass-bottomed petri dish culture as well as a microfluidic device culture. The influence of the cell substrate on airway epithelial cell monolayer formation was investigated in detail by independently varying the stiffness of the substrate through the gel concentration, the gel formation pH, and the height of the gel above a hard substrate. Further, we found that biochemical growth factors play a major role in cell monolayer formation. A real-time measurement of monolayer integrity using electrical resistance measurements was developed. A shear stress application platform was developed and a stretch application platform was designed. The applications of such a platform, with the inclusion of various physiologically relevant factors, include the study of the physiologic evolution of microbes such as the influenza virus.
Dynamically orthogonal field equations for stochastic fluid flows and particle dynamics
In the past decades an increasing number of problems in continuum theory have been treated using stochastic dynamical theories. This is because dynamical systems governing real processes always contain some elements characterized by uncertainty or stochasticity. Uncertainties may arise in the system parameters, the boundary and initial conditions, and also in the external forcing processes. Many problems are also treated through the stochastic framework due to incomplete or partial understanding of the governing physical laws. In all of the above cases the existence of random perturbations, combined with the complex dynamical mechanisms of the system, often leads to their rapid growth, which distributes energy over a broadband spectrum of scales in both space and time, making the system state particularly complex. Such problems are mainly described by Stochastic Partial Differential Equations and they arise in a number of areas including fluid mechanics, elasticity, and wave theory, describing phenomena such as turbulence, random vibrations, flow through porous media, and wave propagation through random media. This is but a partial listing of applications, and it is clear that almost any phenomenon described by a field equation has an important subclass of problems that may profitably be treated from a stochastic point of view. In this work, we develop a new methodology for the representation and evolution of the complete probabilistic response of infinite-dimensional, random, dynamical systems. More specifically, we derive an exact, closed set of evolution equations for general nonlinear continuous stochastic fields described by a Stochastic Partial Differential Equation. The derivation is based on a novel condition, the Dynamical Orthogonality (DO) condition, on the representation of the solution. This condition is the key to overcoming the redundancy of the full representation while not restricting its generic features. Based on the DO condition we derive a system of field equations consisting of a Partial Differential Equation (PDE) for the mean field, a family of PDEs for the orthonormal basis that describes the stochastic subspace where uncertainty 'lives', as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time-varying stochastic subspace. If additional restrictions are assumed on the form of the representation, we recover both the Proper-Orthogonal-Decomposition (POD) equations and the generalized Polynomial-Chaos (PC) equations; thus the new methodology generalizes these two approaches. For the efficient treatment of the strongly transient character of the systems described above we derive adaptive criteria for the variation of the stochastic dimensionality that characterizes the system response. These criteria follow directly from the dynamical equations describing the system. We illustrate and validate this novel technique by solving the 2D stochastic Navier-Stokes equations in various geometries and comparing with direct Monte Carlo simulations. We also apply the derived framework to the study of the statistical responses of an idealized 'double gyre' model, which has elements of ocean, atmospheric and climate instability behaviors. Finally, we use our new stochastic description for flow fields to study the motion of inertial particles in flows with uncertainties.
Inertial or finite-size particles in fluid flows are commonly encountered in nature (e.g., contaminant dispersion in the ocean and atmosphere) as well as in technological applications (e.g., chemical systems involving particulate reactant mixing). As has been observed both numerically and experimentally, their dynamics can differ markedly from infinitesimal particle dynamics. Here we use recent results from stochastic singular perturbation theory in combination with the DO representation of the random flow to derive a reduced-order inertial equation that efficiently describes the stochastic dynamics of inertial particles in arbitrary random flows.
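For reference, a schematic statement of the DO representation and condition described above, with notation assumed rather than quoted from the thesis (s denotes the stochastic dimensionality):

```latex
u(x,t;\omega) \;=\; \bar{u}(x,t) \;+\; \sum_{i=1}^{s} Y_i(t;\omega)\, u_i(x,t),
\qquad
\left\langle \frac{\partial u_i}{\partial t},\, u_j \right\rangle \;=\; 0
\quad \text{for all } i, j = 1, \dots, s,
```

where \bar{u} is the mean field, the u_i form the orthonormal basis of the stochastic subspace, and the Y_i are the stochastic coefficients; the DO condition (right) removes the redundancy of the representation.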
Impact and potential role of multinational corporations in achieving sustainability in developing countries
This thesis aims to assess the activities and influence of automotive multinational corporations (MNCs) in developing countries as they relate to the concept of "total sustainability" within three countries: Argentina, Brazil, and Mexico. It offers an innovative perspective on the systemic sustainability issues embedded in corporate strategy, industrial policy, worker representation, and environmental protection. Research has focused on collecting information from journals, industry publications, and studies by international organizations on the interplay between government policy and automotive MNC activity. Particular attention has been paid to influences on the "Pillars of Sustainability" described by Professor Nicholas Ashford, which encompass sustainable development: Environment, Economy, and Employment. Analysis of the observations and industry/policy trends uses the Ashford framework, which focuses on both the above-mentioned factors of total sustainability and the many processes that interconnect their states. Attention is also paid to emergent complex system behavior and associated risks and opportunities. Conclusions and recommendations focus on systemic views of the challenges posed by automotive industry activity in the nations studied, and on policy recommendations for how to both capture economic benefit and further sustainable development efforts. Suggested future research associated with this thesis would encompass the analysis of different industries, entrepreneurial enterprises, industrial policies, technologies and policies, or the development of associated System Dynamics models.
SVM algorithms : analysis and applications
Support Vector Machines (SVMs) have attracted recent attention as a learning technique for attacking classification problems. The goal of my thesis work is to improve computational algorithms as well as the mathematical understanding of SVMs, so that they can be easily applied to real problems. SVMs solve classification problems by learning from training examples. From the geometry, it is easy to formulate the finding of SVM classifiers as a linearly constrained Quadratic Programming (QP) problem. In practice, however, its dual problem is actually computed. An important property of the dual QP problem is that its solution is sparse. The training examples that determine the SVM classifier are known as support vectors (SVs). Motivated by the geometric derivation of the primal QP problem, we investigate how the dual problem is related to the geometry of SVs. This investigation leads to a geometric interpretation of the scaling property of SVMs and an algorithm to further compress the SVs. A random model for the training examples connects the Hessian matrix of the dual QP problem to Wishart matrices. After deriving the distributions of the elements of the inverse Wishart matrix W_n^{-1}(n, nI), we give a conjecture about the sum of the elements of W_n^{-1}(n, nI). It becomes challenging to solve the dual QP problem when the training set is large. We develop a fast algorithm for solving this problem. Numerical experiments show that the MATLAB implementation of this projected Conjugate Gradient algorithm is competitive with benchmark C/C++ codes such as SVMlight and SvmFu. Furthermore, we apply SVMs to time series data.
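For concreteness, the standard soft-margin dual QP referred to above, with notation assumed (kernel K, labels y_i in {-1, +1}, regularization parameter C); the support vectors are the training examples with nonzero alpha_i:

```latex
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i
\;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
\alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\qquad \text{subject to}\quad
0 \le \alpha_i \le C, \;\; \sum_{i=1}^{n} \alpha_i y_i = 0 .
```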
Fields of rationality of cuspidal automorphic representations
This thesis examines questions related to the growth of fields of rationality of cuspidal automorphic representations in families. Specifically, if F is a family of cuspidal automorphic representations with fixed central character, prescribed behavior at the Archimedean places, and such that the finite component π^∞ has a Γ-fixed vector, we expect the proportion of π ∈ F with bounded field of rationality to be close to zero if Γ is small enough. This question was first asked, and proved partially, by Serre for families of classical cusp forms of increasing level. In this thesis, we will answer Serre's question affirmatively by converting the question to a question about fields of rationality in families of cuspidal automorphic GL2(A) representations. We will consider the analogous question for certain sequences of open compact subgroups in U_{E/F}(n). A key intermediate result is an equidistribution theorem for the local components of families of cuspidal automorphic representations.
Simultaneous positron emission tomography/functional magnetic resonance imaging for imaging neuroreceptor dynamics
Whole-brain neuroimaging is a key technique for studying brain function and connectivity. Recent advances in combining two imaging modalities - magnetic resonance imaging (MRI) and positron emission tomography (PET) - into one integrated scanner have created the opportunity to explore the underlying neurochemistry of brain function in more detail. Imaging these dynamics plays an important role in understanding drug action and the function of neurochemical pathways in the brain, and is crucial, yet largely unexplored, for creating and evaluating treatments of neurological and psychiatric disorders. In this thesis, we first address technological challenges in simultaneous PET/MRI by designing, building and evaluating PET compatible MR probes for brain imaging, which enable highly sensitive dual modality imaging. We then develop simultaneous imaging methods with PET and functional MRI to assess and validate relationships between receptor occupancy and changes in brain activity due to pharmacological challenges targeting the dopamine system. Our results indicate that dopamine receptor occupancies and vascular responses are correlated in anatomical space and with pharmacological dose. Moreover, the temporal dynamics of the signals show that a direct neurovascular coupling between receptor occupancy and hemodynamics exists and that a temporal divergence between PET and fMRI can be used to investigate previously unexplored neurochemical parameters and adaptation mechanisms in vivo. Overall, our findings provide insight into dopaminergic receptor dynamics and their effects on high-level brain function, paving a way to address receptor-specific brain dysfunction effectively.
Reference-frame theory and stability region generation
Electricity provides the foundation for many of today's technological advances. The desire for energy security, a reduction in carbon dioxide emissions, and a diversification of resources are all motivations for changes in how electricity is generated and transmitted. Recent alternatives to traditional centralized power plants include technologies that are decentralized and intermittent, such as solar photovoltaic and wind power. This trend poses considerable challenges in the hardware making up these systems, the software that controls and monitors power networks, and their mathematical modelling. This thesis presents a set of contributions that address some of the aforementioned challenges. Firstly, we examine the fundamental theories used in modelling and controlling power systems. We expand previous work on reference-frame theory by providing an alternative interpretation and derivation of the commonly used Park and Clarke transformations. We present a geometric interpretation that has applications in power quality. Secondly, we introduce a framework for producing regions of stability for power systems using conditional generative adversarial neural networks. This provides transmission and distribution operators with an accurate set of control options even as the system changes significantly.
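A minimal numerical sketch of the Clarke and Park transformations discussed above; conventions differ across texts, and the amplitude-invariant scaling and angle convention used here are one common choice, not necessarily the one derived in the thesis.

```python
import numpy as np

def clarke(a, b, c):
    """Amplitude-invariant Clarke transform: phase quantities (a, b, c)
    to the stationary (alpha, beta, zero) frame."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (b - c)
    zero = (a + b + c) / 3.0
    return alpha, beta, zero

def park(alpha, beta, theta):
    """Park transform: rotate the stationary frame by angle theta into the
    synchronously rotating (d, q) frame."""
    d = alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

# Balanced three-phase sinusoids map to a constant (d, q) vector
t = np.linspace(0.0, 0.02, 200)
wt = 2.0 * np.pi * 50.0 * t
a, b, c = np.cos(wt), np.cos(wt - 2*np.pi/3), np.cos(wt + 2*np.pi/3)
alpha, beta, _ = clarke(a, b, c)
d, q = park(alpha, beta, wt)       # d ~ 1, q ~ 0 for this balanced input
```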
Fundamental studies of heterostructured oxide thin film electrocatalysts for oxygen reduction at high temperatures
Searching for active and cost-effective catalysts for oxygen electrocatalysis is essential for the development of efficient clean electrochemical energy technologies. Perovskite oxides are active for surface oxygen exchange at elevated temperatures and are commonly used in solid oxide fuel cells (SOFC) or electrolyzers. However, the oxide surface chemistry at high temperatures and near-ambient oxygen pressure is poorly understood, which limits the design of highly active catalysts. This work investigates the enhanced ORR catalytic activity of heterostructured interfaces between (La1-xSrx)CoO3-δ (where x = 0.2 and 0.4, denoted LSC80-20_113 and LSC60-40_113, respectively) and (La0.5Sr0.5)2CoO4±δ (LSC_214): 1) via electrochemical impedance spectroscopy, atomic force microscopy, scanning electron microscopy, scanning transmission electron microscopy, and high resolution X-ray diffraction (HRXRD), and 2) using in situ ambient pressure X-ray photoelectron spectroscopy (APXPS) and in situ HRXRD. Here we show that the ORR activity of epitaxial LSC80-20_113 and LSC60-40_113 is dramatically enhanced (~3-4 orders of magnitude above bulk LSC_113) by surface decorations of LSC_214 (LSC_113/214) with coverage in the range from ~0.1 to ~15 nm. Such fast surface oxygen exchange kinetics (~1x10^-5 cm s^-1 at 550 °C) place these films among the most active SOFC cathode materials reported to date. Although the mechanism for ORR enhancement is not yet fully understood, our results to date show that the observed ORR enhancement can be attributed to highly active interfacial LSC_113/LSC_214 regions, which were shown to be atomically sharp. Using in situ HRXRD and APXPS we show that epitaxial LSC80-20_113 thin films have lower coverage of surface secondary phases and higher strontium enrichment in the perovskite structure, which is attributed to their markedly enhanced activity relative to LSC80-20_113 powder. APXPS temperature cycling of epitaxial LSC80-20_113 revealed, upon heating to 520 °C, an initial Sr enrichment that is irreversible; subsequent temperature cycling, however, demonstrates a small amount of reversible Sr enrichment. With applied potentials, LSC80-20_113/214 shows Sr enrichment significantly greater than LSC80-20_113, and the ability to stabilize high concentrations of both lattice and surface Sr, which we hypothesize is a very important factor governing the enhanced ORR activity of LSC80-20_113/214.
Human-automation interaction for lunar landing aimpoint redesignation
Human-automation interactions are a critical area of research in systems with modern automation. The decision-making portion of tasks presents a special challenge for human-automation interactions because of the many factors that play a role in the decision-making process. This is prominent in human spaceflight, where the astronaut must continually interact with the vehicle systems. In future lunar landings, astronauts working in conjunction with automated systems will need to select a safe and achievable landing aimpoint. Ultimately, this decision could risk the safety of the astronauts and the success of their mission. Careful study is needed to ascertain the roles of both the human and the automation and how design can best support the decision-making process. The task of landing on the moon was first achieved by the Apollo program in 1969, but technological advances will provide future landings with a greater variety and extensibility of mission goals. The modern task of selecting a landing aimpoint is known as landing point redesignation (LPR), and this work capitalizes on an existing LPR algorithm in order to explore the effects of altering the levels of automation on landing point selection. An experiment was designed to study the decision-making process with three different levels of automation. In addition, the effect of including a human-generated goal that was not captured by the automation was studied. The experimental results showed that the subjects generally used the same decision strategies across the different levels of automation, and that higher levels of automation were able to eliminate earlier parts of the decision strategy and allow the subjects to select a landing aimpoint more quickly. In scenarios with the additional human goal, subjects tended to sacrifice significant safety margins in order to achieve proximity to the point of interest. Higher levels of automation allowed them to maintain high safety margins in addition to achieving their external goal. Thus, it is concluded that with a display design supporting human goals in a decision-making task, automated decision aids that make recommendations and assist communication of the automation's processes are highly beneficial.
Point of departure : landscape, memory and change as passage for design
This thesis is the exploration of the natural and cultural environment through design. The natural landscape is a richly complex system reliant on interdependencies, change, and renewal. It is laden with multiple, even contradictory interpretations, yet it is one of intimate associations and often pastoral repose. Unlike the often static, simplistic order of the human environment, the natural environment is understood and enjoyed through formal and interactive relationships set in an emerging process of time. As such, a very positive reference for society's state can be found in observing and transforming the evolving landscape that surrounds, nourishes, and defines us. The landscape, the "point of departure," becomes meaningful in its expression of the perpetual possibility of an occurrence, change, or design. The vehicle for this investigation is a design projection for a small park in conjunction with the 1992 summer Olympic Games in Barcelona, Spain. The programmatic requirements are to supply temporary exhibit space for the Olympic/global ideal and to function as a formal and pedestrian link between the closed formal axis of the city and its extension to the Olympic Stadium. A strong design concern is the interpretation of the site as a temporal, spatial, and formal continuum of what existed before, the needs during the three-week Olympic celebration, and its return to a daily routine as a new botanical garden. The first section is an elaboration of the relativistic character of the natural environment and its reference to both the process of design and the human experience. The second section describes the site in terms of landscape, its formal attributes and its place in geographic time. The third section describes the site in terms of memory, autobiographical and cultural time, the impact of man's relation to nature, and the specific plastic effects that it has had on the existing condition and form. The last section reveals the site in terms of change, or both the literal and lyrical passage of design. This part synoptically describes the temporal and formal configuration between what was, what is, and what might be.
The Postal Service Pension System and alternative methods for providing long-term financial welfare to retirees
The United States Postal Service continues to face difficult financial conditions, due primarily to electronic diversion of mail volume. The largest component of the Postal Service's cost structure is labor, with retirement benefits representing a significant portion of those costs. This thesis provides a historical retrospective of the development of the pension system that the Postal Service currently participates in, and assesses the impact that the pension system has had on the Postal Service through history. The ultimate objective of this thesis is to study the United States Postal Service pension system as it relates to its current obligations to the United States federal government, provide a review of alternative pension arrangements operating in other sectors, and analyze the leading alternatives as they apply to the Postal Service to understand their potential impact on the finances of the United States Postal Service. Two simulation models are developed in the study, based on an analysis of the current workforce, historical and projected retirement patterns, and the current pension contribution profiles of workers. The models are used to assess the impact of various plan designs on the Postal Service's cost structure, and on a typical individual employee's post-retirement income.
Multiple sensor acquisition board for wearable computing
The wearable multiple sensor acquisition device (WMSAD) is targeted as the primary sensor hub for the next revision of the MIThril wearable computer architecture. The device provides 3-axis acceleration sensing, infrared transmission and reception of Tag IDs, and a 16 kHz 8-bit audio stream. To be usable for wearable applications, the WMSAD is contained in a 1.3" x 1.35" package and operates at 35 mA at 5 V, meeting the requirements imposed by these applications. The WMSAD will operate directly with the SAK2 to provide sensor data collection and recording.
Transportation linear referencing toolboxes : a 'reflective practitioner's' design approach
Seventy percent of the data of a typical transportation agency (e.g., bridges, accidents, etc.) has location as a primary reference. A Linear Referencing System (LRS) is the main way of identifying the location of this data and providing a storage key for it in a database. LRS is based on a one-dimensional offset on a predefined network. In theory, it is one of the simplest spatial cases. In reality, it can be spatially and analytically quite complex. Until quite recently, LRS had received little formal research attention. The research that has occurred has centered on the construction of large and comprehensive conceptual data models. This thesis is not primarily aimed at new "tool building research". The existing models have been based to only a limited extent on a fuller analysis of the nature of transportation and spatial data; they have not considered relevant field and wider methodological concerns (i.e., they followed a "model-driven" approach). The goal here is to create a more appropriate foundation and base from which LRS tools may be most appropriately built (i.e., a "field-driven" approach). A "practitioner's perspective" view of LRS was sought. Such a more holistic understanding was sought through the adoption of a "layered methodology" of research that involved gaining the perspectives of a variety of disciplinary viewpoints. This research framework was developed especially for this thesis based on the ideas and work of Schon and Reich. The approach involved, in short, a desk exercise in fundamental consideration of the nature of LRS, a deeper cross-field synthesis and literature review, four in-depth state DOT LRS case studies, a panel of transportation field experts, a panel of national data model experts, and a limited object-oriented modeling exercise. The conclusion reached is that while LRS in the simple case can be modeled in general forms, it is also an "exception-driven" field. Thus, a "toolkit approach" may be more appropriate for LRS. It is inferred that this may hold for other similar application areas in transportation and planning. Further research would further develop the holistic layered methodology adopted here and further define the proposed LRS transportation application toolboxes.
Athletes' use of sport video games to enhance athletic performance
A design feature of contemporary sport video games allows elite athletes to play as themselves in life-like representations of actual sporting events. The relation between playing sport video games and actual physical performance has not yet been established. Drawing on data from interviews and observations of elite athletes playing sport video games, this thesis explores why elite athletes are playing these video games as their virtual selves, and establishes a framework for understanding how this play may enhance learning opportunities. Building on theories based in the disciplines of psychoanalysis, education, and neuroscience, this thesis argues that virtual play by athletes playing as themselves in sport video games has the potential to support and encourage physical performance.
Analysis of MRAM applications
Magnetic Random Access Memory (MRAM) is considered to be the most viable option for nonvolatile memory in the computer industry. The need for nonvolatile computer memory has resulted in the dramatic evolution of MRAM technology in the past ten years. With the technology now in the latter stages of development, emphasis is being placed on experiments concerning optimization of density and reduction of the switching fields of the magnetic elements. Applications of MRAM technology are currently being explored by companies who seek to obtain relevant intellectual property in those areas. Once research is completed, companies must create a business plan that recognizes the initial, breakthrough markets and implements technology integration accordingly.
Design of a fluidic test bed for MEMS piezoelectric energy harvester
This document outlines the basic theory behind generating mathematical models, choosing materials, and designing geometries for simulating a 900-mile Alaskan pipeline. Dimensional analysis is used to simulate the vibration spectrum given off by the pipeline due to turbulent flow of the fluid. In the design of PMPG devices, which transform mechanical vibration into electrical energy, the scaled-down model will be used as a test bed for future prototype PMPG designs. After modeling the Alaskan pipeline and designing it around dimensional analysis, a Vernier low-g accelerometer is used to measure the vibration spectrum. The frequency that was analyzed was 251.01 ± 0.447 Hz, which when converted back to the Alaskan pipeline corresponds to a frequency of 6.94 Hz. Using this information we can design PMPG devices that will resonate in this frequency bandwidth to achieve a higher efficiency in mechanical-to-electrical conversion.
Automated, hands-on apprenticeship program
There are a plethora of medium and small-sized manufacturing companies that do not rely completely on autonomous systems. As a result, it is more economical to use a mixture of human labor and manufacturing robots; however, of the thousands of people who apply for these jobs, many do not have the experience to work alongside robots or understand how robots work. The research described in this thesis introduces a solution to this problem through TeachBot, an automated, hands-on apprenticeship program. TeachBot seeks to empower manufacturing workers with the skills necessary to work collaboratively with robots in the manufacturing industry. Through the use of ROS, the program teaches key topics in robotics such as encoders, kinematics, feedback, and programming through multiple interactive modules. TeachBot is set up with three main components: a JSON file of instructions, a JavaScript file, and a Python file. The JavaScript file parses the instructions and sends commands to the Python script. The Python script then sends these commands to the robot. This process allows TeachBot to be modular and universal such that it can be modified easily and applied to any robot. This research focuses on implementing TeachBot on the Sawyer robot and the possible extension to the Universal Robot 5e.
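A small, hypothetical sketch of the instruction-dispatch pattern described above (a JSON list of steps parsed and forwarded to a command layer); the schema, command names, and FakeRobot stand-in are invented for illustration and are not TeachBot's actual format or API.

```python
import json

# Hypothetical instruction file: each step names a command and its arguments.
INSTRUCTIONS = """
[
  {"command": "say",        "args": {"text": "Welcome to the encoders module."}},
  {"command": "move_joint", "args": {"joint": 0, "angle_deg": 30}},
  {"command": "wait_for",   "args": {"event": "button_press"}}
]
"""

class FakeRobot:
    """Stand-in for the layer that would forward commands to the real robot."""
    def say(self, text): print("SAY:", text)
    def move_joint(self, joint, angle_deg): print(f"MOVE joint {joint} -> {angle_deg} deg")
    def wait_for(self, event): print("WAIT FOR:", event)

def dispatch(step, robot):
    """Look up the handler named in the step and call it with the step's arguments."""
    return getattr(robot, step["command"])(**step["args"])

robot = FakeRobot()
for step in json.loads(INSTRUCTIONS):
    dispatch(step, robot)
```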
Atmospheric delay modeling for satellite laser altimetry
NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) is a laser altimetry mission with the primary purpose of measuring the mass balance of the ice sheets of Greenland and Antarctica. It will provide 5 years of topography measurements of the ice, as well as land and ocean topography. In order to obtain accurate topography measurements, the laser altimeter ranges must be corrected for certain biases. Atmospheric delay is one such bias. As the laser pulse travels through the atmosphere it is refracted, introducing a delay into the travel time. This delay must be estimated to correct the ranges, and the delay estimates need to be validated. Of particular concern are errors in the delay estimates that have the same characteristics as the expected mass balance variations. The main focus of this dissertation is to formulate algorithms for calculating the ICESat atmospheric delay and to estimate the expected delay values and errors. Our atmospheric delay algorithm uses numerical weather model data to estimate delay values. We have validated these algorithms using Automatic Weather Stations (AWS) in the polar regions and GPS data over the globe. The GPS data validation was also augmented by in-situ meteorology measurements at some of the stations. The GPS validation process additionally allowed us to investigate the estimation of precipitable water vapor using GPS data. The validation studies have shown that our atmospheric delay algorithm errors are well within the ICESat error budget of 20 mm. The overall global delay errors are estimated to be approximately 5.4 mm and the polar delay errors are 12.2 mm. There are no discernible biases in the error and the seasonal variations in error magnitudes are well characterized.
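For scale, a sketch of one standard zenith hydrostatic delay model (a Saastamoinen-type formula often used with surface pressure from weather models or GPS meteorology); it is shown here only as background for the kind of delay being estimated, not as the thesis algorithm, and the site values are placeholders.

```python
import numpy as np

def zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_m):
    """Saastamoinen-type zenith hydrostatic delay in meters, from surface
    pressure, latitude, and height (standard published coefficients)."""
    f = 1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / f

# Placeholder example: a polar site near sea level
zhd = zenith_hydrostatic_delay(pressure_hpa=990.0, lat_rad=np.deg2rad(-75.0), height_m=50.0)
print(round(zhd, 3))   # roughly 2.25 m of zenith delay for these inputs
```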
Landscapes of convergence : a proposal for exchange at the San Diego-Tijuana border
This thesis addresses the relationship between San Diego, California and Tijuana, Mexico. Although these two cities are part of a single landscape and ecology, they are divided, not only by a physical wall, but by very different cultural, social, political and economic realities. This thesis is a proposition about exchange. Economic exchange has always been the driving force for interaction between San Diego and Tijuana. Their relationship has operated at a very fundamental level, rooted in a market economy driven by the laws of supply and demand. My goal is to build on this interaction, to exchange beyond the mutual economic interests and to provide a forum for a broadened, more meaningful exchange. Social, cultural and environmental exchange will heighten understanding and mutual respect, and begin to dissipate the psychological barriers that exist between the two sides, serving to better connect the people of this border region. I believe the foremost place for such interaction is the border region itself. Currently, it is a painfully disconnected, forbidding and blighted region that harshly articulates the uneasy relationship between the two sides. The border marks a physical line of convergence that could begin to celebrate intellectual and social convergence. My proposal offers an east-west solution to this north-south problem. It establishes a new directionality that runs parallel to the border rather than across it. This new corridor uses the landscape to emphasize that which is shared, while establishing points of reflection and dialogue. The intention is to reinvent the border region as a critical juncture between cultures and nations, making the border not a point or a line, but an engagement.
Fables of undiscovered cities
"Space... The final frontier... These are the voyages of the Starship Enterprise. Its continuing mission: To explore strange new worlds... To seek out new life; new civilizations... To boldly go where no one has gone before!" -- Jean-Luc Picard, Captain, Starship Enterprise; NCC-1701D. Humans have always felt a primordial urge to explore - to blaze new trails, to map new lands, and to ask profound questions about ourselves. The intangible desire to explore and challenge the boundaries of what we know and where we have been make us who we are and what we will become - the voyage of discovery consists not only in seeking new landscapes, but also in having new eyes: the acquiring of an external standard of criticism that incites the journey of self-discovery and self-reflection. By opening a new world, we rediscover the old. As the discipline of urban design developed, designers engaged disciplines assisted by numerous technologies and applications. We have ambitions to digitize and analyze every corner of our existing world, however, in grasping the world more precisely and effectively, we are giving up the possibility of obscurity and the unknown. This thesis is a voyage aiming at the exploration of new possibilities of urban entities: the creation of a series of 'undiscovered' dream worlds in order to rediscover the features of the real world we 'think' we inhabit. These alternative dream worlds are designed not only to expose, engage and open our eyes and minds, but also to evoke critical thinking and reflection on existing urban problems and urban structures of our present world. Stories and drawings are used to materialize those fictitious cities. The more convincing and detailed those cities appears, the more observations and analysis could be applied and further developed. And by doing so, readers are invited to start their own adventures in those "undiscovered" territories.
The development of a prototype Zone-Plate-Array Lithography (ZPAL) system
The research presented in this paper aims to build a Zone-Plate-Array Lithography (ZPAL) prototype tool that will demonstrate the high-resolution, parallel patterning capabilities of the architecture. The experiment will require the integration of micromechanical spatial light modulators with an existing zone-plate-array testbed lithography tool. The system development requires an efficient data-delivery system to promote throughput and a thoughtful optical channel to optimize the lithographic performance of zone-plates. Lithography results obtained from the prototype will be presented along with basic performance characteristics.
Uncovering the variability, regulatory roles and mutation rates of short tandem repeats
Over the past decade, the advent of next-generation DNA sequencing technologies has ushered in an exciting era of biological research. Through large-scale sequencing projects, scientists have begun to unveil the variability and function of millions of DNA mutations called single nucleotide polymorphisms. Despite this rapid growth in understanding, short tandem repeats (STRs), genomic elements consisting of a repeating pattern of 2-6 bases, have remained poorly understood. Mutating orders of magnitude more rapidly than most of the human genome, STRs have been identified as the causal variants in diseases such as Fragile X syndrome and Huntington's disease. However, in spite of their potentially profound biological consequences, STRs remain systematically understudied due to difficulties associated with obtaining accurate genotypes. To address this issue, we developed a series of bioinformatics approaches and applied them to population-scale whole-genome sequencing data sets. Using data from the 1000 Genomes Project, we performed the first genome-wide characterization of STR variability by analyzing over 700,000 loci in more than 1000 individuals. Next, we integrated these genotypes with expression data to assess the contribution of STRs to gene expression in humans, uncovering their substantial regulatory role. We then developed a state-of-the-art algorithm to genotype STRs, resulting in vastly improved accuracy and uncovering hundreds of replicable de novo mutations in a deeply sequenced trio. Lastly, we developed a novel approach to estimate mutation rates for STRs on the Y-chromosome (Y-STR), resulting in rates for hundreds of previously uncharacterized markers. Collectively, these analyses highlight the extreme variability of STRs and provide a framework for incorporating them into future studies.
Unobtrusive integration of magnetic generator systems into common footwear
A power generating system was designed to passively harness some of the kinetic energy available during walking. The system included a rotary arm extending down from the sole, which ultimately drove a pair of small electrical generators through a stepped-up gearbox. A one-way clutch mechanism was used to transmit torque to the gearbox. This allowed for additional spin following the initial impact of a step, also preventing lockup due to rotary inertia in the gears. The entire generator system was designed to fit in the heel of a standard running shoe, with the rotary arm compressing once during each heel strike. The final system produced a peak power of 1.61 Watts during the heel strike and an average power of 58.1 mW across the entire gait. To maximize power transfer, an ideal load was determined for the two DC generators connected in series. While the average power generated was below the desired 250 mW, initial calculations show this level can eventually be reached or exceeded with the addition of a flywheel to each generator shaft, or a spring to store more energy from the heel-strike.
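The "ideal load" mentioned above follows the usual maximum power transfer condition for a resistive source; it is stated here only as background, since the abstract does not say how the load was determined for the two series-connected generators:

$$ R_{\text{load}} = R_{\text{internal}}, \qquad P_{\text{max}} = \frac{V_{\text{oc}}^{2}}{4\, R_{\text{internal}}} $$

For two generators in series, the open-circuit voltages and internal resistances add before applying the condition.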
Approaches for identifying consumer preferences for the design of technology products : a case study of residential solar panels
This thesis investigates ways to obtain consumer preferences for technology products to help designers identify the key attributes that contribute to a product's market success. A case study of residential solar PV panels is conducted in the context of the California, USA market within the 2007-2011 time span. First, interviews are conducted with solar panel installers to gain a better understanding of the solar industry. Second, a revealed preference method is implemented using actual market data and technical specifications to extract preferences. The approach is explored with three machine learning methods: Artificial Neural Networks, Random Forest decision trees, and Gradient Boosted regression. Finally, a stated preference self-explicated survey is conducted, and the results of the two methods are compared. Three common critical attributes are identified from a pool of 34 technical attributes: power warranty, panel efficiency, and time on market. From the survey, additional non-technical attributes are identified: panel manufacturer's reputation, name recognition, and aesthetics. The work shows that a combination of revealed and stated preference methods may be valuable for identifying both technical and non-technical attributes to guide design priorities.
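A minimal sketch of the revealed-preference step described above, using one of the named methods (Random Forest) to rank attribute importance. The file name, columns, and target variable are hypothetical stand-ins, not the thesis' actual market data.

```python
# Illustrative revealed-preference sketch: fit a nonlinear model of a market
# outcome on technical attributes, then rank attribute importance.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

panels = pd.read_csv("ca_solar_installations.csv")   # hypothetical data file
X = panels[["power_warranty_yrs", "efficiency_pct", "months_on_market"]]
y = panels["installed_count"]                         # hypothetical proxy for market success

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```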
Institutional expansion, community relations, and the hospital next door
Hospitals play many roles in a city: alternately, they may be caretakers of the sick, economic engines, intellectual hubs, major employers, and neighbors. This last role has evolved greatly over the last 45 years. The relationship between hospitals and the communities in which they are located has been affected by constantly changing economic, political, and social factors. During the early days of urban renewal in the 1950s and early 1960s, large teaching hospitals in Boston experienced a surge of political and economic power that allowed them to expand with few constraints, often to the detriment of their residential neighbors. Today, the same hospitals must broker complex deals with their neighbors if they wish to expand, offering up a host of community benefits. The process by which the hospital-community power dynamic has evolved has been shaped by the mediating entity of the Boston Redevelopment Agency, which is in turn influenced by the Mayor's Office in Boston. Despite their many roles in the city, it is their sheer physical presence that drives hospitals' relationships with their neighbors. The health care and employment benefits they can provide are not major bargaining chips in disputes over expansion; the important considerations are the tangible elements of power - money and land. The primacy of physical presence as a relationship driver can be illustrated by the differences in the negotiation process that hospitals directly bordering residential communities and extending into them experience, as opposed to hospitals that are not directly on the residential fringe.
Optimizing inventory levels using financial, lifecycle and forecast variance data
Significant inventory write-offs have recently plagued ATI Technologies, a world leader in graphics and media processors. ATI's product-centric culture has long deterred attention from supply chain efficiency. Given that manufacturing lead time exceeds customer order lead time for its semiconductors, ATI relies heavily on its demand forecasting team to instigate supply chain activities. The PC business unit forecasting team translates market information into product-line forecasts and also sets finished goods inventory levels intended to offset demand uncertainty. Today's inventory decisions are made in response to customer escalations, often ignoring financial implications. To add necessary rigor when setting these inventory levels, this thesis presents a model using wafer and unit cost, profit margin, product lifecycle stage and historical forecast error to categorize products into inventory risk levels. The resultant risk levels become a critical input to monthly demand-supply meetings with marketing, operations and senior executives - the outcomes of which are wafer orders and assembly and test plans at the world's largest contract foundries and subcontractors. Finally, the 2006 acquisition of ATI by Advanced Micro Devices (AMD) offers unforeseen flexibility, scale and challenges to the outsourced semiconductor supply chain.
Lean principle application in the General Motors product development process with special emphasis on peer reviews
Global Automotive, a large US-based, global manufacturer of automobiles, has made significant gains in manufacturing competitiveness, in part through application of a lean manufacturing approach to high volume assembly. A similar approach applied to product development can result in significant improvements in product design throughput, speed, cost, design quality, and innovation. With major product programs taking in excess of 36 months and a billion dollars to complete, the potential impact of process improvements is substantial. This thesis examines elements of the Global Automotive product development process. Some general guiding principles for lean product development are also reviewed from the existing literature. Special attention is given to metrics for measuring product development performance at Global Automotive. The thesis focuses on the role of peer reviews in the development process. The analysis is performed using a work order data set for two automotive development programs. Scorecards from peer reviews and a survey of the component engineering community are also used to assess the effectiveness and current state of the peer review process. The study found evidence that high scores on peer reviews do not guarantee the absence of late changes; if anything, component groups with lower average peer review scores generated consistent levels of late-stage changes. The objective of peer reviews should clearly be to find as many problems as possible, and participants should be encouraged to deliver "low scoring" reviews. Keywords: Product Development, Lean, Peer Reviews, Design Defects.
Electronic applications of two-dimensional materials
Ubiquitous electronics will be a very important component of future electronics. However, today's approaches to large-area, low-cost, potentially ubiquitous electronic devices are dominated by amorphous silicon and organic semiconductors, both of which suffer from low mobility. Two-dimensional materials are good candidates for ubiquitous electronics because of their excellent properties such as transparency, flexibility, high mobility and low cost. This thesis focused on the development of the first devices and circuits based on transition metal dichalcogenides (TMDs), a family of two-dimensional semiconductors. The transport properties of exfoliated few-layer MoS2 flakes and chemical vapor deposition (CVD) grown single-layer, large-area MoS2 are systematically studied. Integrated devices and circuits based on large-scale single-layer MoS2 grown by CVD are demonstrated for the first time. The transistors fabricated on this material demonstrate excellent characteristics such as record mobility for CVD MoS2, ultra-high on/off current ratio, record current density and GHz RF performance. The demonstration of both digital and analogue circuits shows the remarkable capability of this single-molecular-layer-thick material for mixed-signal applications, offering scalable new materials that can combine silicon-like performance with the mechanical flexibility and integration versatility of organic semiconductors.
Design and implementation of an online laboratory for introductory digital systems
In this thesis, I designed and implemented an online, web-based laboratory system for the Introductory Digital Systems Laboratory course at MIT (6.111). The intent is to allow a student to access a 6.111 labkit, program it, and view the results without ever entering the physical lab room. The lab architecture consists primarily of two portions: a server and a client. The server programs the Xilinx FPGA on the labkit, and it controls the logic analyzer used to observe the labkit signals. The client is a Java applet that can send code to the server and retrieve logic analyzer data from the server. The user can view and manipulate the retrieved data through the client. The applet is embedded in a web page, along with a video stream that shows the labkit setup live to the user. The user interface is designed to be accessible from any browser, independent of platform.
Divalent metal nanoparticles
Metal nanoparticles hold promise for many scientific and technological applications, such as chemical and biological sensors, vehicles for drug delivery, and subdiffraction limit waveguides. To fabricate such devices, a method to position particles in specific locations relative to each other is necessary. Nanoparticles tend to spontaneously aggregate into ordered two- and three-dimensional assemblies, but achieving one-dimensional structures is less straightforward. Because of their symmetry, nanoparticles lack the ability to bond along specific directions. Thus, the technological potential of nanoparticles would be greatly enhanced by the introduction of a method to break the interaction symmetry of nanoparticles, thus inducing valency and directional interparticle interactions. When a nanoparticle is coated with a mixture of two different ligands, the ligands have been shown to phase-separate into ordered domains encircling or spiraling around the core. Topological constraints inherent in assembling two-dimensional vectors (e.g., ligands) onto a sphere (the core of the nanoparticle) dictate the necessary formation of two diametrically opposed defect points within the ligand shell. The molecules at these points are not optimally stabilized by intermolecular interactions and thus these sites are highly reactive. By functionalizing the polar singularities with a third type of molecule, we generate divalent nanoparticles with "chemical handles" that can be used to direct the assembly of the particles into chains. For example, taking inspiration from the well-known interfacial polymerization synthesis of nylon, we place carboxylic acid terminated molecules at the polar defect points and join the newly bifunctional nanoparticles into chains by reacting them with 1,6-diaminohexane through an interfacial reaction.
Boardroom network and corporate governance : when who you know contributes to what you know
In this study, I examine whether boardroom centrality has a causal effect on firm performance. If boardroom ties work as a conduit for information about market, industry, feedback on peers' experience, etc., then I expect central boards' corporate decisions to incorporate richer information and, as a result, become more efficient. Alternatively, if boardroom ties can put a firm at a disadvantage by disseminating proprietary information or conveying incorrect or misleading information, then I expect to find a deterioration in the central firms' performance. To isolate the effect of network centrality on the firm's performance, I use an instrumental variable approach based on director deaths at distant firms within the network. Contrary to the prevailing evidence in the literature, the results suggest that boardroom centrality deteriorates firm performance. To the best of my knowledge, this paper is the first to resolve the disagreement in the literature on the sign of the boardroom centrality effect on performance by using an exogenous setting that can rule out the selection and ability channels from the set of explanations for the results.
Lean transformation methodology and implementation in biopharmaceutical operations
Amgen's Operations division is responsible for the production, release and distribution of commercial and clinical products. Due to industry consolidation, impending competition and revenue impacts, Amgen is facing the need to rapidly improve the Operations division and align different manufacturing sites. In order to achieve these goals, the Operations Improvement group is leading an initiative to bring about a lean transformation of Amgen's operations. This thesis analyzes the initial operational excellence efforts underway within Amgen Operations. The analysis includes an overview of the process by which the continuous improvement methodology and strategy were constructed, the creation of a training curriculum and the initial implementation of the continuous improvement methodology at specific manufacturing sites. In addition, the thesis explores the environment in which this program operates and the cultural and business drivers that support and detract from the efforts. The following conclusions were developed as a result of the analysis of the lean transformation efforts at Amgen. First, company and industry specific nomenclature is essential to make lean principles contextually relevant for the biopharmaceutical industry. Additionally, relevant metrics are needed to facilitate multi-site alignment and drive the desired behavior. Finally, continuous improvement efforts can effectively leverage a science-based culture by applying it to a new business context.
Interventions for refugee integration in cities
In recent years, conflict and climate change around the world are not only displacing people at an unprecedented rate but also increasing the duration of their displacement. With over 25.4 million refugees globally, the highest number in history, countries are forced to change how they respond to this crisis. In most cases, housing refugees in temporary camps is not sustainable over the long term, and a majority of the global refugees end up living in urban areas. Since cities are starting to play an essential role in welcoming this new population, it is imperative for the planning field to understand how the built environment impacts refugee integration. Successful integration into a host society is not the sole responsibility of a refugee but rather a process that involves both the refugee and the host community. This thesis investigates factors that affect refugee integration and examines how they play out spatially on a local scale through a case study of the Roxbury neighborhood in Boston, Massachusetts. The research analysis and case study affirm the influence of place in the refugee experience of community and belonging. Just as displacement is a place-based trauma, refugee resettlement must be approached as a place-based intervention. This thesis highlights the role of planners by outlining the spatial implications of successful integration in addition to introducing a multidisciplinary approach that can empower refugees to not only successfully integrate but to have agency in their new homes.
Measurement and modeling of brain tissue and engineered polymer response to concentrated impact loading
Our brains are among the most mechanically compliant and structurally complex organs in our bodies. To predict how brain tissue deforms, and to protect it from deforming in ways that reduce our cognitive function, we must be able to measure, model, and ideally replicate brain tissue mechanics. While this is a grand challenge that many have sought to address, this need is acute when considering spatially localized deformation of brain tissue under high rates, such as in collisions that cause traumatic brain injury (TBI). This thesis sought to address this challenge at increasing levels of spatial and temporal complexity by employing dynamic contact mechanics as a tool to consider reduction of TBI. Strategies to reduce TBI include helmets designed to absorb impact energy, which are evaluated typically by simplified impact tests with engineered headforms equipped with brain tissue simulant materials and accelerometers.
Eyes Up : influencing social gaze through play
Autism can be a debilitating condition that affects a person's personal and social affairs throughout their lifetime. With 1 in 110 people diagnosed with an Autism Spectrum Disorder (ASD) [49], it is important that we develop assistive and learning technologies to help them achieve their potential. In this work I describe the development of a new technology-mediated therapeutic game, Frame It, and the subsequent use of Frame It in an intervention, called Eyes Up, with children diagnosed with autism. Eyes Up requires the player to attend to details of the human face in order to correctly construct puzzles of people's eyes and then assign an expression label to them. The intervention is intended as a play-centered activity with the goal of increasing attention to other people's face and eyes region and improving expression recognition abilities. Through the application of user-centered design principles and special considerations to our participants we have been able to develop an engaging game that sustains interest. Using an eye-tracking system in conjunction with specifically designed experiments, we have been able to test the system's ability to influence gaze behavior and expression recognition. Analysis of pre- and post-experimental measures reveals statistically significant increases in attention to the face and eyes and increases in expression recognition abilities.
Framework of non-intrusive load monitoring for shipboard environments
A Non-Intrusive Load Monitor (NILM) measures power at a central point in an electrical network in order to provide real-time energy management and equipment diagnostics. Results are presented from NILM systems installed aboard two US Coast Guard ships. The collected data is used for fault diagnostics and condition-based monitoring of mission-critical systems. A NILM system requires a complex software pipeline that captures and preprocesses data, accurately disaggregates load events from the aggregate power stream, analyzes the equipment for potential faults, and presents useful information to end-users in real-time. This thesis presents a framework for load identification, as well as an analytical and graphical platform that provides diagnostic information to operators in real-time about the health of electromechanical systems.
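A minimal sketch of one stage of such a pipeline, detecting load events as step changes in the aggregate power stream. The thresholds and window lengths are illustrative only; the shipboard NILM described above is considerably more sophisticated.

```python
# Toy NILM event detector: flag samples where the mean real power steps by
# more than a threshold between a "before" and "after" window. In practice,
# adjacent detections around the same step would be merged.
import numpy as np

def detect_events(power_w, threshold_w=100.0, window=5):
    """Return sample indices where mean power steps by more than threshold_w."""
    events = []
    for i in range(window, len(power_w) - window):
        before = np.mean(power_w[i - window:i])
        after = np.mean(power_w[i:i + window])
        if abs(after - before) > threshold_w:
            events.append(i)
    return events

if __name__ == "__main__":
    demo = np.concatenate([np.full(50, 200.0), np.full(50, 1400.0)])
    print(detect_events(demo + np.random.normal(0, 5, demo.size)))
```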
Mapping urban perception : how do we know where we are?
How do we remember urban space? How can we measure what is remembered? This thesis presents a new approach to study urban spatial perception in an efficient, automated, and scalable way. It explores the use of novel tools developed for online surveys and data visualization. Previous studies in human spatial perception have used methods such as face-to-face interviews, questionnaires, recognition tasks and surveys that ask subjects to draw sketch maps. Those conventional methods produced significant urban studies such as the one by Kevin Lynch (1960), but they are laborious, sensitive to the individuality of subjects, prone to examiners' biases and conducted with a limited number of subjects. Their results are also difficult to quantify. In contrast, the method developed here uses geo-tagged street views and a web-based visual survey. An online experiment conducted in this thesis recruited 394 participants in 20 days, who were asked to guess the locations of street views from a familiar neighborhood. Results are presented in the form of interactive visualizations. Analysis revealed that memory for the exact location of a place improves with degree of interaction and proximity to the center, rather than number of encounters; memory for one location may vary dramatically between different viewpoints. The results also suggest that the irregularity of urban structure does not prevent the forming of strong mental images. While this new method cannot completely replace face-to-face interviews, it demonstrates the possibility of using available technology to scale visual surveys to hundreds or even thousands of people and rapidly visualize the resulting data. It thus opens up new possibilities for large-scale, fine-grained studies in urban perception.
The evolution prospects of the post-OPA90 Alaskan Oil Trade
In response to the grounding of the Exxon Valdez on March 24, 1989, the United States Congress enacted the Oil Pollution Act of 1990, a series of regulations requiring technical and operational changes in tank vessels trading in U.S. waters with the intention of preventing future spills. Although the effects of OPA90 have been felt worldwide, vessels serving the Trans-Alaska Pipeline System have been particularly affected by the legislation. Trading between Valdez, Alaska and West Coast U.S. ports, the TAPS trade is one of the few routes actively plied by Jones Act vessels. The age and design of the vessels engaged in the trade will require that many be phased out in accordance with OPA90 regulations in the coming years. Fleet capacity accounting for mandatory vessel phase-outs is analyzed with respect to various crude production estimates for years 2000 to 2020. Potential design changes that may extend these vessels' trading lives are discussed. Shipyard capacity and factors affecting construction of ships in American yards are reported as well as the status of U.S. government programs designed to support the U.S.-flag fleet. Additionally, the potential impact that construction of ten 46,000 DWT product tankers may have on trade capacity is considered. The schedule at which these vessels must be delivered in order to meet Alaskan production estimates through 2039 is analyzed.
Investigating the trade-offs involved in augmenting a DC brushless motor with an active heat sink in order to improve performance.
This thesis seeks to establish solutions to the issue of electric motor heating and the problems it presents for the use of these motors in biomechatronic applications. As electric motors are used, their windings heat up and the resulting temperature limits torque. Larger motors may be used to obtain more torque, but this adds undesirable weight to the devices in which they are used. Cooling methods also exist, but do not necessarily consider suitability to bionic applications. This thesis therefore aims to improve the torque output of a given motor by effectively removing heat while minimizing the addition of mass. I hypothesize that the torque density of an EC-4pole 30 48 V Maxon DC brushless motor can be improved by augmenting it with an electronics fan and an annulus. Results showed that the housing-to-ambient thermal resistance of the motor was decreased by 68% from the experimentally found value of 11.5 K/W to a value of 4.03 K/W by using a 4.72 × 10⁻³ m³/s (10 cfm rated) electronics fan. The projected torque density of the motor was also found to be maximally improved from 0.382 N·m/kg to 0.393 N·m/kg. These results were obtained under the assumptions that the motor could be reasonably represented by its brushed counterpart and that parallel plate fluid dynamics closely approximates annular fluid dynamics. While more investigation is necessary to fully validate the results, they do show that there is potential for using simple methods to significantly improve the torque density of small electric motors. It is possible then that added mass can work to noticeably improve motor performance. There is therefore scope to improve the use of these motors in biomechatronic devices. Smaller, more efficient motors will decrease the weight of these devices and improve their overall efficiency. Bionic devices will thus be one step closer to better mimicking human capability.
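For reference, the quantities above are related through the steady-state two-node thermal model commonly used for electric motors; this is a simplification, and the thesis' own model may differ:

$$ R_{ha} = \frac{T_{\text{housing}} - T_{\text{ambient}}}{P_{\text{loss}}}, \qquad T_{\text{winding}} \approx T_{\text{ambient}} + \left(R_{wh} + R_{ha}\right) I^{2} R_{\text{winding}} $$

Lowering the housing-to-ambient resistance R_ha therefore raises the continuous current, and hence torque, that the motor can sustain before reaching its winding temperature limit.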
Design analysis of the four-bar Jaipur-Stanford prosthetic knee for developing countries
Amputees in developing countries face a challenging prospect. Without an adequate prosthesis, they face a lifetime of limited mobility and dependence. Unfortunately, as millions fall below the poverty line and as such do not have access to proper medical and prosthesis care, many must resign themselves to such a lifestyle. Bhagwan Mahaveer Viklang Sahayata Samiti (BMVSS) is attempting to change this. BMVSS is the world's leading prosthetics and mobility provider, serving over 20,000 new individuals per year - all free of charge - in 27 countries. Through a partnership with a Stanford design course, the Jaipur-Stanford Knee, a novel prosthetic knee incorporating a four-bar design, was born. This knee design has become widely popular amongst amputees and was named one of the top 50 best inventions in 2009 by Time Magazine. However, despite the popularity and widespread media coverage of the knee's development, there currently exists no available technical literature on the design. This research provides a kinematic model of this knee to compare to the dynamics of a natural gait along with a materials analysis to offer insight into design and manufacture improvements in future design iterations and concepts.
System design for a rapid response autonomous aerial surveillance vehicle
The MIT/Draper Technology Development Partnership Project was conceived as a collaborative design and development program between MIT and Draper Laboratory. The overall aims of the two year project were to strengthen ties between the two institutions, to provide students with an opportunity to develop a first-of-a-kind system, and to foster a sense of entrepreneurship in the students working on the project. This first design team consisted of a mix of Master of Engineering and Master of Science students, along with undergraduate research assistants. The team began its work by reviewing the needs of the nation and the capabilities possessed by MIT and Draper which could be leveraged to address those needs. Candidate projects were then developed, and several were further refined through brief market assessments. Based on these assessments, a final project was chosen. The selected project, the Wide Area Surveillance Projectile (WASP), called for the development of a small, unmanned aerial vehicle which could be launched from an artillery gun to provide a rapid-response, time-critical reconnaissance capability for small military units or selected civilian applications. This thesis reviews the first year of work completed on the project. A systems view is used throughout, describing the top-level trades which were made to develop a product which would meet all of the user's needs. Specific attention is given to the interactions between the various subsystems and how these interactions contributed to the design solution developed by the team. In addition to this chronological description of the project, management lessons learned from the author's experience as project manager are presented, along with recommended approaches for future projects of a similar nature. These lessons may also find applications in the broader realm of rapid-prototyping engineering projects, as well as future projects undertaken as part of the MIT/Draper Technology Development Partnership Project.
Techniques for the characterization of sedation due to opiate administration
The focus of this thesis is identifying human physiologic markers during opiate sedation for applications in general anesthesia and drug overdose. Under this central topic, three themes are developed: characterizing the neural signature associated with altered consciousness due to opiate administration, characterizing the diminished respiration and behavioral effects of sedation due to opiate administration, and correlating these features. This work led to the development of signal processing techniques using state-space autoregressive equations to model respiratory data. Additionally, this project required designing and conducting a clinical experiment at the Massachusetts General Hospital with the permission of the Partners Institutional Review Board and the guidance of the Anesthesia, Critical Care, and Pain Management department at the Massachusetts General Hospital. The data used in this investigation were collected in the operating rooms at the Massachusetts General Hospital with the help of anesthesiologists, surgeons, nursing staff, and clinical research coordinators.
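A generic form of the state-space autoregressive model referenced above, shown only to fix notation; the model order p and noise structure used in the thesis are not specified in the abstract:

$$ x_t = \sum_{i=1}^{p} a_i\, x_{t-i} + w_t, \qquad y_t = x_t + v_t, \qquad w_t \sim \mathcal{N}(0, \sigma_w^2), \quad v_t \sim \mathcal{N}(0, \sigma_v^2) $$

where y_t is the observed respiratory signal and x_t is the underlying state being tracked.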
Cycle-time analysis and improvement using lean methods within a retail distribution center
Fulfillment cycle-time, or the time it takes to pick an item from inventory, pack it into a box, and load it on a truck for shipment, is one of the main inputs in determining how quickly an online retailer can promise customer order delivery. The faster the fulfillment cycle-time, the later an order can be received and still make the appropriate truck for guaranteed, on-time arrival (e.g. same-day, next day, 3-5 business days). Thus, the customer experience is improved, as they are allowed to place an order later and still receive their purchases quickly. To take advantage of this, the retailer must first be able to measure cycle-time appropriately within their facility. This thesis examines the outbound fulfillment process within an under-performing Amazon fulfillment center (Site A) with the purpose of fully characterizing and measuring fulfillment cycle-time. Comparisons are drawn with like Amazon facilities, and a lean operations approach is taken to identify and eliminate major forms of waste in an effort to shorten cycle-time. The baseline analysis within this thesis provides evidence that current-state cycle-time at Site A is in fact 15% faster than originally thought. However, process improvements were still needed to bring cycle-time in line with the network standard. The remainder of the work within this thesis focuses on these process improvements and develops the following recommendations: 1. Standardize the pick process with a move closer to single piece flow. 2. Reduce and control queue length prior to the pack process in order to reduce non-value-added wait time. 3. Reduce batch size for critical items that must move through the facility the fastest. 4. Rearrange process steps to allow completion in parallel rather than series. The method for evaluating cycle-time and the implementation of lean solutions introduced throughout this thesis are useful as a template for similar analyses throughout the Amazon FC network, as well as within other warehousing and online retailer operations.
Book illumination and architectural decoration : the Mausoleum of Uljaytu in Sultaniyya
This thesis examines the conventions of two-dimensional articulation in architecture and their relationship to book illumination in early fourteenth-century Iran. By examining the illuminations in a series of imperial Qurans copied in the first quarter of the fourteenth century and comparing them to the architectural decoration of contemporaneous buildings in Ilkhanid Iran, the thesis proposes that it is the rigor of geometric elaboration in two-dimensional planes that makes such a comparison across media plausible. The taste for increasingly complex two-dimensional geometric extrapolations and the creation of layered surfaces, such as those exhibited in the decorative designs of the Mausoleum of Uljaytu in Sultaniyya, Iran, ultimately engenders a perception of architecture that alludes visually to a rendition of two-dimensional space common to both painting and architecture.
Low-rise high density urban housing in Korea
The idea of low-rise high-density urban housing is based on two fundamental objectives: 1) to provide higher density by intensifying land use as urban growth escalates at an unprecedented rate, and 2) to reconsider the essential qualities of the house - a house with a garden, light and air. Modern high-rise apartments provide greater density and improved living conditions in terms of proper sanitation, electricity, and open space, but they lack individuality and promote a high degree of anonymity, leading to limited social contact between neighbors. They dissociate the house from the ground and create ambiguous open space between buildings. Single-family detached houses provide individuality and open space, but detachment is not only meaningless but highly inefficient in terms of land use. The courtyard house provides an alternative solution by combining the advantages of the individual house and high-density housing. Its introverted nature allows dense clustering while maintaining a private open space. The design takes this traditionally horizontal aggregation of dwellings one step further to increase its potential density. It proposes a vertical courtyard house while maintaining access to light and air, visual and acoustical privacy, efficiency of construction, and a revitalization of street life and open space.
Functional analyses of mitotic microtubule-binding complexes
Mitosis is the process by which a single cell divides to form two identical daughter cells. Each daughter cell must inherit a full complement of the genetic material. Thus, a critical aspect of mitosis is the faithful segregation of each duplicated chromosome. Chromosome segregation is achieved through the attachment of a chromosome-localized macromolecular complex, termed the kinetochore, to microtubules. Microtubules are dynamic polymers comprised of tubulin heterodimers. The successful execution of mitosis additionally depends on the organization of the microtubules into a bipolar array, termed the mitotic spindle. The depolymerization of kinetochore-bound microtubules generates the force required to properly segregate the chromosomes. The work in this thesis analyzes the molecular basis for the function and activity of two key players in microtubule function. First, I investigate the mechanisms by which the Ska1 complex facilitates the continued association of the kinetochore with microtubules, even as the microtubules grow and shrink. I show that Ska1 uses multiple surfaces to interact with diverse tubulin substrates, and each of these surfaces is required for microtubule tip tracking and optimal mitotic progression. Second, I analyze cytoplasmic dynein, a microtubule-based motor that is critically required to maintain spindle bipolarity and execute numerous other cellular processes throughout the cell cycle. The execution of these diverse functions of dynein relies on precise temporal and spatial regulation of dynein activity. Dynein regulation is accomplished in part by the association of adaptor proteins with the dynein complex, including Nde1. Here, I show that Nde1 utilizes distinct intermolecular interactions to regulate different dynein functions. I also identify a previously uncharacterized interaction between Nde1 and the 26S proteasome. Finally, I explore a potential role for post-translational modifications in regulating dynein function. I find that the localization of dynein during mitosis is rapidly altered following the addition of small molecule inhibitors of ubiquitination enzymes. Together, these findings provide new insights into the function and regulation of diverse components of the mitotic machinery.
The impact of public housing interventions on a local context
Housing provisions by public agencies for low-income people present a dismal picture in most developing countries. The reasons are typically scarce resources and adoption of high standards, which result in a limited supply of complete dwelling units, inappropriate in their use of resources and, at the same time, not responsive to the occupants' needs and priorities. This study puts forward that, in the context of India, public interventions (which include governmental and other agencies) have not only had limited success in providing housing, but have tended to neglect the impacts the housing programs and projects have on a local area. The study examines the policies and programs at both the overall and local level, using case-studies to illustrate different types of public interventions generally in India, and specifically within East Calcutta. The study is outlined in four parts. The first part deals with describing the overall situation of housing policies in India and development policies of East Calcutta. The second part looks at the local context in detail, documenting its characteristics, and describing through three case-studies the impact of these interventions on the settlements. It illustrates the major impacts that public interventions have had on settlement formation, user-involvement in dwelling provision and security of tenure. The third part elaborates the reasons for wanting to know about the impacts and a process for documenting them. The emphasis is on understanding hard-to-measure qualitative impacts rather than quantifiable ones. The last part summarizes the range of issues and impacts and presents the findings about the hypotheses that were put forward initially. The study concludes that public interventions can play an important role in housing processes if they are designed to make use of the local context as an active input in areas of policy planning and project implementation.
Effect of auditory peripheral displays on unmanned aerial vehicle operator performance
With advanced autonomy, Unmanned Aerial Vehicle (UAV) operations will likely be conducted by single operators controlling multiple UAVs. As operator attention is divided across multiple supervisory tasks, there is a need to support the operator's awareness of the state of the tasks for safe and effective task management. This research explores enhancing audio cues of UAV interfaces for this futuristic control of multiple UAVs by a single operator. This thesis specifically assesses the value of continuous and discrete audio cues as indicators of course-deviations or late-arrivals to targets for UAV missions with single and multiple UAVs. In particular, this thesis addresses two questions: (1) when compared with discrete audio, does continuous audio better aid human supervision of UAV operations, and (2) is the effectiveness of the discrete or continuous audio support dependent on operator workload? An experiment was carried out on the Multiple Autonomous Unmanned Vehicle Experiment (MAUVE) test bed with 44 military participants. Specifically, two continuous audio alerts were mapped to two human supervisory tasks within MAUVE. These continuous alerts were tested against single beep discrete alerts. The results show that the use of the continuous audio alerts enhances a single operator's performance in monitoring single and multiple, semi-autonomous vehicles. The results also emphasize the necessity to properly integrate the continuous audio with other auditory alarms and visual representations in a display, as it is possible for discrete audio alerts to be masked by continuous audio, leaving operators reliant on the visual aspects of the display.
Vacant land types, patterns, and strategies in post-Katrina New Orleans
There were approximately 17,000 vacant lots in New Orleans in 2012, amounting to over 11 percent of total parcels in the city. Many of these lots have become vacant since Hurricane Katrina hit the Gulf Coast in 2005, but many were already empty. The population in parts of the older core of the city significantly declined from World War II until 2000. The migration of people into the recently drained low-lying subdivisions both within and outside of the city limits led to disinvestment and high vacancy rates in central neighborhoods of the city. This thesis seeks to define the current physical landscape of vacancy in New Orleans, within the context of these two historic narratives, Katrina and suburbanization before the storm, in order to appropriately target policy strategies for the reuse of vacant lots. This thesis uses images of vacant lots throughout the city, collected by the author, to define spatial types and conditions common to vacant land in New Orleans. A rigorous, data-driven mapping exercise explores patterns of vacancy in relation to physical and socioeconomic measures. This analysis supports the definition of three neighborhood types in which vacant land should be treated differently. These three types are based on pre-Katrina vacancy and post-Katrina flood depths, and consist of: 1) areas with significant pre-Katrina vacant land and little flooding, 2) areas with little pre-Katrina vacant land and high flood levels, and 3) areas with both significant pre-Katrina vacant land and high flood levels. The findings of this research indicate the need to revisit the physical footprint of New Orleans, with an emphasis on how the city should target its limited resources in the future to maximize both social justice and environmental justice imperatives, as well as mitigate the negative impacts of future disasters.
Across the great divide : chimeras and species boundaries
We have always been fascinated by borderline creatures. Chimeras, hybrids of multiple animals-and sometimes humans-appear repeatedly in mythology across cultures from ancient times to the present. Since the early 1980s, scientists have been creating cross-species chimeras, first combining mouse species that could not interbreed naturally, then moving on to create chimeras from even more distantly related animals such as sheep and goats. Scientists use chimeras to study fundamental processes of life such as pregnancy, fetal development, and the progress of disease. Chimeras allow scientists to perform experiments that would otherwise be impossible. Ancient chimera myths played on our anxieties about the boundary between man and animal. Interspecies chimeras strike the same chords of disgust and fear in some people as these ancient mythical chimeras did. This paper examines the science of chimeras and biological borderlines and the social implications of creatures that challenge accepted and comfortable ideas about the divisibility of the animal and human worlds. Can human-animal chimeras be made? Activists Stuart Newman and Jeremy Rifkin have filed a patent application for human-animal chimeras, such as the humanzee, to protest patents on all life forms. Newman and Rifkin believe chimeras are emblematic of abuses of biotechnology and are on a slippery slope to human cloning and elimination of the distinction between natural and manufactured things. They are not alone in believing scientists should be more concerned about the ethical implications of their work. However, a majority of scientists, bioethicists, and scholars find Newman and Rifkin's viewpoint extreme. The creation of chimeras between species-groups of animals that
Metamers in memory : predicting pairwise image confusions with deep learning
Previous experiments have examined what causes images to be remembered or forgotten. In these experiments, participants sometimes create false positives when identifying images they have seen before, but the precise cause of these false positives has remained unclear. We examine confusions between individual images as a possible cause of these false positives. We first introduce a new experimental task for measuring the rates at which participants confuse one image for another and show that the images prone to false positives are also ones that people tend to confuse. Second, we show that there is a correlation between how often people confuse pairs of images and how similar they find those pairs. Finally, we train a Siamese neural network to predict confusions between pairs of images. By studying the mechanisms behind the failures of memory, we hope to increase our understanding of memory as a whole and move closer to a computational model of memory.
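A hedged sketch of the Siamese-network idea described above: two images pass through a shared encoder, and the distance between their embeddings is mapped to a confusion probability. The architecture, layer sizes, and training setup here are illustrative only, not the network trained in the thesis.

```python
# Toy Siamese network for pairwise image confusion prediction (PyTorch).
import torch
import torch.nn as nn

class SiameseConfusionNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared encoder applied to both images in a pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Map the embedding difference to a confusion probability.
        self.head = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

    def forward(self, img_a, img_b):
        za, zb = self.encoder(img_a), self.encoder(img_b)
        return self.head(torch.abs(za - zb))

if __name__ == "__main__":
    net = SiameseConfusionNet()
    a, b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
    print(net(a, b).shape)  # torch.Size([4, 1])
```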