Reputational entrepreneurship and the valuation of scientific achievement
Using citations as a measure of valuation and death as a shock that affects efforts to "sell" scientific work but not the quality of the work itself, we estimate the effect of "reputational entrepreneurship" on the valuation of life scientists' research. Insofar as reputational entrepreneurship is impactful, it is unclear whether the most effective reputational entrepreneurs are those selling their own work (the "salesman") or those promoting the work of others (the "sales force"). While the salesman has more incentive to promote her work, the sales force is larger and likely to be seen as more credible. We find that by commemorating the death of a scientist, the sales force boosts the valuation of the deceased's work relative to what the salesman could have done had she remained alive. This suggests that while science seeks to divorce the researcher's identity from their work, scientists' identities nonetheless play an important role in determining scientific valuations.
An information-theoretic approach to estimating risk premia
Evaluation of linear factor models in asset pricing requires estimation of two unknown quantities: the factor loadings and the factor risk premia. Using relative entropy minimization, this paper estimates factor risk premia with only no-arbitrage economic assumptions and without needing to estimate the factor loadings. The method proposed here is particularly useful when the factor model suffers from omitted variable bias, rendering classic Fama-MacBeth/GMM estimation infeasible. Asymptotics are derived, and simulation exercises show that the accuracy of the method is comparable to, and frequently higher than, that of leading techniques, even those designed explicitly to deal with omitted variables. Empirically, we find estimates of risk premia that are closer to those predicted by financial economic theory than are estimates from classical estimation techniques. For example, we find that the risk premia on size, book-to-market, and momentum sorted portfolios are very close to the observed average excess returns of these portfolios. An exciting application of our methodology is to performance evaluation for active fund managers. We show that we are able to estimate a manager's "alpha" without specifying the manager's factor exposures.
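To make the mechanics concrete, the sketch below applies relative-entropy (exponential-tilting) estimation to synthetic data: tilt the empirical distribution so the test assets are priced (zero mean excess return under the tilted measure), then read a factor's premium off the gap between physical and tilted means. Note that no factor loadings are estimated. This is a stylized illustration, not the thesis's exact estimator or its asymptotics.

```python
import numpy as np
from scipy.optimize import minimize

# Stylized relative-entropy (exponential tilting) sketch on synthetic data;
# R: excess returns of N test assets, f: a candidate factor (both fake).
rng = np.random.default_rng(0)
T, N = 600, 5
R = rng.normal(0.005, 0.05, (T, N))
f = R @ rng.uniform(0.1, 0.3, N)

dual = lambda lam: np.mean(np.exp(R @ lam))   # dual of min KL s.t. E_pi[R] = 0
lam = minimize(dual, np.zeros(N)).x
pi = np.exp(R @ lam)
pi /= pi.sum()                                # tilted probabilities over dates

risk_premium = f.mean() - pi @ f              # physical mean minus tilted mean
print(risk_premium)
```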
Commodity market modeling and physical trading strategies
Investment and operational decisions involving commodities are made based on the forward prices of these commodities. These prices are volatile, and a model of their evolution must correctly account for their volatility and correlation term structure. A two-factor model of the forward curve is proposed and calibrated to the crude oil, shipping, natural gas, and heating oil markets. The theoretical properties of this model are explored, with a focus on its decomposition into independent factors affecting the level and slope of the forward curve. The two-factor model is then applied to two problems involving commodity prices. An approximate analytical expression for the prices of Asian options is derived and shown to explain the market prices of shipping options. The floating storage trade, which appeared in the oil market in late 2008, is presented as an optimal stopping problem. Using the two-factor model of the forward curve, the value of storing crude oil is derived and analyzed historically. The analytical framework for physical commodity trading that is developed allows for the calculation of expected profits, the risks involved, and exposure to the major risk factors. This makes it possible for market participants to analyze such physical trades in advance, creates a decision rule for when to sell the cargo, and allows them to hedge their exposure to the forward curve correctly.
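For reference, one common two-factor forward-curve parameterization consistent with the level/slope decomposition described above is the following; the abstract does not give the exact specification, so the volatility functions shown are illustrative assumptions.

```latex
\frac{dF(t,T)}{F(t,T)} \;=\; \sigma_1\, e^{-\kappa (T-t)}\, dW_1(t) \;+\; \sigma_2\, dW_2(t),
\qquad dW_1\, dW_2 = \rho\, dt
```

The exponentially damped factor mainly moves the short end of the curve (slope), while the second factor shifts the entire curve (level).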
Role of the precentral cortex in adapting behavior to different mechanical environments
We routinely produce movements under different mechanical contexts. All interactions with the physical environment, such as swinging a hammer or lifting a carton of milk, alter the forces experienced during movement. With repeated experience, sensorimotor maps are adapted to maintain a high level of movement performance regardless of the mechanical environment. This dissertation explored the contribution of the precentral cortex to this process of motor adaptation. In the first experiment, we recorded precentral neural activity in rhesus monkeys that were trained to perform visually-cued reaching movements while holding on to a robotic manipulandum capable of changing the forces experienced during the task. Preparation and control of the reaching movements were correlated with single cell activity throughout the precentral cortex, including the primary motor cortex and five different premotor areas. Precentral field potential activity was also modulated during the reaching behavior, particularly in the beta and high gamma frequency bands. When novel forces were introduced, single cell activity changed in a manner that specifically compensated for the applied forces and mirrored the time course of behavioral adaptation.
Dissecting the gene-regulatory circuitry of disease-associated genetic variants
Disease-associated nucleotides lie primarily in non-coding regions, increasing the urgency of understanding how gene-regulatory circuitry impacts human disease. Here, we use the increasing availability of functional genomics datasets, and of models elucidating how regulatory proteins control genes, to evaluate the impact of genetic variants on the activity of diverse regulators. First, we generate a comprehensive compendium of predicted binding intensities across the entire genome for over 500 transcription factors. Second, we create a novel dataset that captures how these binding intensities change in the context of disease datasets. Third, we develop a statistical framework to integrate these two datasets using dimensionality reduction, latent cluster discovery, and topic modeling. We use these techniques to show that regulatory proteins with analogous biological functions share similar global changes in binding due to genome-wide genetic variation. We also use our framework to discover a latent set of topics behind all genomic locations in chromosome 1, to link the locations in each of the topic clusters with a class of related diseases, and to show that relevant biological processes are statistically enriched in the genomic locations most related to each cluster.
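As a sketch of the topic-modeling step, the snippet below fits latent topics to a synthetic genomic-window-by-transcription-factor matrix of binding intensities using scikit-learn's LDA; the dimensions, counts, and choice of 20 topics are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows: genomic windows; columns: transcription factors; entries: discretized
# predicted binding intensities. All data here is synthetic.
rng = np.random.default_rng(0)
X = rng.poisson(2, size=(1000, 500))            # 1000 windows x 500 TFs
lda = LatentDirichletAllocation(n_components=20, random_state=0)
window_topics = lda.fit_transform(X)            # window -> topic mixture
top_tfs = np.argsort(lda.components_, axis=1)[:, -10:]  # top 10 TFs per topic
print(window_topics.shape, top_tfs.shape)
```

Windows sharing a dominant topic can then be tested for enrichment of disease classes and biological processes, mirroring the analysis described.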
Honduras wastewater treatment : chemically enhanced primary treatment and sustainable secondary treatment technologies for use with Imhoff tanks
(cont.) However, it is doubtful that the costs associated with the dosages required to achieve these removals are sustainable for communities such as Las Vegas. To address these deficiencies, further sustainable practices for optimizing the Imhoff tanks, as well as designs for both pre-treatment and secondary treatment options appropriate for use in Honduras, were developed. The recommended system allows regulatory effluent levels to be achieved while maintaining low annual operating costs for the system.
Enabling of e-manufacturing by utilizing industrial IT technology
The flow and coordination of information across an enterprise is handled through complex networks of manual and automated processes. Forty years ago, the proliferation of computers spawned a revolution in automating many functional silos within a business via Material Requirements Planning applications. These systems evolved over time into Enterprise Resource Planning (ERP) solutions as more functionalities were included in the scope of their planning modules. Only four years ago, the availability of high bandwidth Internet access at the corporate level also started revolutions beyond company walls, with Supply Chain Management and Customer Relationship Management applications. Companies have recently invested heavily in these Business-to-Business (B2B) and Business-to-Customer (B2C) solutions. However, electronic commerce, or "e-Commerce", has thus far been unable to achieve the "Shop Floor to Top Floor", "Sensor to Boardroom", or "Factory Floor to Executive Door" transparency of data it was intended to provide. The reason for this failure is that these applications typically lack direct links to real-time status information from manufacturing operations. This thesis attempts to bridge the gap between enterprise-wide applications and the vast amount of data trapped in the controls and machinery on the manufacturing floor. The vision to integrate these pieces is referred to as electronic manufacturing, or more commonly "e-Manufacturing". This newly emerging e-Manufacturing market is expected to offer rapid growth for companies that can move fast enough to capture a sizeable share. While ERP vendors appear best positioned to push from the "top-down" into this space, this thesis demonstrates that the control vendors with a "bottom-up" strategy may prove to be more successful. The developments in this thesis are built upon ABB's Industrial IT technology. Given Industrial IT's ability to quickly integrate with a variety of data sources in real time, e-Manufacturing-related feasibility studies were conducted in four of ABB's facilities. The thesis also suggests strategies for implementing these kinds of solutions successfully.
PML for the Navier-Stokes equations
The Perfectly Matched Layer Method (PML) has found widespread application as a high-accuracy, non-reflecting boundary treatment in many wave propagation simulations. However, in the area of computational fluid dynamics, its application has been mostly limited to the linearized Euler equations. Attempts to apply PML to the nonlinear Euler equations have found a tendency for the method to go unstable. Even so, in light of the method's computational efficiency and high accuracy, finding a robust and stable implementation is highly desirable. Here, the method is extended to the Navier-Stokes equations, and is implemented with a high-order discontinuous Galerkin finite element method (DGFEM). The weaknesses and strengths of the method are investigated, and its performance is assessed when applied to complex flows; in particular, a viscous cavity flow is investigated. Stabilizing adjustments to the method are made, and future work is indicated for increased utility and flexibility of the method.
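For orientation, the mechanism common to PML constructions (independent of the Navier-Stokes extension developed here) is a complex stretching of the coordinate normal to the layer in the frequency domain:

```latex
x \;\longrightarrow\; \tilde{x} \;=\; x \;+\; \frac{1}{i\omega}\int_0^{x}\sigma(s)\,ds, \qquad \sigma \ge 0
```

A rightward-traveling wave exp(i(ωt − kx)) then decays like exp(−(k/ω)∫σ) inside the layer while, in the continuous linear setting, no reflection is generated at the interface; the instabilities mentioned above arise when this construction meets nonlinearity and discretization.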
Maximizing the benefits of mass transit stations : amenities, services, and the improvement of urban space within spaces
Little attention has been paid to the quality of the spaces within rapid mass transit stations in the United States, and to their importance as places in and of themselves. For many city dwellers who rely on rapid transit service as their primary mode of travel, descending into and ascending from transit stations is an integral part of daily life and of their urban experience. Beyond being simply a piece of infrastructure offering mobility throughout a city, transit stations are an important part of the daily morning and evening rituals for many transit riders in cities with such rapid transit systems. Given their importance, it is surprising how underutilized station interiors are, and how poorly stations reveal what lies within their walls. The purpose of this thesis is to examine how ancillary uses affect the station environment; how non-elevated mass rapid transit stations within the Massachusetts Bay Transportation Authority (MBTA) system are being improved through ancillary uses; which uses are particularly beneficial to transit authorities and riders alike, as well as which uses require additional operations considerations; and to make suggestions as to how to further improve the station environments through the continued use of ancillary uses.
Corporate decision analysis : an engineering approach
We explore corporate decisions and their solutions under uncertainty using engineering methods. Corporate decisions tend to be complex; they are interdisciplinary and defy programmable solutions. To address these challenges, we take an engineering approach. Our proposition is that, as in an engineering system, corporate problems and their potential solutions deal with the behavior of systems. Since systems can be studied with experiments, we use Design of Experiments (DOE) to understand the behavior of the systems within which decisions are made and to estimate the consequences of candidate decisions as scenarios. The experiments are a systematically constructed class of gedanken experiments comparable to "what if" studies, but organized to span the entire space of controllable and uncontrollable options. In any experiment, the quality of data is important. Grounded in the work of scholars, we develop a debiasing process for eliciting data. And consistent with our engineering approach, we treat the composite consisting of the organization, its knowledge, databases, and formal and informal procedures as a measurement system. We then use Gage theory from Measurement Systems Analysis (MSA) to analyze the quality of the measuring composite.
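A minimal sketch of how a full-factorial design spans "the entire space of controllable and uncontrollable options"; the factor names and levels below are invented for illustration.

```python
from itertools import product

# Controllable decision levers crossed with uncontrollable scenario factors
# (all names and levels are made up).
controllable = {"price": ["low", "high"], "capacity": ["keep", "expand"]}
uncontrollable = {"demand": ["weak", "strong"], "competitor": ["passive", "aggressive"]}
factors = {**controllable, **uncontrollable}

# Each run is one gedanken experiment: a complete assignment of levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs), runs[0])   # 16 scenario "experiments" to evaluate
```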
Lean software engineering system for the DoD
Quality software engineering is crucial for the Department of Defense. The ability to engineer software that meets cost, schedule, and technical goals is a continuous challenge for both the commercial and government sectors. This thesis presents an engineering model based on lean principles. The lean principles provide a foundation for a system that is based on value, communication, teamwork, efficient use of resources, elimination of waste, and continuous process improvement. This system is flexible and can be tailored to meet the needs of projects of varying size and complexity. The model is intended to serve as a template for organizations to evolve their software engineering systems to meet the needs of their customers.
Development of magnetic induction machines for micro turbo machinery
This thesis presents the nonlinear analysis, design, fabrication, and testing of an axial-gap magnetic induction micro machine, which is a two-phase planar motor in which the rotor is suspended above the stator via mechanical springs, or tethers. The micro motor is fabricated from thick layers of electroplated NiFe and copper by our collaborators at the Georgia Institute of Technology. The rotor and the stator cores are each 4 mm in diameter, and the entire motor is about 2 mm thick. During fabrication, SU-8 epoxy is used as a structural mold material for the electroplated cores. The tethers are designed to be compliant in the azimuthal direction, while preventing axial deflections and maintaining a constant air gap. This enables accurate measurements of deflections within the rotor plane via a computer microvision system. The small scale of the magnetic induction micro machine, in conjunction with the good thermal contact between its electroplated stator layers, ensures an isothermal device which can be cooled very effectively. Current densities over 10⁹ A/m² simultaneously through each phase are repeatedly achieved during experiments; this density is over two orders of magnitude larger than what can be achieved in conventional macro-scale machines. More than 5 µN·m of torque is obtained for an air gap of about 5 µm, making this micro motor the highest torque density micro-scale magnetic machine to date. About 0.3 µN·m for the large air gap of 70 µm is also achieved in systematic tests that reveal the influence of strong eddy currents and associated nonlinear saturation within the micro motor.
Computational analysis, design, and experimental validation of antibody binding affinity improvements beyond in vivo maturation
This thesis presents novel methods for the analysis and design of high-affinity protein interactions using a combination of high-resolution structural data and physics-based molecular models. First, computational analysis was used to investigate the molecular basis for the more than 1000-fold affinity improvement of the fluorescein-binding antibody variant 4M5.3, engineered previously from the antibody 4-4-20 using directed evolution. Electrostatic calculations revealed mechanistic hypotheses for the role of four mutations in a portion of the improvement, subsequently validated by separate biochemical experiments. Next, methods were developed to computationally redesign protein interactions in order to rationally improve binding affinity. In the anti-lysozyme model antibody D1.3, modest binding improvements were achieved, with the results indicating potentially increased success using predictions that emphasize electrostatics, as well as the need to address the over-prediction of large amino acids. New methods, taking advantage of the computed electrostatics of binding, yielded robust and significant improvements for both model and therapeutic antibodies.
Examining the relationship between housing prices and the Atlanta BeltLine
A revival in linear park development has brought new open space to a growing number of communities previously characterized by low-income populations, obsolete infrastructure, and difficulties in attracting outside investment. This thesis examines the relationship between linear park development and escalation in property values using the case of the Atlanta BeltLine. Employing data from Fulton County in Georgia, I construct a linear regression model with a difference-in-difference specification to examine these effects at the point of park completion. I find notable property value appreciation effects due to the BeltLine's development, and seek to place these findings in the context of larger conversations about equitable development and open space. Considering the history of the BeltLine's development, I examine ways in which Scott Campbell's conception of equity planning has been realized in Atlanta and recommend ways in which local, state, and federal government may improve equitable development planning efforts in conjunction with future open space projects.
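For readers unfamiliar with the specification, a minimal difference-in-differences regression of this shape is sketched below on synthetic data; the variable names, buffer definition, and planted effect size are illustrative assumptions, not the thesis's actual Fulton County dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic sales: "treated" parcels near the BeltLine, "post" sales after
# segment completion. The 0.10 planted effect is arbitrary.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "log_price": rng.normal(12, 0.5, n),
    "near_beltline": rng.integers(0, 2, n),   # within an assumed buffer
    "post": rng.integers(0, 2, n),            # sold after completion
})
df["log_price"] += 0.10 * df["near_beltline"] * df["post"]

m = smf.ols("log_price ~ near_beltline * post", data=df).fit()
print(m.params["near_beltline:post"])         # the DID appreciation estimate
```

The interaction coefficient is the difference-in-differences estimate: the extra price appreciation of near-park parcels after completion, net of citywide trends.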
Geochemistry of slow-growing corals : reconstructing sea surface temperature, salinity and the North Atlantic Oscillation
A 225-year-old coral from the south shore of Bermuda (64°W, 32°N) provides a record of decadal-to-centennial scale climate variability. The coral was collected live, and sub-annual density bands seen in x-radiographs delineate cold and warm seasons, allowing for precise dating. Coral skeletons incorporate strontium (Sr) and calcium (Ca) in relative proportions inversely related to the sea surface temperature (SST) in which the skeleton is secreted. The δ¹⁸O of the coral skeleton changes based on both temperature and the δ¹⁸O of sea water (δ¹⁸Ow), and δ¹⁸Ow is proportional to sea surface salinity (SSS). Understanding long-term climate variability requires the reconstruction of key climate parameters, such as SST and salinity, in records extending beyond the relatively short instrumental period. The high accretion rates, longevity, and skeletal growth bands found in coral skeletons make them an ideal resource for well-dated, seasonal climate reconstructions. Growing between 2 and 6 mm/year and reaching more than 1 m in length, slow-growing corals provide multi-century records from one colony. Additionally, unlike the fast-growing (10-20 mm/year) species Porites, slow-growing species are generally found in both tropical and sub-tropical locations, greatly expanding the geographical range of these records. A high resolution record (HRR, ~11 samples per year) was drilled for the entire length of the coral record (218 years). Samples were split and Sr/Ca, δ¹⁸O, and δ¹³C were measured for each sample. Sr/Ca was used to reconstruct wintertime and mean-annual SST. Oxygen isotopic measurements were used to determine directional salinity changes, in conjunction with Sr/Ca-based SST reconstructions.
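The Sr/Ca thermometer invoked above is conventionally a linear calibration; the constants a and b are colony- and species-specific and are not given in the abstract:

```latex
\mathrm{Sr/Ca} \;=\; a \;+\; b\cdot \mathrm{SST}, \qquad b < 0
```

With SST from Sr/Ca in hand, the temperature component of skeletal δ¹⁸O can be removed, leaving the δ¹⁸Ow (salinity-related) signal.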
Understanding unemployment and local hiring in Lawrence, Massachusetts : a report for the City of Lawrence
The purpose of this project is to assess the state of employment in Lawrence, Massachusetts in an effort to understand why the city has consistently struggled with an unemployment rate that is double the state average. First, we evaluate employers' workforce demand and the supply of potential workers among Lawrence residents. We then test the efficacy of City incentives when it comes to generating local employment. Thus, we look at how new employers that take advantage of City incentives - such as tax-increment financing - fare when it comes to local hiring. We identify three major development projects and determine which local benefits they were awarded, how many jobs they promised to create and retain, and what the businesses actually accomplished in terms of job growth. Finally, we recommend next steps that the local government can take in order to raise the employability of Lawrence residents and connect them with jobs that are in high demand locally.
Simulating CONWIP and CONWIP-BtO production strategies
When ramping up production volumes, hardware startups are required to identify and tackle obstacles in multiple areas of business including finance, sales, engineering, manufacturing, and servicing. NVBOTS, a Boston-based 3D printer manufacturing startup, is going through a similar phase of production ramp-up of its printers. This thesis documents the process that the MIT M.Eng. team went through to identify such obstacles at NVBOTS. This process provides insights into what other startups in similar positions may explore. The topic of manufacturing systems was identified as a potential area of improvement as NVBOTS ramps up its production. Discrete event manufacturing simulation models are developed to evaluate two base production strategies -- CONWIP and a CONWIP-BtO hybrid. This thesis details the modeling mechanisms used to develop the simulation models. Performance trade-offs that exist for these two production strategies with respect to lead time and production floor inventory levels are analyzed. Effects of various policy levers such as CONWIP and BtO batch sizes are studied, and recommendations for these levers are made. The CONWIP policy is recommended for when lead time requirements for NVBOTS become strict. The CONWIP-Build to Order policy is not recommended given the low lead time benefit it offers compared to a pure Build to Order policy. Following feedback from NVBOTS on these two policies, an advanced Late Stage Differentiation CONWIP-BtO model is developed. This is done to evaluate the potential of late stage differentiation for when NVBOTS expands its product lines. Performance of this policy with respect to lead time and inventory levels is studied for different values of CONWIP and BtO batch sizes, number of workers, and worker utilization. Potential plans of action for NVBOTS to tackle higher demands in the future are analyzed. The work described in this thesis covers roughly half of the project on manufacturing systems at NVBOTS. The other half is covered in Alexander Willem Anton van Grootel's thesis [1]. Van Grootel's thesis focuses on capacity estimation of NVBOTS' current facility and variants of the BtO policy. In contrast, this thesis analyzes the CONWIP and CONWIP-BtO policies.
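A minimal discrete-event sketch of a CONWIP loop in SimPy: a fixed pool of cards caps total work-in-process, and jobs flow through serial stations. The station count, processing and demand rates, and CONWIP level below are all invented; this is a toy, not the NVBOTS model.

```python
import random
import simpy

def job(env, line, cards, lead_times):
    with cards.request() as card:       # a CONWIP card caps WIP in the loop
        yield card
        start = env.now                 # lead time measured from job release
        for station in line:
            with station.request() as slot:
                yield slot
                yield env.timeout(random.expovariate(1 / 5.0))  # ~5 min work
        lead_times.append(env.now - start)

def demand(env, line, cards, lead_times):
    while True:
        yield env.timeout(random.expovariate(1 / 6.0))  # ~6 min between orders
        env.process(job(env, line, cards, lead_times))

random.seed(0)
env = simpy.Environment()
cards = simpy.Resource(env, capacity=8)                      # CONWIP level = 8
line = [simpy.Resource(env, capacity=1) for _ in range(3)]   # 3 serial stations
lead_times = []
env.process(demand(env, line, cards, lead_times))
env.run(until=50_000)
print(f"mean lead time: {sum(lead_times) / len(lead_times):.1f} min")
```

Sweeping the card count trades lead time against floor inventory, which is exactly the policy-lever analysis described above.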
Fluid dynamics in action
In this thesis we formulate an effective field theory for nonlinear dissipative fluid dynamics. The formalism incorporates an action principle for the classical equations of motion as well as a systematic approach to thermal and quantum fluctuations around the classical motion of fluids. The dynamical degrees of freedom are Stückelberg-like fields associated with diffeomorphisms and gauge transformations, and are related to the conservation of the stress tensor and of a U(1) current if the fluid possesses a charge. This inherently geometric construction gives rise to an emergent "fluid space-time", similar to the Lagrangian description of fluids. We develop the variational formulation based on symmetry principles defined on such a fluid space-time. Through a prescribed correspondence, the dynamical fields are mapped to the standard fluid variables, such as temperature, chemical potential and velocity. This makes it possible to recover the standard equations of fluid dynamics in the limit where fluctuations are negligible. Demanding the action to be invariant under a discrete transformation, which we call local KMS, guarantees that the correlators of the stress tensor and the current satisfy the fluctuation-dissipation theorem. Local KMS invariance also automatically ensures that the constitutive relations of the conserved quantities satisfy the standard constraints implied e.g. by the second law of thermodynamics, and leads to a new set of constraints which we call generalized Onsager relations. Requiring the above properties to hold beyond tree level leads to introducing fermionic partners of the original degrees of freedom, and to an emergent supersymmetry. We also outline a procedure for obtaining the effective field theory for fluid dynamics by applying the holographic Wilsonian renormalization group to systems with a gravity dual.
Sustainable agricultural management : a systems approach for examining food security tradeoffs
Estimates suggest that the world needs a 50% increase in food production to meet the demands of the 2050 global population (Tilman et al., 2011). Cropland expansion is unlikely to be sufficient, and yield improvements that require more inputs may lead to more environmental damage. This work focuses on reallocating limited land and water resources to optimize cropping patterns. By combining optimization methods, surrogate modeling, global data sources, data assimilation, and hydrologic modeling, we identify opportunities for increasing food-crop production and cash-crop revenue, while maintaining sustainability constraints that limit cropland expansion and prevent groundwater depletion. We apply the framework in India's Krishna river basin and find that reallocating resources to meet or exceed current production can lead to a 96% gain in net revenue over an estimated current baseline. Resources in this case are moved to high-yielding cash crops. Imposing a self-sufficient southern diet which depends on rice reduces the gains to 77%, while imposing a self-sufficient national diet with more emphasis on wheat eliminates all net revenue gains to the region. The approach described in this thesis highlights the trade-offs between food production, cost, and environmental impacts in achieving specified food-security objectives. This research contributes to the field in two ways: 1) it provides a novel method for combining remotely sensed data, surrogate models, and optimization to understand agricultural trade-offs, and 2) it furthers the discussion on food and water security and sustainable resource management by demonstrating that resource reallocation with sustainability constraints provides revenue gains in certain situations.
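The reallocation problem has the flavor of a constrained revenue maximization; a deliberately tiny linear-program sketch is below. The crops, coefficients, and limits are all invented, and the thesis's actual framework couples optimization with surrogate hydrologic models rather than a bare LP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy reallocation: choose crop areas (ha) to maximize net revenue subject to
# land and water limits and a minimum food-production target. Numbers invented.
revenue = np.array([500.0, 900.0, 1500.0])   # $/ha: rice, wheat, sugarcane
water   = np.array([12.0, 4.0, 18.0])        # ML/ha water requirement
food_t  = np.array([4.0, 3.0, 0.0])          # t/ha counted toward food target
land_max, water_max, food_min = 1000.0, 9000.0, 2000.0

res = linprog(
    c=-revenue,                               # linprog minimizes, so negate
    A_ub=np.vstack([np.ones(3), water, -food_t]),
    b_ub=[land_max, water_max, -food_min],
    bounds=[(0, None)] * 3,
)
print(res.x, -res.fun)                        # areas and maximized revenue
```

Tightening the food constraint here reproduces, in miniature, the reported pattern that self-sufficiency diets eat into the revenue gains.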
Use of axiomatic design principles to develop vehicle suspension controls for variable stiffness and ride height
Axiomatic Design principles are used to design a vehicle suspension system. The use of Axiomatic Design helps to guide the design of a decoupled system. The Design Matrix (DM) illustrates the independence among the Functional Requirements (FRs) and the Design Parameters (DPs). The ultimate goal is the design of a fully independent suspension system in which the FRs (stiffness, ride height, and damping) can be varied as needed. To achieve the three FRs, three DPs are chosen - the volume of an air spring for stiffness, the volume of fluid in a fluid chamber for ride height, and orifice control for damping. This thesis investigates two of the DPs in depth: the air spring and the fluid chamber. The nonlinearity of the air spring is studied and its effect on the system as a whole is simulated in Simulink. Two control systems are proposed in which stiffness and ride height are kept constant. The desired values for stiffness and ride height are predetermined by the user or by an optimization algorithm. The physical design for the control systems is also proposed in this thesis. The design for the air spring system uses an electropneumatic design, and the design for the fluid chamber system uses an electrohydraulic servovalve design.
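In Axiomatic Design notation, the fully independent design targeted here corresponds to a diagonal design matrix pairing each FR with its DP; this is the schematic ideal, not necessarily the thesis's realized DM:

```latex
\begin{pmatrix} \mathrm{FR}_1:\ \text{stiffness} \\ \mathrm{FR}_2:\ \text{ride height} \\ \mathrm{FR}_3:\ \text{damping} \end{pmatrix}
=
\begin{pmatrix} X & 0 & 0 \\ 0 & X & 0 \\ 0 & 0 & X \end{pmatrix}
\begin{pmatrix} \mathrm{DP}_1:\ \text{air-spring volume} \\ \mathrm{DP}_2:\ \text{fluid volume} \\ \mathrm{DP}_3:\ \text{orifice control} \end{pmatrix}
```

Off-diagonal entries would indicate coupling (e.g., air-spring volume affecting ride height), which the control systems proposed above are designed to suppress.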
A corporate fitness center : an example for the reuse of the Empire Stores, Brooklyn, N.Y.
The proliferation of over 500 fitness programs for the employees of American corporations marks a turning point in the way American corporations regard employee and corporate health. Typically, sports facilities were the province of recreation or education facility planners. A category of sports activities has been isolated, however, for its cardiovascular characteristics, and it is the basic component of a fitness program. The physiological characteristics of concern belong to those activities which contribute to the "training effect" of the heart, or the ability of the heart to pump blood and oxygen to the body. The benefits of this conditioning are manifold. Longitudinal medical studies indicate that, across a large population, there is a positive relationship between aerobic exercise (exercise which demands oxygen) and decreased risk of heart attack in later life. While the correlation between exercise and good health seems merely the confirmation of good sense, it is a recent occurrence that this relationship has been quantified by corporations and utilized to increase "corporate health" through the construction of fitness facilities for employees. The intention behind this thesis is to explore the existing information about fitness centers and design a facility as the reuse of an historic building in Brooklyn, New York.
Technology assessment and feasibility study of high-throughput single cell force spectroscopy
In the last decade, the field of single cell mechanics has emerged with the development of high resolution experimental and computational methods, providing a significant amount of information about individual cells instead of the averaged characteristics provided by classical assays of large populations of cells. These single cell mechanical properties correlate closely with intracellular organelle arrangement and organization, which are determined by the load-bearing cytoskeletal network comprised of biomolecules. This thesis assesses the feasibility of high throughput single cell force spectroscopy using an atomic force microscopy (AFM)-based platform. A conventional AFM set-up employs a single cantilever probe for force measurement, using a laser to detect the deflection of the cantilever structure, and usually can only handle one cell at a time. To improve the throughput of the device, a modified scheme that makes use of a cantilever-based array is proposed and studied in this project. In addition, to complement the use of the AFM array, a novel cell chip design is also presented for the fine positioning of cells in coordination with the AFM cantilevers. The advantages and challenges of the system are also analyzed. To assess the feasibility of developing this technology, the commercialization possibility is discussed with intellectual property research, market analysis, cost modeling, and supply chain positioning. Conclusions about this technology and its market prospects are drawn at the end of the thesis.
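The cantilever-based force measurement mentioned above rests on the standard relation between deflection and force; the optical-lever sensitivity S below is an assumed calibration constant, not a value from the thesis:

```latex
F \;=\; k_c\,\delta_c \;=\; k_c\, S\, \Delta V
```

where k_c is the cantilever spring constant, δ_c its deflection, and ΔV the photodiode signal produced by the deflected laser spot; an array scheme must perform this readout for many cantilevers in parallel, which is the central instrumentation challenge discussed.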
Geometric modeling and optimization in 3D solar cells : implementation and algorithms
Conversion of solar energy in three-dimensional (3D) devices has been essentially untapped. In this thesis, I design and implement a C++ program that models and optimizes a 3D solar cell ensemble embedded in a given landscape. The goal is to find the optimum arrangement of these solar cells with respect to the landscape buildings so as to maximize the total energy collected. On the modeling side, in order to calculate the energies generated from both direct and reflected sunlight, I store all the geometric inputs in a binary space partition tree; this data structure in turn efficiently supports a crucial polygon clipping algorithm. On the optimization side, I deploy simulated annealing (SA). Both the advantages and the limitations of SA lead me to restrict the solar cell docking sites to orthogonal grids imposed on the building surfaces. The resulting program is an elegant trade-off between accuracy and efficiency.
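A minimal sketch of the optimization loop described (simulated annealing over grid-restricted docking sites); the thesis implements this in C++, and the `energy` callable below is a stand-in for the direct-plus-reflected irradiance computed via BSP-tree polygon clipping. All numbers are invented.

```python
import math
import random

def simulated_annealing(sites, energy, n_cells, steps=20000, t0=1.0, alpha=0.9995):
    """Toy SA: place n_cells on discrete docking sites to maximize `energy`."""
    state = random.sample(sites, n_cells)
    best, best_e = list(state), energy(state)
    cur_e, temp = best_e, t0
    for _ in range(steps):
        cand = list(state)
        cand[random.randrange(n_cells)] = random.choice(sites)  # move one cell
        e = energy(cand)
        if e >= cur_e or random.random() < math.exp((e - cur_e) / temp):
            state, cur_e = cand, e                # accept better or lucky worse
            if e > best_e:
                best, best_e = list(cand), e
        temp *= alpha                             # geometric cooling schedule
    return best, best_e

# Demo on a fabricated irradiance landscape over a 20x20 grid of sites.
random.seed(0)
grid = [(i, j) for i in range(20) for j in range(20)]
irradiance = {s: random.random() for s in grid}
print(simulated_annealing(grid, lambda st: sum(irradiance[s] for s in st), 10))
```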
Landscape as a reference for design
This is a study of the ways in which the forms in landscapes - natural terrain adapted and inhabited - can serve as references in architectural design. As references for design, landscapes provide a richness of responses to local and evolutionary factors and a richness of associations which are central to our own identity and the identity of places or regions. In this thesis several perspectives on ways in which landscapes serve as references are analyzed. The landscape and surrounding context of each particular site importantly define its character and offer significant references for forms to be extended or generated. More broadly, landscapes can be viewed as sources for forms which can be transposed in multiple ways, the ultimate test of their value being whether they provide habitable, usable spaces. Landscapes can also be studied for the associations which they bring. These associations may explain feelings which we have about the quality and character of places. A series of principles for design is proposed. These principles reflect convergence amongst the several perspectives on how landscapes can serve as references and constitute a collection of suggestions for design. The principles are organized along a continuum of "forms", "processes of addition and change", and "associative qualities". Design studies for a site along the Neponset River at the south edge of Boston have been undertaken to aid in the development of the principles and illustrate their application. A mix of uses and building methods has been studied. The site for the studies is near the village center known as Lower Mills. The natural topography, the river's transition from narrow rapids to open estuary, and the historic collection of industrial buildings form a landscape rich in references and associations.
Modular invariance for vertex operator superalgebras
We generalize Zhu's theorem on modular invariance of characters of vertex operator algebras (VOAs) to the setting of vertex operator superalgebras (VOSAs) with rational, rather than integer, conformal weights. To recover SL₂(Z)-invariance, it turns out to be necessary to consider characters of twisted modules. Initially we assume our VOSA to be rational; then we replace rationality with a different (weaker) condition. We regain SL₂(Z)-invariance by including certain 'logarithmic' characters. We apply these results to several examples. Next we define and study 'higher level twisted Zhu algebras' associated to a VOSA. Using a novel construction we compute these algebras for some well known VOAs.
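For orientation, the characters in question are the usual graded traces, taken here over (twisted) modules M:

```latex
\chi_M(\tau) \;=\; \operatorname{tr}_M\, q^{L_0 - c/24}, \qquad q = e^{2\pi i \tau}
```

Zhu-type theorems then assert that the span of such characters is preserved by τ ↦ (aτ + b)/(cτ + d) for matrices in SL₂(Z), which is the invariance being recovered in the rational-weight setting above.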
Implementing "pull" production in a job shop environment
In a recent contract, CVN 78, Northrop Grumman Corporation has been experiencing significant pressure from the Navy to reduce cost in the design and construction of the new nuclear aircraft carrier class. Furthermore, the joint venture project between General Dynamics Electric Boat and Northrop Grumman Newport News to build the next fleet of Virginia Class Submarines has budgetary incentives tied to the contract. In order to meet these expectations, the Northrop Grumman Newport News shipyard has responded by focusing on ways to better synchronize manufacturing in order to meet schedule and reduce costs. By migrating from traditional push production to pull production, the shipyard projects that inventory and operating expense will decrease significantly, as pull helps to synchronize production efforts. There are different ways to approach the implementation of pull. Goldratt's Theory of Constraints was chosen as the most appropriate method for the job shop environment of the shipyard's Fabrication Shop. This thesis focuses on the design of a Drum-Buffer-Rope implementation of the Theory of Constraints in a high variability, high volume steel fabrication shop. Additionally, it describes how this method was selected over alternative pull systems. Finally, a case study of the implementation design is described, along with an evaluation of the system design.
Three essays in industrial organization
This dissertation consists of three essays on the effects of intellectual property rights protection on market structure and social welfare in the Indian pharmaceutical industry. In contrast to pharmaceutical industries in the developed world, India had historically enforced a weak system of intellectual property rights protection that eliminated most legal barriers to entry in its pharmaceuticals markets. As a condition of its membership in the World Trade Organization, India was required to extend legal protection to all pharmaceutical products by 2005. The first essay analyzes the dramatic increase in the number of products released by domestic firms in India in the period leading up to the 2005 deadline. Speculation in the media linked this phenomenon to the imminent change in patent regime. The essay uses data on pharmaceutical products being sold in India in combination with data on drugs patented internationally to investigate the possibility that Indian firms launched products in the domestic industry as a strategic response to the anticipated change implied by the WTO. Results of the estimation do not provide conclusive evidence of strategic behavior by firms in markets where patent enforcement could affect the future profitability of domestic firms.
Coloring time with CodaChrome
As new computationally enhanced tools become available, there is an opportunity to give more and more people access to new ways for personal, creative expression. We designed a new computational construction kit to allow children and adults to design and build interactive, dynamic color patterns on electronic jewelry and sculptures. We designed activities to introduce CodaChrome, our color pattern creation environment, along with ideas about color and material properties to children in the context of immersive design experiences. The process and product of these experiences reveal the way young people understand abstract concepts related to the notions of space, time and space-time interrelationships. This thesis reports on the design and evaluation of the activities, the development of the CodaChrome system, and the evolution of our methodology for investigating the formation of concepts like synchronicity and concurrency and their dependency on spatial connotations. The presented case studies contribute to the ongoing research on the media-dependence of classic epistemological questions regarding space and time, as manifested in the diversity of children's styles in making and thinking about dynamic color animations on light modules, which can be arranged in arbitrary spatial topologies.
The influences of learning behavior on the performance of work teams : a system dynamics approach
This thesis seeks to apply the tools of system dynamics to the study of the influences of learning-oriented behaviors on team performance and to investigate what factors foster team learning in the organizational environment and what factors hinder it. To address this issue, I examine existing works in the organizational learning field, including a thoughtful, descriptive model of team learning developed by Amy Edmondson of the Harvard Business School. Her model focuses on psychological safety -- a shared belief held by team members that the team is safe for interpersonal risk-taking -- as a construct that facilitates learning behavior, which in turn influences team performance. Drawing on Edmondson's team learning model and on textual analysis, I develop causal loop diagrams to capture endogenous processes which, theorists argue, influence team learning and promote performance. Using Edmondson's descriptive model as a starting point, I develop a formal simulation model to understand the dynamics created by learning initiatives. I describe the formulation in detail, relating the simulation model to Edmondson's model and to supporting theories on organizational learning, mainly to those developed by Chris Argyris, Edgar Schein and Peter Senge. Simulation tests are carried out to address the question: What are the factors that promote a team's engagement in learning behavior? The simulation results show three factors that play a critical role in fostering team learning and promoting performance: (1) less aggressive performance goals; (2) a minimum level of psychological safety; and (3) a high level of team self-confidence. Understanding such factors and their effects is of use to academics developing theories of organizational learning and to managers trying to implement or review learning initiatives.
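To make the feedback structure concrete, here is a deliberately toy stock-and-flow simulation in the spirit of the loops described: psychological safety enables learning behavior, learning builds performance, and pressure from an aggressive goal erodes safety. The structure and all coefficients are invented for illustration; this is neither Edmondson's model nor the thesis's formulation.

```python
import numpy as np

# Toy system-dynamics sketch (invented structure and coefficients).
dt, horizon = 0.25, 100.0
goal = 0.9                       # an aggressive performance goal
safety, perf = 0.5, 0.5          # stocks: psychological safety, performance
trace = []
for _ in np.arange(0, horizon, dt):
    learning = safety * (1 - perf)              # safety enables learning behavior
    gap = max(goal - perf, 0.0)                 # shortfall versus the goal
    d_perf = 0.10 * learning - 0.02 * perf      # learning builds capability; decay
    d_safety = 0.05 * (1 - safety) - 0.10 * gap * safety  # pressure erodes safety
    perf += d_perf * dt
    safety += d_safety * dt
    trace.append((safety, perf))
print(f"final safety={safety:.2f}, performance={perf:.2f}")
```

Lowering `goal` in this sketch weakens the erosion loop, which is the qualitative pattern behind the first finding reported above.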
Natural abundance of ¹⁵N as a tool for assessing patterns of nitrogen loss from forest ecosystems
Stable isotopes provide an integrated measure of the nitrogen cycling history of a site. Among ecosystems with contrasting nitrate loss patterns, the δ¹⁵N of soil and plant material should be higher at sites with higher nitrate losses. An underlying assumption in natural abundance isotope studies is that soil δ¹⁵N is at steady state over time. I found that δ¹⁵N was not at steady state in either the Oie or Oa horizon for the period 1969 to 1992 for the reference watershed (W6) at the Hubbard Brook Experimental Forest (HBEF); when nitrate losses were high, δ¹⁵N increased. I measured the δ¹⁵N of soils from 28 soil pits at Watershed 5 at the HBEF before and after clear-cutting in order to test the hypothesis that elevated nitrification and nitrate loss induced by clear-cutting would be associated with a concurrent increase in soil δ¹⁵N. A mass-balance model confirmed that increases in nitrification and nitrate loss after clear-cutting could explain the increase in soil δ¹⁵N (1.6‰ in the Oie horizon and 1.1‰ in the Oa horizon) in the organic horizons after 3 years. I tested the hypotheses: (1) that foliar δ¹⁵N will be higher in a clear-cut watershed than in a reference watershed due to elevated nitrification and nitrate loss; and (2) that foliar δ¹⁵N in a clear-cut watershed will track the rapid changes in streamwater nitrate after clear-cutting. Increased foliar δ¹⁵N coincided with increased streamwater nitrate concentration, suggesting that the increased nitrification that caused elevated streamwater nitrate concentration also caused enrichment of the plant-available ammonium pool. Finally, I tested the hypotheses: (1) that δ¹⁵N in soil and litter increases across a spatial gradient of nitrate loss, and (2) that δ¹⁵N in soil and litter are elevated when nitrification potential is elevated. The enrichment factor, defined as δ¹⁵N(foliar) − δ¹⁵N(bulk soil), is a method of comparing δ¹⁵N values from different sites by normalizing for the spatial heterogeneity in mineral soil δ¹⁵N values. When net nitrification potential was high, the enrichment factor was higher; when nitrification potential was low, the enrichment factor was lower. The enrichment factor may prove valuable for comparing sites with different nitrogen cycling patterns.
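The mass-balance logic is, generically, the standard two-pool isotope bookkeeping below (a textbook identity, not the thesis's calibrated model): removing an isotopically light flux leaves the residual pool enriched.

```latex
\delta^{15}\mathrm{N}_{\mathrm{soil}}' \;=\;
\frac{M\,\delta^{15}\mathrm{N}_{\mathrm{soil}} \;-\; M_{\mathrm{loss}}\,\delta^{15}\mathrm{N}_{\mathrm{loss}}}{M \;-\; M_{\mathrm{loss}}}
```

Because nitrification discriminates against ¹⁵N (the δ¹⁵N of the nitrate lost is lower than that of the soil pool), larger nitrate losses drive soil δ¹⁵N upward, as observed after clear-cutting.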
Design modeling and fabrication of experimental apparatus for compliant mechanism education kit
The purpose of this thesis is to design an educational kit to be used to teach practicing engineers about recent developments in the study and design of flexures. Flexure theory can be difficult to explain. This kit is a physical example of the FACT method for designing flexures. The first flexure is a linear motion flexure, which is a familiar design to practicing precision engineers. The second design is a flexure which moves in a screw motion, which has never been built before. The design of the screw flexure uses the FACT method to combine constraints to create a linked linear and rotational motion. The screw flexure is also designed to have a variable pitch, such that it ranges from pure rotational motion to linear motion. This thesis contains the modeling, design, and fabrication process for both the linear and screw flexure. Two working prototypes were manufactured of each flexure. They are assembled on a baseplate and include sensors to measure the motion of each flexure. One kit was used to explain the concepts behind the design of the flexures to two students. They were then able to answer a few questions about the concepts after experimenting with the flexures.
A robot for wrist rehabilitation
In 1991, a novel robot named Manus I was introduced as a testbed to study the potential of using robots to assist in and quantify the neuro-rehabilitation of motor skills. Using impedance control methods to drive a 2 d.o.f. planar robot, Manus I proved an excellent fit for the rehabilitation of the upper arm and shoulder. This was especially true in the case of rehabilitation after stroke. Several clinical trials showed that therapy with Manus reduced recovery time and improved long term recovery after stroke. This successful testbed naturally led to the desire for additional hardware for the rehabilitation of other degrees of freedom. This thesis outlines the mechanical design of one of four new rehabilitation robots. Its focus is the mechanical design of a robot for wrist rehabilitation. The anthropometric background data, the design's functional requirements, the strategic design selection and the detailed design are presented.
Systematic investigation of the effects of hydrophilic porosity on boiling heat transfer and critical heat flux
Predicting the conditions of critical heat flux (CHF) is of considerable importance for safety and economic reasons in heat transfer units, such as in nuclear power plants. It is greatly advantageous to increase this thermal limit and much effort has been devoted to studying the effects of surface characteristics on it. In particular, recent work carried out by O'Hanley demonstrated the separate effects of surface wettability, porosity, and roughness on CHF, and found that porous hydrophilic surface coatings provided the largest CHF increase, with a 50-60% enhancement over the base case. In the present study, a systematic investigation of the effects that the physical characteristics of the hydrophilic layers have on heat transfer was conducted. Parameters experimentally explored include porous layer thickness, pore size, and void fraction (pore volume fraction). The surface characteristics are created by depositing layer-by-layer (LbL) thin compact coatings made of hydrophilic SiO₂ nanoparticles of various sizes. A new coating was developed to reduce the void fraction by using polymers to partially fill the voids in the porous layers. All test surfaces are prepared on indium tin oxide - sapphire heaters and tested in a pool boiling facility at atmospheric pressure in MIT's Thermal-Hydraulics Laboratory. Results indicate that CHF follows a trend with respect to each parameter studied and clear CHF maxima reaching up to 114% enhancement are observed for specific thickness and pore size values. ZnO₂ nanofluid-generated coatings are also prepared and their boiling performance is compared to the boiling performance of the engineered LbL coatings. The results highlight the dependence of CHF on capillary wicking and are expected to allow further optimization of the nanoengineered surfaces.
Effects of different fuels on a turbocharged, direct injection, spark ignition engine
The following pages describe the experimentation and analysis of two different fuels in GM's high compression ratio, turbocharged direct injection (TDI) engine. The focus is on a burn rate analysis for the fuels - gasoline and E85 - at varying intake air temperatures. The results are aimed at aiding a subsequent study that will look at the benefits of direct injection in turbocharged engines, ethanol's knock suppression properties, and the effects of ethanol concentration in gasoline/ethanol blends. Spark sweeps were performed for each fuel/temperature combination to find the knock limit and to assess each fuel's sensitivity to spark timing and temperature. The findings were that E85 has lower sensitivity to spark timing in terms of NIMEP loss for deviation from MBT timing. A 5% loss in NIMEP was seen at 3° of spark advance or retard for gasoline, whereas E85 took 5° to realize the same drop in NIMEP. Gasoline was also much more sensitive to intake air temperature changes than E85. Increasing the intake air temperature for gasoline decreased the peak pressure; however, knock onset began earlier at the higher temperatures, indicating that end-gas autoignition is more dependent on temperature than pressure. E85's peak pressure sensitivity to spark timing was found to be about 50% lower than that of gasoline, and it displayed much higher knock resistance, not knocking until the intake air temperature was 130°C with spark timing of 30° bTDC. These results give some insight into the effectiveness of ethanol in improving gasoline's anti-knock index. Future experiments will aim to quantify charge cooling and anti-knock properties, and to determine how ethanol concentration in gasoline/ethanol blends affects this knock suppression ability.
Understanding the value of boutique hotels
In recent decades, boutique hotels have witnessed a dramatic increase in popularity in the United States. The purpose of this paper is to provide the reader with an understanding of boutique hotel value and conditions that allow for boutique hotel success. First, it will provide a formal definition of boutique hotel, a definition which remains elusive despite the popularity of the hotel category. Second, it will provide a comparative analysis, based upon price-per-room paid by investors, of three different hotel categories: boutique, independent, and branded chain. In defining boutique hotel, the paper relies upon both written definitions and interviews with real estate developers and real estate brokers. The boutique hotel category is defined, and then contrasted with the definitions of independent hotels and branded chain hotels. In analyzing boutique hotel value, the paper considers hotels that have sold in the past five years in Boston, New York City, and Washington D.C. Price-per-room paid by investors for these hotels is compared across each of the three hotel categories, in each of the three cities. The paper analyzes the results of the value comparison of the different hotel categories.
Biogeochemical proxies for environmental and biotic conditions at the Permian-Triassic boundary
The extinction at the Permian-Triassic boundary marked one of the most profound events of the Phanerozoic Eon. Although numerous hypotheses have been proposed, the trigger mechanism continues to be debated. This thesis examines the impact of oceanic conditions on the extinction event by analyzing hydrocarbon biomarkers. Hydrocarbon biomarkers are chemical fossils in sedimentary rocks that serve as proxies for the conditions that prevailed during deposition. In this thesis, biomarkers for redox conditions, depositional environment, and microbial community, along with potential age-related biomarkers, have been measured and are reported from four sections that span the Permian-Triassic boundary. The first section, from the Peace River Basin in modern-day western Canada, was deposited on the eastern margin of the Panthalassic Ocean and samples conditions in this global water body. The second section is from Kap Stosch, Greenland, and was deposited on the southern margin of an epicontinental sea situated in the northwest of the supercontinent Pangaea. The Great Bank of Guizhou, China is the third section studied; it is a carbonate platform deposited on the southern edge of one of the smaller continental blocks that formed the eastern margin of the Paleo-Tethys Ocean. The fourth section is from Meishan, China, the type section for the Permian-Triassic boundary, and was deposited on the western margin of another of the continental blocks at the edge of the Paleo-Tethys Ocean. The biomarker evidence from these sections was measured in ratios, absolute abundances, and δ¹³C isotopic values. This evidence points to global marine conditions dominated by bacterial inputs in which photic zone euxinia was prevalent for extended time periods. Additional findings from compound-specific isotope data suggest that at isolated intervals, the chemocline may have extended even closer to the surface. The timing of these intervals implies that ocean conditions may have affected the extinction itself.
A methodology for interactive decision making in environmental management involving multiple stakeholders
A methodology for evaluating environmental management programs using integrated risk communication, assessment and management tools is developed. The main novelty of the methodology lies in the use of decision analysis methods to integrate the wide range of decision objectives which characterize environmental management problems, and of risk assessment for impact evaluations under uncertainty, in a framework that emphasizes and incorporates input from stakeholders in all aspects of the process. The outcomes of the analysis are then used to guide the behavioural deliberative process that is engaged to reach a consensual, defensible decision. The first step of the methodology is that of identifying all consequences relevant to the implementation of the decision, i.e. the performance measures. These are identified through a decomposition process based on the use of conditional influence diagrams, which make it possible to incorporate and structure the quantitative and qualitative issues of the decision problem. Aggregation of the evaluations of the performance measures is done by means of an additive utility function in which single-attribute utilities for the various performance measures are weighted by appropriate measures of their relative importance. The weights of the performance measures are assessed by the pairwise comparison method of the Analytic Hierarchy Process (AHP) applied to the hierarchical structure of the influence diagram. For the determination of the single-attribute utilities we employ a novel approach based on the AHP in which the comparisons are made not on the actual numerical values of the performance measures but, rather, on more intuitive concepts such as 'worst', 'moderate' and 'best'. Within this approach, the innovative introduction of elements of fuzzy logic allows us to account for linguistic imprecision in the expression of the stakeholders' preferences ...
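The AHP weighting step referred to above is conventionally computed from the principal eigenvector of the pairwise-comparison matrix; the sketch below shows that standard computation on an invented 3x3 matrix (it is not the thesis's fuzzy extension).

```python
import numpy as np

# Saaty-style AHP: priority weights from a pairwise-comparison matrix via the
# principal eigenvector. The matrix entries below are made-up judgments.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # normalized priority weights
CI = (vals.real[k] - len(A)) / (len(A) - 1)    # consistency index
print(w, CI)
```

A small consistency index indicates the stakeholder's pairwise judgments are nearly transitive; large values flag comparisons worth revisiting in the deliberative process.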
Effect of ball impact location and temperature on softball bat handle vibration
The bat-ball collision is a crucial part of softball, and minimizing excessive vibrations in the bat after impact makes batting more comfortable for the hitter and optimizes the transfer of momentum between the bat and ball. To better understand this collision, the magnitude of bat handle vibrations as a function of ball impact location was measured across three bat brands and at three different temperatures. The bat barrel was struck with an impact hammer and an accelerometer placed near the handle of the bat collected the output data. Following an experiment conducted at room temperature, the bats were placed in extreme heat and cold and the same experiment was performed. The bats resonate in two main frequency ranges associated with the first bending mode and a hoop mode. The bending mode frequency was consistent across all impact locations, but did vary slightly between bat types. The hoop mode frequency varied depending on location. Hitting a ball with the bat's sweet spot, where a node of the bending mode is located, minimizes the magnitude of vibrations at the handle, primarily by eliminating the first bending mode. In general, cold temperatures tend to inhibit the bat's ability to minimize vibrational output, leading to more energy being transferred to the batter's hands.
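A minimal sketch of the signal processing implied by the setup (impact hammer in, accelerometer out): estimate the dominant mode frequencies from the spectrum of the decaying response. The signal, sample rate, and mode frequencies below are synthetic stand-ins, not the measured data.

```python
import numpy as np

# Synthetic accelerometer trace: a slowly decaying "bending mode" plus a
# faster-decaying "hoop mode" (frequencies are illustrative only).
fs = 10_000                                   # assumed sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)
sig = (np.exp(-t / 0.08) * np.sin(2 * np.pi * 170 * t)
       + 0.5 * np.exp(-t / 0.02) * np.sin(2 * np.pi * 1400 * t))

# Windowed FFT, then a crude peak pick of the strongest bins.
spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
peaks = freqs[np.argsort(spec)[-5:]]
print(sorted(peaks))                          # clusters near the two mode frequencies
```

Repeating this for each impact location shows which locations fail to excite the bending mode, which is how a sweet-spot node reveals itself in the handle response.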
Airline operating cost reduction through enhanced engine health analytics
Engine Health Management (EHM) is a comprehensive maintenance service offered by engine manufacturer Pratt & Whitney (PW) to its airline customers. In its current form, engine performance is monitored through recorded physical metrics, such as gas temperature, pressure, and altitude, taken as single snapshots at various phases of flight. The advent of the Enhanced Flight Data Acquisition, Storage and Transmission (eFAST™) system, which allows for near-continuous recording of engine metrics, provides Full-Flight Data Analytics (FFDA) that may proactively alert and recommend maintenance activity to airlines. Adopting eFAST™ may help avoid Adverse Operational Events (AOE) caused by unexpected engine failures and the associated cost burdens. With respect to operating cost, airlines standardly report Cost Per Available Seat Mile (CASM) and Cost Per Block Hour (CBH). EHM services that prevent operational disruptions can help airlines reduce these unit-cost metrics, whose scrutiny by industry analysts affects investment guidance, stock performance, and overall business outlook. In this study, the value of FFDA services to airlines is investigated on the International Aero Engines V2500, a mature engine with customers' operational histories well documented. Using a Poisson distribution to model the occurrence of six operational disruption types (Inflight Shutdown, Aircraft-On-Ground, Aborted Takeoff, Air Turn-Back, Ground Turn-Back, and Delay/Cancellation), the cost savings potential is quantified as a function of events avoided by a hypothetical FFDA service. Airline Form 41 financial data from the Bureau of Transportation Statistics is then used to estimate the magnitude of savings on CASM and CBH retroactively for 2012-16. Results show that unit cost reductions of 0.5% to 1.5% are possible through engine event avoidance, representing savings of up to $104M annually, but outcomes are highly dependent on assumptions about the cost of operational disruptions for each individual carrier. Overall, a baseline model and procedure is developed for valuing FFDA and associated EHM services. Further collaboration between airlines and Pratt & Whitney on data availability and accuracy will help refine this model, which is the first to bridge publicly available airline costs with engine history data, helping stakeholders transition to an eFAST™ ecosystem that promises greater operational efficiency and safety.
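The valuation logic described (Poisson-distributed disruption events, a fraction avoided by FFDA, a cost per event) can be sketched as follows; every rate, cost, and the avoidance fraction below is invented, not V2500 or Form 41 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented annual rates and per-event costs ($M) for the six disruption types.
events = {
    "Inflight Shutdown": (0.8, 2.0), "Aircraft-On-Ground": (3.0, 0.5),
    "Aborted Takeoff": (0.5, 0.3), "Air Turn-Back": (0.6, 0.4),
    "Ground Turn-Back": (1.2, 0.2), "Delay/Cancellation": (40.0, 0.05),
}
avoid_frac, n_years = 0.3, 10_000       # hypothetical FFDA avoidance fraction

savings = np.zeros(n_years)
for rate, cost in events.values():
    counts = rng.poisson(rate, n_years)  # simulated annual event counts
    savings += avoid_frac * counts * cost
print(f"mean annual savings: ${savings.mean():.2f}M "
      f"(95th pct: ${np.percentile(savings, 95):.2f}M)")
```

Dividing such savings by available seat miles or block hours yields the CASM and CBH reductions the study reports, with the same sensitivity to per-event cost assumptions.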
Essays in financial economics
In Chapter 1, I investigate trading volume before scheduled and unscheduled corporate announcements to explore how traders respond to private information. I show that cumulative trading volume decreases by more than 15% prior to scheduled announcements. The decline in trading volume is largest when information asymmetry is high, while the opposite relation holds for volume after the announcement. In contrast, trading volume before unscheduled announcements increases dramatically and shows little relation to proxies for information asymmetry. All the results for scheduled announcements are consistent with asymmetric information theories, where discretionary liquidity traders (DLTs) decrease volume when they know there is much adverse selection. However, DLTs do not seem to read information embedded in prices before unscheduled announcements. I further investigate the behavior of market makers and find that they act appropriately by increasing price sensitivity before all announcements. This implies that market makers extract timing information from their order books. Chapter 2 is joint work with Li He and Professor Andrew W. Lo. We implement various statistical analyses on stock market returns, using CRSP equal-weighted and value-weighted index returns from 1926-2001, as Fama (1965) did more than 35 years ago. Investigating the marginal and conditional stock return distributions, we report many characteristics of stock return distributions. First, stock returns still do not follow a normal distribution, even though many studies have assumed that they do.
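For the Chapter 1 volume measurements, a minimal sketch of one way pre-announcement abnormal volume can be computed (illustrative only; the window lengths are arbitrary and the chapter's actual measures and controls are more involved). It assumes a pandas Series of daily volume indexed by date:

    import numpy as np
    import pandas as pd

    def pre_announcement_abnormal_volume(volume: pd.Series, event_date,
                                         pre_days=5, baseline_days=60):
        """Sum of log-volume deviations from a trailing baseline over the
        pre_days trading days ending just before event_date."""
        logv = np.log(volume)
        baseline = logv.rolling(baseline_days).mean().shift(1)  # trailing mean
        abnormal = logv - baseline
        window = abnormal.loc[:event_date].iloc[-(pre_days + 1):-1]
        return window.sum()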
Theoretical and experimental studies of the ITER ECRH polarizer and rotator gratings
Electron Cyclotron Resonance Heating (ECRH) will be one of the major heating and current drive mechanisms for the ITER fusion experiment. A pair of reflection grating polarizers will be used in the ECRH high power microwave transmission lines to generate the required elliptically polarized microwave beam for ideal plasma coupling. A 'polarization rotator' and a 'circular polarizer' are used together to convert a linearly polarized beam, generated by a gyrotron, to an arbitrary elliptically polarized beam. This thesis presents numerical and experimental results characterizing the elliptical properties of microwaves reflected from a pair of polarizer gratings designed for operation at 170 GHz. First, a theoretical basis is presented for understanding the polarizing behavior of a reflection grating with an arbitrary groove shape. Vector transformations between incident and reflected fields calculated in the High Frequency Structure Simulator (HFSS) are used to find the phase shift between the field components that reflect from the top and bottom grating surfaces. Using these results, we characterize the reflected field by its ellipticity and by the angle of rotation of its main polarization axis. Next, detailed experimental measurements of the fields reflected from the aforementioned polarizer pair were taken with a Vector Network Analyzer. Very good agreement was seen between the numerical and experimental results and, to our knowledge, these are the first measurements of a polarizer/rotator pair in corrugated waveguide to be successfully compared with theory. Based on these results, we also calculated full polarization maps for grating pairs with alternative groove profiles. We also experimentally studied the mode conversion introduced by the polarization rotator as the grating is rotated about its axis. The presence of higher order modes will increase the ohmic losses along the transmission line.
Controllable single-bladed autorotating vehicle
Accurate delivery of cargo from air to ground is currently performed using autonomously guided parafoil systems. These parafoils offer limited maneuverability and accuracy, and are often relatively complex systems with significant deployment uncertainty. The author has proposed, designed, and developed a novel precision airdrop system in the form of a samara. Consisting of a single wing, payload, and control system, the guided samara is mechanically simple, and when correctly configured, is globally stable during deployment and steady descent. The proposed control mechanisms grant the vehicle omni-directional glide slope control during autorotating descent. This is the first documented effort to develop an actively controlled vehicle of this form factor. The research presented in this thesis includes the conceptual design of the vehicle and several control schemes, development of a six degree-of-freedom computer simulation predicting the vehicle's flight performance, and the design, fabrication, and flight testing of a guided samara prototype. Over the course of the development, numerous free-flights were conducted with unguided samara models. Flight results and simulation results in various configurations yielded the discovery of several mechanisms critical to understanding samara flight. The free-flight simulation predicted descent and rotation rates within 10% of those observed during flight testing. Simulations of the guided samara predicted stability during control actuation as well as considerable control authority. Using a programmable microprocessor, hobby servo, and 2-axis electronic compass, a control system was developed. The guided samara prototype was flight tested at NASA Langley to attempt to demonstrate omni-directional glide slope control during descent.
Building and sustaining effective relational contracts in multinational firms
The purpose of this thesis is to demonstrate how complex interactions in organizational transactions and behavior can be better understood by using theory related to relational contracts. Further, given this understanding, suggestions are made as to how firms can increase competitive advantage by building and sustaining better relational contracts in their organizations.
Responses to the everyday reliefs from the private
In pursuit of an architecture of the everyday, this investigation applies fascinations with and imaginations of the ordinary to architecture's possibilities for relief in today's increasingly privatized notion of the public. Our neoliberal reality dictates an incessant change in urban landscapes - from enclaves of difference to havens of increasing homogeneity ruled by the holders of capital. Transitioning urban ethnographies often occur in pursuit of accessible economies and shelter as resources become inaccessible. Though the cycle is inevitable, there remain opportunities for relief in the form of de-commercialized public space and public architectures for commerce. East Boston has historically served as an enclave to consistent influxes of foreign-born populations in Boston. The coexistence between various populations is both intermingled and separate - coded in our urban environments which host multiple worlds.
Object localization and identification for autonomous operation of surface marine vehicles
A method for autonomous navigation of surface marine vehicles is developed. A camera video stream is utilized as input to achieve object localization and identification by application of state-of-the-art Machine Learning algorithms. In particular, deep Convolutional Neural Networks are first trained offline using a collection of images of possible objects to be encountered (navy ships, sail boats, power boats, buoys, bridges, etc.). The trained network applied to new images returns real-time classification predictions with more than 93% accuracy. This information, along with distance and heading relative to the objects taken from the calibrated camera, allows for the precise determination of vehicle position with respect to its surrounding environment and is used to compute a safe maneuvering and path-planning strategy that conforms to the established marine navigation rules. These algorithms can be used in association with existing tools, such as LiDAR and GPS, to enable a completely autonomous marine vehicle.
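The following sketch illustrates the general shape of such an inference loop; it is not the thesis's actual network, and the label set, architecture, and confidence formatting are stand-ins:

    import torch
    from torchvision import models, transforms
    from PIL import Image

    CLASSES = ["navy ship", "sailboat", "power boat", "buoy", "bridge"]  # hypothetical

    model = models.resnet18(num_classes=len(CLASSES))  # weights trained offline
    model.eval()
    preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                     transforms.ToTensor()])

    def classify_frame(frame: Image.Image) -> str:
        """Classify one RGB video frame and report the top class."""
        x = preprocess(frame).unsqueeze(0)           # add a batch dimension
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)   # per-class probabilities
        conf, idx = probs.max(dim=1)
        return f"{CLASSES[int(idx)]} ({float(conf):.0%})"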
Tavarua : a mobile telemedicine system using WWAN striping
Tavarua is a platform designed to support mobile telemedicine systems over wireless wide area networks, WWANs. It utilizes network striping and other complementary techniques to send uni-directional near real time video and audio data streams from a mobile node to a stationary location. The key technical challenge is transmitting high-bandwidth, loss-sensitive data over multiple low-bandwidth, lossy channels. We overcome these challenges using dynamic adjustment of the encoding parameters and a novel video encoding technique (grid encoding) that minimizes the impact of packet losses. Using five WWAN interfaces, our system reliably and consistently transmits audio and diagnostic quality video, with median PSNR values that range from 33.716 dB to 36.670 dB, with near real-time latencies. We present a study of the characteristic behavior of WWANs, and a description of our system architecture based in part on the lessons gleaned from that study. Through a set of experiments where we transmit video and audio data from a moving vehicle we evaluate the system, focusing on consistency, reliability, and the quality of the audio and video streams. These experiments demonstrate that we can transmit high quality video and audio in varying conditions and even in the presence of hardware failures.
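For reference, the PSNR figures above follow from the mean squared error between a reference frame and the received frame; a minimal sketch for 8-bit frames:

    import numpy as np

    def psnr(reference: np.ndarray, received: np.ndarray, peak=255.0) -> float:
        """Peak signal-to-noise ratio, in dB, between two same-sized frames."""
        mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)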
An analysis of the impact of mergers between community development corporations
This thesis explores the occurrence of mergers between community development corporations (CDCs) in the United States in the past five years. The research examines how mergers between CDCs affect their capacity to achieve their mission and serve their constituents. In addition, the author explores the drivers behind CDC mergers, the impacts of those mergers, and the factors that contribute to merger success. There is currently limited data and literature on CDC and non-profit mergers. This paper uses three case studies of mergers between CDCs to explore how and to what extent CDC capacities changed as a result of the merger. A CDC capacity framework created by Glickman and Servon (1997) is operationalized and applied to each case study to analyze the capacity changes. The results from the case studies and review of the literature show that CDCs can likely benefit the most from a growth in programmatic capacity as a result of a merger.
Strategies for optogenetic stimulation of deep tissue peripheral nerves
Optogenetic technologies have been the subject of great excitement within the scientific community for their ability to demystify complex neurophysiological pathways in the central and peripheral nervous systems. Optogenetics refers to the transduction of mammalian cells with a light-sensitive transmembrane protein, called an opsin, such that illumination of the target tissue initiates depolarization; in the case of a neuron, illumination results in the firing of an action potential that can control downstream physiology. The excitement surrounding optogenetics has also extended to the clinic, with a human trial using the opsin ChR2 in the treatment of retinitis pigmentosa currently underway and several more trials potentially planned for the near future. In this thesis, we focus on the use of viral techniques to transduce peripheral nerve tissue to be responsive to light. We characterize the properties of optogenetic peripheral nerve transduction, optimizing for variables such as expression strength, wavelength specificity, and time-course of expression. Within the scope of this thesis, three new methods for optogenetic peripheral nerve stimulation are described: (1) a method for optogenetic motor nerve control using transdermal illumination, (2) a method employing unique wavelengths to selectively target optogenetic subsets of motor nerves, and (3) a method for extending optogenetic expression strength and time-course. The work is important because it lays the foundation for future advancements in optogenetic peripheral nerve stimulation in both a scientific and clinical context.
Generalized quantum defect methods in quantum chemistry
The reaction matrix of multichannel quantum defect theory, K, gives a complete picture of the electronic structure and the electron-nuclear dynamics for a molecule. The reaction matrix can be used to examine both bound states and free electron scattering properties of molecular systems, which are characterized by a Rydberg/scattering electron incident on an ionic core. An ab initio computation of the reaction matrix for fixed molecular geometries is a substantive but important theoretical effort. In this thesis, a generalized quantum defect method is presented for determining the reaction matrix in a form which minimizes its energy dependence. This reaction matrix method is applied to the Rydberg electronic structure of calcium monofluoride. The spectroscopic quantum defects for the ... states of CaF are computed using an effective one-electron calculation. Good agreement with the experimental values is obtained. The Σ-symmetry eigenquantum defects obtained from the CaF reaction matrix are found to have an energy dependence characteristic of a resonance. The analysis shows that the main features of the energy-dependent structure in the eigenphases are a consequence of a broad shape resonance in the ²Σ⁺ Rydberg series.
Design of a laparoscopic simulation device for testing and training
This thesis describes the development by Caroline Flowers of two prototypes of a benchtop laparoscopy simulator that mechanically simulates access ports using outer 'tissue' samples for port insertion and an inner cavity region where ex-vivo organs can be placed and operated on using laparoscopic tools. The alpha prototype was designed for testing tools for an MIT medical device design class, while the beta prototype was designed as a low-cost and more realistic substitute to simulators currently on the market.
Models to predict dynamic response of motorcycles with an outrigger & trailer
It is of interest, especially in the developing world, to explore the feasibility of using motorcycles in applications beyond personal transport. In particular, adding an outrigger wheel to a motorcycle may increase its capabilities for heavier-duty operations like road haulage and agricultural mechanization. This thesis examines the feasibility of using motorcycles for low-speed, high-weight towing and outrigger-like attachments. Two different configurations are evaluated. The first looks at how the addition of a large trailer affects the turning ability, stability, and power delivery of a motorcycle. The trailer is modeled as a single-body system consisting of a single axle and a large load, attached by a definable hitch mounted near the rear wheel of the motorcycle. The second case examines the attachment of an outrigger-like structure. This is of interest for farm vehicles that need to simply support a tool over a set of wheels using a motorcycle. Here the motorcycle and third wheel are modeled as a single-body system with a long, simply supported beam that has a load applied from the forces due to the sidecar. A MATLAB model detailing the turning ability, stability, and power delivery of the motorcycle was created to evaluate these configurations.
Time and change as ordering principles for urban design : an exploration
Urban design proposals traditionally have tended to deal with images of a final, stable state of the environment. Acceptance and display of the process of change, which, though present in all cities, is absent from most conceptions, is essential. To the extent that environmental change is inevitable, we should at least try to make sure that it is a guided process. The main intention will be to understand the nature of change and its measurable time by exploring ways in which a portion of Boston can remain flexible and receptive to individual and group energy conducive to change. Ways of managing future changes will involve the reconception of the study area as a spatiotemporal setting based on a time-change-related program, thus testing the effectiveness and relevance of time and change as guiding principles for urban design. The setting will be a block in Boston's South End, where the Boston Center for the Arts is located.
Sensing and modeling human networks
Knowledge of how groups of people interact is important in many disciplines, e.g. organizational behavior, social network analysis, knowledge management and ubiquitous computing. Existing studies of social network interactions have either been restricted to online communities, where unambiguous measurements about how people interact can be obtained (available from chat and email logs), or have been forced to rely on questionnaires, surveys or diaries to get data on face-to-face interactions between people. The aim of this thesis is to automatically model face-to-face interactions within a community. The first challenge was to collect rich and unbiased sensor data of natural interactions. The "sociometer", a specially designed wearable sensor package, was built to address this problem by unobtrusively measuring face-to-face interactions between people. Using the sociometers, 1518 hours of wearable sensor data from 23 individuals was collected over a two-week period (66 hours per person). This thesis develops a computational framework for learning the interaction structure and dynamics automatically from the sociometer data. Low-level sensor data are transformed into measures that can be used to learn socially relevant aspects of people's interactions - e.g. identifying when people are talking and whom they are talking to. The network structure is learned from the patterns of communication among people. The dynamics of a person's interactions, and how one person's dynamics affects the other's style of interaction are also modeled. Finally, a person's style of interaction is related to the person's role within the network. The algorithms are evaluated by comparing the output against hand-labeled and survey data.
Thresholdizing lattice based encryption schemes
In this thesis, we examine a variety of constructions based on secret sharing techniques applied to lattice-based cryptographic primitives constructed from the learning with errors (LWE) assumption. Using secret sharing techniques from [BGG⁺17], we show how to construct paradigms of threshold multi-key fully homomorphic encryption and predicate encryption. Through multi-key fully homomorphic encryption [MW16] and threshold fully homomorphic encryption, we can construct a low-round multi-party computation (MPC) scheme with guaranteed output delivery, assuming an honest majority in the semi-honest and malicious settings. Applying the secret sharing scheme to predicate encryption constructions from LWE [GVW15], we can obtain a distributed predicate encryption scheme.
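As a self-contained illustration of the secret-sharing idea underlying these constructions, here is a toy Shamir scheme over a prime field; this is not the sharing scheme of [BGG⁺17] or any lattice construction, just the classical t-out-of-n primitive:

    import random

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for toy secrets

    def share(secret: int, n: int, t: int):
        """Split secret into n shares; any t of them reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = share(42, n=5, t=3)
    assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice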
Hot-carrier reliability of MOSFETs at room and cryogenic temperature
Hot-carrier reliability is an increasingly important issue as the geometry scaling of MOSFETs continues down to the sub-quarter-micron regime. The power-supply voltage does not scale at the same rate as the device dimensions, and thus the peak lateral E-field in the channel increases. Hot carriers, generated by this high lateral E-field, gain more kinetic energy and cause more damage to the device as MOSFET geometries shrink. In order to model device hot-carrier degradation accurately, precise model parameter extraction is critically important. This thesis discusses the model parameters' dependence on the stress conditions and its implications for the device lifetime prediction procedure. As geometry scaling approaches the physical limit of fabrication techniques, such as photolithography, temperature scaling becomes a more viable alternative. MOSFET performance enhancement has been investigated and verified at cryogenic temperatures, such as 77 K. However, hot-carrier reliability problems have been shown to be exacerbated at low temperature. As the mean free path increases at low temperature due to reduced phonon scattering, hot carriers become more energetic, causing more device degradation. It is clear that various hot-carrier reliability issues must be clearly understood in order to optimize the device performance vs. reliability trade-off, both at short channel lengths and low temperatures. This thesis resolves numerous unresolved issues of hot-carrier reliability at both room and cryogenic temperature, and develops a general framework for hot-carrier reliability assessment.
Aestheticized abjection in feminist video art, 1996-2009
This thesis examines the work of three video artists -- Pipilotti Rist, Marilyn Minter, and Mika Rottenberg -- who all make work that is simultaneously mesmerizing and repulsive. While Immanuel Kant argued that beauty and disgust are opposed, these works complicate this binary, as does my choice of the more minor terms "pretty" and "gross." My weaker descriptors encapsulate the desensitization to seductive and disgusting imagery that, in the media-saturated context of the late 90s/early 2000s, is the result of their pervasiveness and thus banality. These artists respond to abject feminist performance art of the 1960s and 70s, which some critics at the time worried attracted the male gaze while setting out to avert it. Theorists of disgust, however, have long understood seduction as always already part of disgust, which the artists in "Pretty Gross" set out to deploy strategically. They respond to representations of women as objects of fascination on screen by borrowing resources and formal devices from mass media created to seduce viewers and consumers, but train their lenses instead on traditionally disgusting imagery, from menstrual blood to saliva-coated caviar. Rendering the disgusting palatable, these artists have attracted massive popular audiences and revenue. Yet all have raised a number of ethical quandaries for their critics, who struggle to defend their attempts to reclaim representations of women's bodies from an abusive history. The widespread visibility and influence of their work makes this critical interrogation especially urgent. Ultimately, I argue that Rist, Minter, and Rottenberg reflect, rather than resolve, tensions between ethics and aesthetics, gender and image, as well as attraction and aversion.
Comparisons of harmony and rhythm of Japanese and English through signal processing
Japanese and English speech structures are different in terms of harmony, rhythm, and frequency of sound. Voice samples of 5 native speakers of English and Japanese were collected and analyzed through fast Fourier transform, autocorrelation, and statistical analysis. The harmony of language refers to the spatial frequency content of speech and is analyzed through two different measures, the Harmonics-to-Noise-Ratio (HNR) developed by Boersma (1993) and a new parameter "harmonicity" which evaluates the consistency of the frequency content of a speech sample. Higher HNR values and lower harmonicity values mean that the speech is more harmonious. The HNR values are 9.6±0.6 Hz and 8.9±0.4 Hz and the harmonicities are 27±13 Hz and 41±26 Hz, for Japanese and English, respectively; therefore, both parameters show that Japanese speech is more harmonious than English. A profound conclusion can be drawn from the harmonicity analysis that Japanese is a pitch-type language in which the exact pitch or tone of the voice is a critical parameter of speech, whereas in English the exact pitch is less important. The rhythm of the language is measured by "rhythmicity", which relates to the periodic structure of speech in time and identifies the overall periodicity in continuous speech. Lower rhythmicity values indicate that the speech for one language is more rhythmic than another. The rhythmicities are 0.84±0.02 and 1.35±0.02 for Japanese and English respectively, indicating that Japanese is more rhythmic than English. An additional parameter, the 80th-percentile frequency, was also determined from the data to be 1407±242 Hz and 2021±642 Hz for the two languages. They are comparable to the known values from previous research.
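A minimal autocorrelation-based HNR estimate in the spirit of Boersma (1993); this is simplified (no windowing correction) and assumes a voiced, non-silent frame longer than the longest candidate pitch period:

    import numpy as np

    def hnr_db(frame: np.ndarray, fs: int, fmin=75.0, fmax=500.0) -> float:
        """HNR = 10*log10(r_max / (1 - r_max)), where r_max is the peak of
        the normalized autocorrelation within the plausible pitch-lag range."""
        x = frame - frame.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        r = ac / ac[0]                           # normalize so r[0] = 1
        lo, hi = int(fs / fmax), int(fs / fmin)  # candidate pitch lags
        r_max = r[lo:hi].max()
        return 10.0 * np.log10(r_max / (1.0 - r_max))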
Disruptive technologies : an expanded view
The awareness of disruptive technologies and their potential effects on established firms was recently brought to the forefront of business thinking by Clayton Christensen in his book "The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail". While Christensen's work offers a fascinating view of technology change and the potentially lethal impact it may have on incumbent firms, his perspective on the contribution of technology change to product attributes and resultant firm disruption appears, in my opinion, to be too limiting. The specific areas addressed by my thesis include: -- The expansion of Christensen's definition of disruptive technologies, -- An expanded understanding of the product attributes and subsequent competitive advantage that may result from the exploitation of an emerging technology, -- The role of market segmentation and technology interaction on the diffusion of an emerging technology and potential disruption of an incumbent technology, -- Inclusion of the potential for the down-market migration of products based on disruptive technologies in addition to the up-market scenario. The objective of my thesis is to broaden the spectrum of outcomes associated with technology change in order to help firms formulate a more comprehensive technology strategy. A framework for thought is provided regarding the potential outcomes of the exploitation of an emerging technology (possibly disruptive) in the context of product attributes and market influence, in which the reader is encouraged to consider his or her own experiences.
Fast offset compensation for a 10Gbps limit amplifier
A novel offset voltage compensation method is presented that significantly modifies the existing tradeoff between control loop bandwidth, and therefore total compensation time, and total output jitter. The proposed system achieves comparable output jitter performance to traditional approaches while reducing the total compensation time by nearly three orders of magnitude. Traditional offset compensation methods are based on simple offset measurement techniques that generally rely on passive compensation blocks and exhibit a direct inverse relationship between total compensation time and resulting output jitter. Therefore, current high-speed data-link systems suffer from extremely long offset compensation loop settling times in order to satisfy the strict protocol jitter specifications. In the proposed system, the new CMOS peak detector design is the enabling component that allows us to break this relationship and achieve extremely fast settling behavior while preventing data dependence of the control signal. Simulated results show that the implemented system can achieve output jitter performance similar to existing methods while dramatically improving the compensation time. Specifically, the proposed system can achieve less than 2 ps of peak-to-peak jitter, or less than 700 fs of RMS jitter, while reducing the total compensation time from roughly 500 µs to less than 1 µs. The system was implemented in National Semiconductor's CMOS9 0.18 µm CMOS process. Packaged parts will be tested to verify agreement with simulated performance.
Decision analysis for geothermal energy
One of the key impediments to the development of enhanced geothermal systems is a deficiency in the tools available to project planners and developers. Weak tool sets make it difficult to accurately estimate the cost and schedule requirements of a proposed geothermal plant, and thus make it more difficult for those projects to survive an economic decision-making process. This project, part of a larger effort led by the Department of Energy, seeks to develop a suite of decision analysis tools capable of accurately gauging the economic costs and benefits of geothermal projects with uncertain outcomes. In particular, this project seeks to adapt a set of existing tools, the Decision Aids for Tunnelling, to the context of well-drilling, and make them suitable for use as a core software set around which additional software models can be added. We assess the usefulness of the Decision Aids for Tunnelling (DAT) by creating two realistic case studies to serve as proofs of concept. These case studies are then put through analyses designed to reflect project risks to which geothermal wells are vulnerable. We find that the DAT have sufficient flexibility to model geothermal projects accurately and provide cost and schedule distributions on potential outcomes of geothermal projects, and recommend methods of usage appropriate to well-drilling scenarios.
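As a toy illustration of the kind of Monte Carlo cost/schedule analysis the DAT perform (the section lengths, advance rates, and unit costs below are hypothetical):

    import random

    SECTIONS = [  # (length m, (rate lo, hi) m/day, (cost lo, hi) $/m)
        (500,  (20, 60), (800, 1200)),
        (1500, (10, 40), (1000, 2000)),
        (2000, (5, 25),  (1500, 3000)),
    ]

    def simulate_well():
        """One sampled realization of total drilling time and cost."""
        days = cost = 0.0
        for length, rate, unit_cost in SECTIONS:
            days += length / random.uniform(*rate)
            cost += length * random.uniform(*unit_cost)
        return days, cost

    runs = [simulate_well() for _ in range(10_000)]
    durations = sorted(d for d, _ in runs)
    costs = sorted(c for _, c in runs)
    print("median duration (days):", round(durations[len(runs) // 2]))
    print("90th-percentile cost ($):", round(costs[int(0.9 * len(runs))]))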
High-power target development for accelerator-based neutron capture therapy
The production of clinically sufficient dose rates in Accelerator-based Neutron Capture Therapies (ABNCT) requires targets that can withstand ion beams of 2-10 kW or higher. Designing such a target requires knowledge of the current density profile, which can exceed 1 mA/cm². A method has been developed to quantify the two-dimensional current intensity by utilizing the positrons emitted from the products of either the ¹²C(d,n) or ¹¹B(p,n) reaction. A desktop scanner was used to convert the dose profile measured with MD-55-2 radiochromic film into a map of beam current intensity. Analytic calculations coupled with Monte Carlo methods determined the resolution of this technique to be 0.22±0.01 mm. Liquid gallium metal was investigated as a possible coolant. Qualitative and quantitative comparisons between single submerged impinging jets of liquid gallium and water at low flowrates were supplemented with computational fluid dynamics. Experiments using an array of submerged jets were conducted to determine area-averaged Nusselt number correlations for water and gallium over a Reynolds number range of 7000<Re<38000. The spreading factor, βmax, was introduced into the gallium correlation to account for surface wetting effects. Area-averaged heat transfer coefficients, h, produced by an array of gallium jets were found to exceed those of water for Re>13500. At a Reynolds number of 35000 an h of 10⁵ W/m²K was measured with the gallium array, compared to 5.5×10⁴ W/m²K for water. Simulations of the thermal and mechanical stresses found that a gallium-cooled beryllium target could withstand beam powers of up to 20.2 kW.
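For reference, an area-averaged Nusselt correlation converts to a heat transfer coefficient via h = Nu*k/d; the sketch below uses a generic correlation form with placeholder constants, not the fitted values from this work:

    def h_from_correlation(Re, Pr, k, d, C=0.5, m=0.5, n=0.4):
        """h from a generic correlation Nu = C * Re**m * Pr**n."""
        Nu = C * Re**m * Pr**n
        return Nu * k / d  # W/m^2-K

    # Rough liquid-gallium properties: k ~ 29 W/m-K, Pr ~ 0.025
    print(h_from_correlation(Re=35_000, Pr=0.025, k=29.0, d=0.005))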
Developments and advances in nonlinear terahertz spectroscopy
Nonlinear terahertz (THz) spectroscopy is a rapidly developing field, which is concerned with driving and observing nonlinear material responses in the THz range of the electromagnetic spectrum. In this thesis, I present several advances in nonlinear THz spectroscopy that expand the range of systems in which responses may be driven, the types of responses that may be initiated, and the way in which these responses may be observed. Sufficiently strong THz pulses are generated using the tilted-pulse-front technique, and are collected, focused, and detected using a THz spectrometer specifically designed for maximum peak THz electric field strength and maximum flexibility, allowing for a wide range of experimental geometries to be implemented. Further enhancement in the peak THz electric field strength is obtained through the use of metamaterial structures, which concentrate free-space THz fields in their antenna gaps. Impact ionization was observed in high-resistivity silicon, a material in which no nonlinear THz response had been previously seen, using metamaterial structures to enhance free-space THz electric fields. Using three-dimensional metamaterial structures, the THz magnetic field is shown to also be capable of driving ionization processes both in high-resistivity silicon as well as air. Using metamaterial structures with open gaps, the THz electric field is shown to induce breakdown in air at both high and low pressures due to field ionization processes involving the gold metamaterial antennas. Furthermore, THz-driven electromigration of the gold metamaterial antennas is observed. Probing of THz-driven structural changes in both vanadium dioxide and perovskite ferroelectrics is demonstrated using femtosecond X-ray pulses from the LCLS facility at the SLAC National Accelerator Laboratory. Finally, ongoing results involving energetic materials, stimulated Raman measurements, and Stark effect measurements are discussed. This work, coupled with the ongoing expansion of nonlinear THz techniques and potential applications, demonstrates the continued development of nonlinear THz spectroscopy into a robust and valuable method for investigating fundamental processes in a multitude of systems.
New sublinear methods in the struggle against classical problems
We study the time and query complexity of approximation algorithms that access only a minuscule fraction of the input, focusing on two classical sources of problems: combinatorial graph optimization and manipulation of strings. The tools we develop find applications outside of the area of sublinear algorithms. For instance, we obtain a more efficient approximation algorithm for edit distance and distributed algorithms for combinatorial problems on graphs that run in a constant number of communication rounds. Combinatorial Graph Optimization Problems: The graph optimization problems we consider include vertex cover, maximum matching, and dominating set. A graph algorithm is traditionally called a constant-time algorithm if it runs in time that is a function of only the maximum vertex degree, and in particular, does not depend on the number of vertices in the graph. We show a general local computation framework that allows for transforming many classical greedy approximation algorithms into constant-time approximation algorithms for the optimal solution size. By applying the framework, we obtain the first constant-time algorithm that approximates the maximum matching size up to an additive εn, where ε is an arbitrary positive constant and n is the number of vertices in the graph. It is known that a purely additive εn approximation is not computable in constant time for vertex cover and dominating set. We show that nevertheless, such an approximation is possible for a wide class of graphs, which includes planar graphs (and other minor-free families of graphs) and graphs of subexponential growth (a common property of networks). This result is obtained via locally computing a good partition of the input graph in our local computation framework. The tools and algorithms developed for these problems find several other applications: -- Our methods can be used to construct local distributed approximation algorithms for some combinatorial optimization problems. -- Our matching algorithm yields the first constant-time testing algorithm for distinguishing bounded-degree graphs that have a perfect matching from those far from having this property. -- We give a simple proof that there is a constant-time algorithm distinguishing bounded-degree graphs that are planar (or in general, have a minor-closed property) from those that are far from planarity (or the given minor-closed property, respectively). Our tester is also much more efficient than the original tester of Benjamini, Schramm, and Shapira (STOC 2008). Edit Distance: We study a new asymmetric query model for edit distance. In this model, the input consists of two strings x and y, and an algorithm can access y in an unrestricted manner (without charge), while being charged for querying every symbol of x. We design an algorithm in the asymmetric query model that makes a small number of queries to distinguish the case when the edit distance between x and y is small from the case when it is large. Our result in the asymmetric query model gives rise to a near-linear time algorithm that approximates the edit distance between two strings to within a polylogarithmic factor. For strings of length n and every fixed ε > 0, the algorithm computes a (log n)^O(1/ε) approximation in n^(1+ε) time. This is an exponential improvement over the previously known near-linear time approximation factor 2^Õ(√(log n)) (Andoni and Onak, STOC 2009; building on Ostrovsky and Rabani, J. ACM 2007). The algorithm of Andoni and Onak was the first to run in O(n^(2-δ)) time, for any fixed constant δ > 0, and obtain a subpolynomial, n^o(1), approximation factor, despite a long sequence of prior papers. We provide a nearly matching lower bound on the number of queries. Our lower bound is the first to expose hardness of edit distance stemming from the input strings being "repetitive", which means that many of their substrings are approximately identical. Consequently, our lower bound provides the first rigorous separation on the complexity of approximation between edit distance and Ulam distance.
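A toy sketch of the random-rank greedy-matching oracle idea behind such local computation frameworks (simplified and practical only for small graphs; the rank assignment, sampling count, and example graph are arbitrary):

    import random

    def matching_size_estimate(edges, samples=500, seed=0):
        """Estimate the greedy maximal matching size by sampling edges and
        asking a local oracle: an edge is in the matching iff none of its
        lower-ranked neighboring edges is."""
        rng = random.Random(seed)
        rank = {e: rng.random() for e in edges}
        by_vertex = {}
        for e in edges:
            for v in e:
                by_vertex.setdefault(v, []).append(e)
        memo = {}
        def in_matching(e):
            if e not in memo:
                nbrs = [f for v in e for f in by_vertex[v] if f != e]
                memo[e] = all(not in_matching(f)
                              for f in sorted(nbrs, key=rank.get)
                              if rank[f] < rank[e])
            return memo[e]
        hits = sum(in_matching(rng.choice(edges)) for _ in range(samples))
        return hits / samples * len(edges)   # estimated matched-edge count

    cycle = [(i, (i + 1) % 6) for i in range(6)]  # maximal matchings: 2-3 edges
    print(matching_size_estimate(cycle))

Because each oracle call only explores lower-ranked neighboring edges, the expected work per query depends on the maximum degree rather than on the number of vertices, which is the essence of the constant-time guarantee.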
Field emission from silicon
A field emitter serves as a cold source of electrons. It has practical applications in various fields such as field emission flat panel displays, multiple electron-beam lithography, ion propulsion/micro-thrusters, radio frequency sources, information storage technology, and electronic cooling. Silicon is an attractive material for building electron field emitters. To understand the physics of electron field emission from silicon and to push technologies of making quality field emitter arrays present both opportunities and challenges. This work focuses on an experimental study of electron field emission phenomena from silicon field emitter arrays. We demonstrate electron field emission from both the conduction band and the valence band of silicon simultaneously. A two-band field emission model is presented to explain the experimental data. Theoretical predictions for valence band emission were made in the past; however there was no direct observation until now. Experimental evidence of current saturation in field emission existed in the literature. We also report the observation of current saturation in n-type silicon field emitter arrays. A simple model is presented to account for the results. We report successfully fabricating 1 µm gate-aperture silicon field emitter arrays with a turn-on voltage as low as 14 V. The gate leakage current is observed to be less than 0.01% of the total emission current. Devices show excellent emission uniformity for different sized arrays. The low turn-on voltage is attributed to the small emitter tip radius. It was achieved by isotropic etching of silicon and low temperature oxidation sharpening of the emitter tips.
Cross section generation strategy for HCLWR
High conversion water reactors (HCWR), such as the Resource-renewable Boiling Water Reactor (RBWR), are being designed with axial heterogeneity of alternating fissile and blanket zones to achieve a conversion ratio of greater than one and assure a negative void coefficient of reactivity. This study assesses the generation of few-group macroscopic cross sections for neutron diffusion theory analyses of this type of reactor, in order to enable three-dimensional transient simulations. The goal is to minimize the number of energy groups in these simulations to reduce computational effort. A two-dimensional cross section generation methodology using the Monte Carlo code Serpent, similar to the traditional deterministic homogenization methodology, was used to analyze a single RBWR assembly. Results from two-energy-group and twelve-energy-group diffusion analyses showed an error in multiplication factor of over 1000 pcm, with errors in reaction rates between 10 and 60%. Therefore, the traditional approach is not sufficiently accurate. Instead, a three-dimensional homogenization methodology using Serpent was developed to account for neighboring zones in the homogenization process. A Python wrapper, SerpentXS, was developed to perform branch case calculations with Serpent to parametrize few-group parameters as a function of reactor operating conditions and to create a database for interpolation with the nodal diffusion theory code PARCS. Diffusion analyses using this methodology also showed an error in multiplication factor of over 1000 pcm. The three-dimensional homogenization capability in Serpent allowed for the introduction of axial discontinuity factors in the diffusion theory analysis, needed to preserve Monte Carlo reaction rates and the global multiplication factor. A one-dimensional finite-difference multigroup diffusion theory code, developed in MATLAB, was written to investigate the use of axial discontinuity factors for a single RBWR assembly. The application of discontinuity factors on either side of each axial interface preserved multiplication factor and reaction rate estimates between transport theory and diffusion theory analyses to within statistical uncertainty. Use of this three-dimensional assembly homogenization approach in generating few-group macroscopic cross sections and axial discontinuity factors as a function of operating conditions will help further research in transient diffusion theory simulations of axially heterogeneous reactors.
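As a stripped-down illustration of the diffusion-theory machinery involved, the sketch below solves a one-group, one-dimensional finite-difference eigenvalue problem by power iteration; the cross sections and slab dimensions are placeholders, and the thesis analyses are multigroup with discontinuity factors:

    import numpy as np

    N, L = 100, 200.0                        # mesh cells, slab width (cm)
    dx = L / N
    D, sig_a, nu_sig_f = 1.3, 0.012, 0.015   # placeholder cross sections

    # Loss operator: -D d2/dx2 + sigma_a, with zero-flux boundaries
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 2 * D / dx**2 + sig_a
        if i > 0:
            A[i, i - 1] = -D / dx**2
        if i < N - 1:
            A[i, i + 1] = -D / dx**2

    phi, k = np.ones(N), 1.0
    for _ in range(200):                     # power iteration on fission source
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= phi_new.sum() / phi.sum()
        phi = phi_new
    print("k-effective ~", round(k, 5))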
Politics, inherited institutions, and rebellion in the struggle over water futures in Chile
Following a wave of insurgent political action in 2011, the hegemony that governs life in Chile appears to be increasingly threatened. One area of politicized struggle has coalesced around water law. On one side of the struggle, water utilities, agro-export firms and entrenched political actors seek to retain the water laws inherited from the nation's 1973-1990 dictatorship. On the other, socio-political movements and recently elected political actors are challenging what they see as the political content of those laws that prioritize private economic gains.
Electronic tools for designing charts and graphs
This thesis explores the issues involved in designing an interactive chart- and graph-making system, especially tailored to the needs of the graphic designer. It defines a set of user interface requirements and describes the implementation of the prototype software system.
Study of biotechnology's impact on the market for anti-cancer drugs
In studies of creative destruction, scholars agree that, within research-intensive industries, the demise of incumbents is significantly determined by their lower productivity in researching the radically new technology (Henderson, 1993). Such differences in the research competence of incumbent vs. entrant firms are explained in the literature through theories about established vs. de novo firms (e.g., Nelson and Winter, 1982). A disconnect arises because, frequently, the most competent entrants are established (experienced) firms themselves (i.e., diversifying entrants). In fact, studies where diversifying and de novo entrants are compared find in the former the same mechanisms that scholars have argued take place in incumbent firms (e.g., Mitchell and Singh, 1993; Carroll et al., 1996). With this insight in mind, I present in this dissertation a study that decouples market incumbency from organizational experience. I walk away from the current hypothesis of incompetence in research and development of a radical new technology in the case of incumbents. I instead construct a framework highlighting the competitive disadvantages (organizational inertia) and advantages (competence re-use) that apply to all established firms (incumbents and diversifying entrants) vis-a-vis de novo firms.
Assessing the viability of level III electric vehicle rapid-charging stations
This is an analysis of the feasibility of electric vehicle rapid-charging stations at power levels above 300 kW. Electric vehicle rapid-charging (reaching above 80% state-of-charge in less than 15 minutes) has been demonstrated, but concerns have been raised about the high levels of electrical power required to recharge a high-capacity battery in a short period of time. This economic analysis is based on an existing project run by MIT's Electric Vehicle Team, of building a 200-mile range battery electric sedan capable of recharging in 10 minutes. The recharging process for this vehicle requires a power source capable of delivering 350 kW; while this is possible in controlled laboratory environments, this thesis explores the viability of rapid-charging stations on the grid-scale and their capability of servicing the same volume of vehicles as seen by today's gas stations. At this volume, building a rapid-charging station is not only viable, but has the potential to become a lucrative business opportunity.
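A back-of-envelope check of the headline figures; the charger-to-pack efficiency is an assumption:

    power_kw = 350.0                   # charger output
    minutes = 10.0                     # target recharge time
    efficiency = 0.92                  # assumed charger-to-pack efficiency
    energy_kwh = power_kw * (minutes / 60.0) * efficiency   # ~53.7 kWh stored
    range_miles = 200.0
    print(f"{energy_kwh:.1f} kWh -> {1000 * energy_kwh / range_miles:.0f} Wh/mile implied")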
Management responding strategy to customer online reviews : a case study of hotels in Taiwan
The hospitality industry in Taiwan is experiencing unprecedented opportunities and challenges. For decades, the industry has been growing rapidly, but the sudden decrease in visitor growth has led to increased competition among hotels. To attract more international guests, hoteliers have started to manage their online reputations by responding to online reviews. In this study, we analyzed online customer reviews and the responses of 31 hotels. A clear trend was observed: hotels are putting more resources into online management responses. We also interviewed ten hotels to learn how they manage these responses, what challenges they face in responding to online customer reviews, and how they use online reviews for other management purposes. We found that most hotels in the case study manage customer responses reactively rather than proactively; they lack strategic goals and methods for evaluating ROI. We also found that executive involvement and the hotel's internal communication style affect how customer responses can be used as a tool to improve the service-recovery process. Some hotels also use online customer reviews as a source for employee performance evaluation and link customer feedback to incentive schemes. Future studies should further investigate how hotels' internal communication styles, response strategies, and behavior affect service recovery and customer loyalty. The use of online customer reviews to help improve other aspects of management, such as human resource management, is also suggested for further study.
Enhanced personalized consumer experiences or infringement of privacy?
The consumer landscape has changed: the balance between supply and demand has shifted, with consumers facing a variety of products and services that are hard to tell apart, resulting in poor brand loyalty. Customers are now overwhelmed with a flood of constant information, leading to reduced attention spans that brands have to fight for. Brands have to be innovative and more efficient in the way they interact with consumers. Brands also have to make their way through the increasing amount of data collected, and focus only on value-adding data. Furthermore, in our society where speed now prevails, emotions have become the principal driver of people's decisions. Hence the opportunity provided by emotion analytics, which enables companies to analyze customers' cognitive behavior and emotional responses towards their brand, and to adequately enhance their customer experience to gain a competitive edge. The study includes a detailed overview of why brands should focus their investment in analyzing customer emotions, as well as a description of how emotions work, and thus how they can be measured through several different but complementary technologies. The study also includes interviews of players in the industry, and an overview of current and potential future applications of emotion analytics for brands looking to improve their customer experience. Last, we investigate how people perceive emotion technologies and their impact, how ethical and legal issues can be tackled, and what can be expected regarding the future of emotion economics.
Degradation despite regulation : water pollution in Billings Basin, Sao Paulo, Brazil
Billings Basin, a water reservoir in the Sao Paulo Metropolitan Region of Brazil, suffers from pollution. When the Billings Dam was built, industrial and energy sector interests prevailed, encouraged by Brazil's authoritarian government. The dam was part of a system that pumped water from the two most important rivers in the Sao Paulo Metropolitan Region for energy generation at the Henry Borden hydroelectric power plant. While the reservoir was originally constructed as part of a large hydroelectric project, it has since become a crucial source of drinking water for the region. However, the engineer who planned the system warned that if no corrective measures were taken, the scheme would pollute the water in the reservoir. This is indeed what happened. In the mid-1970s, two state environmental laws were enacted to protect watershed areas in Sao Paulo. Rapid development, including the growth of uncontrolled settlements (favelas), also occurred around Billings Basin. The reservoir became polluted. However, the extent of pollution, while serious, did not seem proportional to the amount of demographic and developmental pressure on the environment of the reservoir. This paper asks whether the state environmental laws, aimed at protecting watershed areas of the Sao Paulo Metropolitan Region, had a substantial role in preventing total environmental degradation in Billings Basin and, if not, which external pressures contributed to maintaining the pollution below a critical point. The study suggests that the laws, in fact, did not seem to have been responsible for the lower than expected levels of pollution because they were ineffective and had intrinsic weaknesses. The pollution in Billings Basin should have reached critical levels because of demographic and developmental pressures. However, the history of the basin showed that when the pollution was about to reach critical levels, external pressures (and not the laws) prompted efforts to guarantee a less than catastrophic level of environmental degradation. These external pressures seem to have been linked to such factors as the positive response to environmental issues by new political groups ascendant in state and metropolitan politics in Sao Paulo, pragmatic action in resource management by state water agencies, and the community-based actions of organized low-income groups living around the reservoir. These external forces may indicate the emergence of a democratization process in Brazil, and may herald a more decentralized and participative model for environmental management in the Sao Paulo Metropolitan Region.
Surface structural changes of perovskite oxides during oxygen evolution in alkaline electrolyte
Perovskite oxides such as Ba₀.₅Sr₀.₅Co₀.₈Fe₀.₂O₃₋δ (BSCF82) are among the most active catalysts for the oxygen evolution reaction (OER) in alkaline solution reported to date. In this work it is shown via high resolution transmission electron microscopy (HRTEM) and Raman spectroscopy that oxides such as BSCF82 rapidly undergo amorphization at their surface under OER conditions, which occurs simultaneously with an increase in the pseudocapacitive current and OER activity. This amorphization was not detected at potentials below those where significant OER current was observed. Lower concentrations of Sr²⁺ and Ba²⁺ are found in the amorphous regions of BSCF82. Perovskite oxides with lower OER activities such as LaCoO₃ (LCO) and LaMnO₃ (LMO) remained crystalline under identical electrochemical conditions. In addition, the OER activity and tendency for amorphization are found to correlate with the oxygen p-band center as calculated using density functional theory. This work illustrates that the surface structure and stoichiometry of oxide catalysts can differ significantly from the bulk during catalysis, and that understanding these phenomena is critical for designing highly active and stable catalysts for the OER.
Past price and trend effects in promotion planning : from prediction to prescription
Sales promotions are a popular type of marketing strategy in which products are promoted using short-term price reductions to stimulate demand and increase sales. These promotions are widely used in practice by retailers, who must take into consideration both the direct and indirect effects of price promotions on consumers and, as a result, on demand. In this thesis, we consider the impact of two of these indirect effects on the promotion planning process. First, we consider the promotion planning problem for fast-moving consumer goods. The main challenge in this setting is the negative indirect effect of promotions on future sales: while temporary price reductions substantially increase demand, retailers observe a slowdown in sales in the periods that follow.
Water Quality Modeling in Kranji Catchment
This thesis describes the process and results of applying the Soil and Water Assessment Tool (SWAT) to characterize bacterial fate and transport in the Kranji Catchment of Singapore. The goal of this process is to predict bacterial loading to Kranji Reservoir under the forcing of weather and other variables. Necessary data and input values were collected or estimated and input into the model. One of the most important of these values is the bacterial die-off rate. This rate must be accurate for the model to provide accurate predictions of bacterial loadings. In order to obtain a value for the bacterial die-off rate, an attenuation study was conducted. The results of this study were not typical. Bacterial growth was observed to occur during dark hours, and decay was observed to occur during sunlit hours. The resulting light and dark decay constants were combined for use in the model. The specific bacterial loading rates associated with the various agricultural activities occurring in the catchment are not available and thus were roughly estimated. Point source loadings were also estimated. Four years of model simulation daily output were analyzed, and results for specific subcatchments with differing character are discussed. This application of SWAT shows a good ability to make qualitative predictions of the presence or absence of bacteria; however, quantitative agreement between model predictions and field observations is poor. This run of the model is like a first draft: more refinement and more information are needed before it will make accurate predictions; however, the framework is in place.
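A minimal sketch of first-order die-off bookkeeping with separate light and dark constants (the values below are placeholders, not the study's measurements; the negative dark-hour constant represents the observed net growth):

    from math import exp

    K_LIGHT, K_DARK = 1.5, -0.3        # 1/day; negative means net growth

    def concentration(c0, hours):
        """March an initial concentration forward hour by hour."""
        c = c0
        for h in range(hours):
            k = K_LIGHT if 6 <= h % 24 < 18 else K_DARK  # crude day/night split
            c *= exp(-k / 24.0)                          # first-order step
        return c

    print(concentration(1e4, 72))      # e.g. CFU/100 mL after three days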
Magnetically enhanced centrifugation for continuous biopharmaceutical processing
Effective separation and purification of biopharmaceutical products from the media in which they are produced continues to be a challenging task. Such processes usually involve multiple steps, and the overall product loss can be significant. As an integrative technique, high gradient magnetic separation (HGMS), together with the application of functional magnetic particles, provides many advantages over traditional techniques. However, HGMS has a number of drawbacks, and its application is limited because it is inherently a batch process and it is difficult to recycle the magnetic nanoparticles. This thesis explores the development of a new type of continuous magnetic separation process, called magnetically enhanced centrifugation (MEC), which exploits the interactions of magnetic particles with magnetic field gradients, forced convective flows, and large centrifugal forces. Magnetically susceptible wires in a uniform magnetic field facilitate the capture and aggregation of magnetic particles on the wires, and a centrifugal force perpendicular to the magnetic force conveys the particle sludge parallel to the wires in a continuous mode. The primary focus of this thesis is multi-scale modeling and simulation to understand the underlying physics of MEC processes. The potential of MEC as an effective unit operation for biopharmaceutical downstream processing has been demonstrated. Unlike traditional batch-mode HGMS, MEC has the great advantage that it can be operated continuously, as magnetic particles captured on the wire surface are constantly removed.
Impact of rim weight and torque in discus performance
The discus throw is one of the oldest sporting events in track and field. Despite this, there has not been much dynamic analysis of the throw and of the athlete's ability to apply the proper forces and torques to the discus. This paper looks at measuring the ability of the athlete to create spin on the discus by applying torque during the throwing process. The results from the experiment described in this paper were inconclusive, though there was a general trend that as the normal force into the discus increased, the angular velocity increased. However, this correlation was weak in the data collected.
Evaluation infrastructure for mobile distributed applications
Sophisticated applications that run on mobile devices have become commonplace. Within the wide realm of mobile software applications there exists a significant number that make use of networking in some form. Unfortunately, such distributed mobile applications are inherently difficult to evaluate. Conventional evaluations of such distributed applications are limited to small, real-world deployments consisting of, perhaps, a handful of phones. Such tests often lack the number of users needed to exercise the application's performance meaningfully. Moreover, these experiments do not scale and are not repeatable. To address these issues, we sought to evaluate distributed applications in a virtual environment. Besides being cheaper, such evaluations are reproducible and scale significantly better. This thesis documents our efforts toward this goal. We discuss the designs that we iterated through, along with the problems we faced in each of them. We hope these lessons will inform future designs that can address the challenges we were unable to solve efficiently.
Essays on the counterintuitive consequences of labor policies in service industries
In essays one and two, I examine how unstable schedules affect financial performance. In essay one, using 52 weeks of data from over 1,000 stores and more than 15,000 employees of a specialty retailer, I estimate the effect of unstable schedules on store productivity. I use an instrumental variable approach and a natural experiment to partially address the possible endogeneity of scheduling decisions. I find evidence that increasing the adequacy and consistency of employees' hours improves employee and store productivity and find partial support for the positive effect of predictability. To study the policy impact of these findings, I build a behavioral agent-based model of scheduling in essay two. My model provides a platform to conduct counterfactual analyses and thus increases the external validity of my findings.
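As a sketch of the identification strategy in essay one, the toy simulation below shows how an instrumental variable can recover a causal effect that ordinary least squares misses when the scheduling variable is endogenous. The data-generating process, instrument, and effect sizes are invented for illustration and bear no relation to the retailer's data.

```python
import numpy as np

# Toy instrumental-variables simulation: Z shifts the endogenous
# scheduling variable X but affects productivity Y only through X.
# All coefficients and the data-generating process are invented.
rng = np.random.default_rng(0)
n = 5_000
z = rng.standard_normal(n)                       # instrument
u = rng.standard_normal(n)                       # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.standard_normal(n)   # endogenous regressor
y = 1.5 * x + 0.9 * u + rng.standard_normal(n)   # outcome; true effect 1.5

beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # biased upward by u
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # Wald / IV estimate
print(f"OLS: {beta_ols:.2f}   IV: {beta_iv:.2f}   true: 1.50")
```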
Inductive compensation of operational amplifiers in feedback circuits
In this thesis I designed, implemented, and tested an integrated-circuit feedback compensator that uses inductors as compensation elements. Introducing inductors as feedback elements makes it possible to implement lead compensators using shunt topologies, which preserve the closed-loop response of a system while compensating the open-loop characteristics. My chip consisted of a marginally unstable two-pole amplifier, and a compensated but otherwise identical amplifier. Comparing the step responses of the original and compensated systems proved that the compensator successfully stabilized the unstable system. I used frequency-domain analysis to determine how much phase margin my compensator added to the system. After characterizing and canceling out the effects of input and output loading, and the attenuation of my output buffer, I found that my compensator added 41.4° of phase to the system. This was less than the 65° that it was designed for, but more than enough to prove the feasibility of my design.
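A minimal numerical sketch of the underlying idea: a two-pole loop with nearly zero phase margin gains substantial margin from a lead compensator. The gain, pole, and zero values below are assumed for illustration and are not the thesis's circuit parameters.

```python
import numpy as np

# Minimal sketch: phase margin of a two-pole loop before and after a
# lead compensator. Gain, pole, and zero values are illustrative
# assumptions, not the parameters of the thesis chip.
w = np.logspace(2, 8, 200_000)                 # rad/s
p1, p2 = 1e3, 1e4                              # loop poles (assumed)
L0 = 1e5 / ((1 + 1j*w/p1) * (1 + 1j*w/p2))     # nearly zero phase margin

z, p = 5e5, 5e6                                # lead zero/pole (assumed)
L1 = L0 * (1 + 1j*w/z) / (1 + 1j*w/p)          # lead-compensated loop

def phase_margin(L):
    """Phase margin (deg) at the unity-gain crossover frequency."""
    i = np.argmin(np.abs(np.abs(L) - 1.0))
    return 180.0 + np.degrees(np.angle(L[i])), w[i]

for name, L in [("uncompensated", L0), ("compensated", L1)]:
    pm, wc = phase_margin(L)
    print(f"{name}: PM = {pm:.1f} deg at {wc:.3g} rad/s")
```

With these assumed values the uncompensated loop shows well under one degree of margin, while the lead network adds roughly fifty degrees at the new crossover.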
A robust multi-server chat application for dynamic distributed networks
This thesis presents the design and implementation of a robust chat application for dynamic distributed networks. The application uses a decentralized client-server communication model and a reliable communication service to make it robust. The application proved to be a critical support tool for the MIT Lincoln Laboratory teams participating in the 2004 Joint Expeditionary Force Experiment. The thesis also describes an implementation of several tools for monitoring and analyzing the performance of the application.
Cooperate to compete : composable planning and inference in multi-agent reinforcement learning
Cooperation within a competitive social situation is an essential part of human social life. This requires knowledge of teams and goals, as well as an ability to infer the intentions of both teammates and opponents from sparse and noisy observations of their behavior. We describe a formal generative model that composes individual planning programs into rich and variable teams. This model constructs optimal coordinated team plans and uses these plans as part of a Bayesian inference of collaborators and adversaries of varying intelligence. We study these models in two environments: the complex continuous Atari game Warlords and a grid-world stochastic game, and compare our model with human behavior.
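The inference side of such a model can be illustrated in miniature: assume an agent acts noisily-rationally (softmax over action values) toward one of several candidate goals, and invert the policy with Bayes' rule. The one-dimensional world below is a toy stand-in, not the Warlords or grid-world environment of the thesis.

```python
import numpy as np

# Toy Bayesian goal inference: a noisily-rational agent on a line picks
# actions (-1 or +1) by softmax over negative distance-to-goal. Observing
# its moves, we invert the policy with Bayes' rule. All values assumed.
goals = [0, 10]
beta = 2.0                      # rationality parameter (assumed)

def action_probs(s, g):
    q = np.array([-abs(s - 1 - g), -abs(s + 1 - g)])   # values of -1, +1
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

observed = [(5, +1), (6, +1), (7, +1)]   # agent moves right three times
posterior = np.ones(len(goals)) / len(goals)
for s, a in observed:
    idx = 0 if a == -1 else 1
    likelihood = np.array([action_probs(s, g)[idx] for g in goals])
    posterior *= likelihood
    posterior /= posterior.sum()

print({g: round(p, 3) for g, p in zip(goals, posterior)})  # favors goal 10
```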
Three-dimensional imaging of multiphase flows : from bubbles to sneezes
Experimental spray flow analysis is a difficult fluid dynamics problem because of the high optical density of many sprays. Flow features such as ligaments and droplets break off the bulk liquid volume during the atomization process and often occlude each other in images of sprays. Therefore, accurate feature detection and measurement requires advanced three-dimensional (3D) imaging techniques. In this thesis, 3D computational photographic methods including light field imaging (LFI) and synthetic aperture (SA) refocusing are combined and extended to resolve multiphase flows in 3D over time. Multiple photographs of the same scene are recorded with a large depth of field by each of the cameras in an array. After calibrating the cameras, images from each of the cameras are transformed and combined at each desired depth to construct a 3D focal stack of the scene. Each depth slice image has a narrow depth of field. Features that are physically located at a particular depth appear in focus, while objects located at other depths appear blurred. The SA output focal stack images can be filtered to physically locate features that are small relative to the field of view. However, this task becomes more difficult for relatively larger features due to the presence of bigger out-of-focus blur artifacts. In this thesis, a Synthetic Aperture Feature Extraction (SAFE) technique has been developed to measure blobs in 3D. First, raw images from each of the array cameras are preprocessed. Blobs are detected and converted to white pixels, while the rest of the image is made black. These binary images are then refocused using a multiplicative refocusing method that only preserves the detected blobs in the neighborhood of their physical 3D location. For blobs that can be approximated as spheres, 3D centroids and radii can then be reliably extracted after post-processing the focal image stack. This process can be repeated over time while tracking particle motion. As a result, 3D spatial, size, and velocity data distributions can be calculated as functions of time to better understand the flow dynamics and characteristics. The SAFE technique has been verified using simulations and experiments involving flow of spherical soap bubbles in air. This 3D SAFE method is also applied to the emission of mucosalivary fluid from the mouth during sneezing. Sneezes feature turbulent, multiphase flows containing potentially pathogen-bearing droplets that can play a key role in the spread of numerous infectious diseases, including influenza, SARS, and, possibly, Ebola. The range of contamination of the droplets is largely determined by their size. Despite recent efforts, no consensus on the drop size distribution from violent expirations can be found in the literature. This uncertainty inhibits a mechanistic understanding of disease transmission. Here, high-speed imaging is used to visualize previously unreported dynamics of fluid fragmentation in detail at the exit of the mouth. Droplet radii, positions, velocities, and other measurements are calculated using blob detection and tracking. This is done in two dimensions by recording the scene with a high-speed side and top camera. 3D experiments are then performed using an array of nine cameras and implementing the aforementioned 3D SAFE imaging method. The 3D sneeze data are important for a more complete understanding of the range and contamination potential of airborne disease transmission.
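A stripped-down sketch of the multiplicative refocusing step at the heart of SAFE, using a simple shift-based parallax model in place of calibrated homographies: binary blob masks from each camera are shifted for a candidate depth and multiplied, so a blob survives the product only near its true depth. The geometry and all numbers are illustrative.

```python
import numpy as np

# Sketch of SAFE-style multiplicative refocusing on binarized blob
# images, with a shift-based parallax model standing in for calibrated
# homographies. A blob survives the product only near its true depth.
def refocus_multiplicative(masks, baselines, depth):
    """masks: HxW binary arrays, one per camera; baselines: x-offsets."""
    out = np.ones_like(masks[0], dtype=float)
    for mask, b in zip(masks, baselines):
        shift = int(round(b / depth))            # disparity ~ baseline/depth
        out *= np.roll(mask.astype(float), shift, axis=1)
    return out

# Synthetic scene: one blob at depth 2.0 viewed by a 3-camera array
H, W = 64, 64
baselines = [-20.0, 0.0, 20.0]
true_depth = 2.0
masks = []
for b in baselines:
    m = np.zeros((H, W))
    d = int(round(b / true_depth))               # per-camera image shift
    m[30:34, 30 - d:34 - d] = 1.0
    masks.append(m)

for depth in (1.0, 2.0, 4.0):
    focal_slice = refocus_multiplicative(masks, baselines, depth)
    print(f"depth {depth}: surviving pixels = {int(focal_slice.sum())}")
```

Only the slice at the true depth retains nonzero pixels; slices at other depths go dark, which is the property that lets blob centroids and radii be extracted reliably.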
Design and performance of Thomas Telford's Bonar Bridge and Mythe Bridge
This paper assesses two cast-iron arch bridges of Thomas Telford (1757-1834) - Bonar Bridge (1810-12) and Mythe Bridge (1824-26) - to draw a broader conclusion about his career in bridge building. The bridges are introduced and Telford's design influences are investigated. While Telford was influenced by theory through the advice of his contemporaries, he was more heavily influenced by experience, especially design precedents, and most of all by his own judgment, which placed great emphasis on both practicality and aesthetics. The structural performance of the two bridges is assessed and compared. The cast-iron arch's ability to resist vertical loading is the main focus of the analysis, following Heyman's framework for limit analysis of arches. Global equilibrium and graphic statics indicate that each rib, when acting alone, is insufficient to support asymmetric loading, demonstrating that the secondary members are necessary, and therefore that neither bridge is grossly overdesigned. Deck-stiffening effects are tested following Billington's method, and are found to be negligible. The spandrel bracing members are found to be sufficient apart from the development of tension forces in Bonar Bridge. The later Mythe Bridge performs slightly better in all areas; overall, however, the performance is very similar. Based on these results, the paper concludes that Telford chose not to refine his design substantially over the course of his career. It is argued that this was a conscious decision, based on the progression of the industry from cast iron towards wrought iron, and that these bridges are significant because they bookend the short golden age of cast iron bridges, of which Telford was the unquestionable master.
Policy Aware Social Miner
The Policy Aware Social Miner (PASM) project focuses on creating awareness of how seemingly harmless social data might reveal sensitive information about a person, which could be potentially abused. It seeks to define good practices around social data mining. PASM allows people to create policies governing the use of their personal information on social networks. Using linked data, PASM semantically enhances the usage restrictions to ensure that potentially sensitive information is identified and appropriate policies are enforced. PASM also enables people to provide refutations for other information about them that is found on the Web. PASM encourages consumers of social information on the Web to use the mined data appropriately by enforcing data policies before returning the search results. PASM addresses a central privacy issue in social data mining: although people know that searches for data about them are possible, they have no way either to control the data that is put on the Web by others or to indicate how they would like to restrict use of their own data. In a user study conducted to measure the performance of PASM in identifying sensitive posts, as compared with the judgments of the study participants, PASM obtained an F-measure of 84% and an accuracy of 80%. Interestingly, PASM demonstrated higher recall than precision, a property valued by the study participants: all but one participant indicated that they would prefer receiving false positives rather than false negatives.
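For context on the reported metrics: an F-measure of 84% with recall above precision is consistent with, for example, the pair of values below. The abstract does not give the underlying confusion counts, so these numbers are hypothetical.

```python
# The abstract reports F-measure 84% with recall above precision; one
# consistent (hypothetical) pair of values, since the underlying
# confusion counts are not given:
precision, recall = 0.78, 0.91                       # assumed values
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
print(f"F-measure = {f1:.2f}")                       # ~0.84
```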
Parylene-based chemical vapor deposition of electroluminescent polymer films used in polymer light emitting diodes
The effects of reaction and deposition conditions on the properties of the conjugated electroluminescent polymer poly(p-phenylene vinylene) (PPV) prepared by parylene-based chemical vapor deposition (CVD) are explored in this thesis. A reactor for CVD of PPV is designed and constructed, and both α,α'-dibromo-p-xylene and α,α'-dichloro-p-xylene are tested as source monomers. By sufficiently baking out the CVD system, reproducible fabrication of unoxidized CVD PPV is achieved for the first time. The optical characteristics of the CVD PPV are in good agreement with solution-processed material, but the polymer does exhibit enhanced aliphatic hydrocarbon incorporation, which can influence the polymer bandgap and therefore emission color. For the same deposition conditions, these hydrocarbon defects are more prominent in polymer prepared from the bromine monomer. By correlating changes in polymer composition with the CVD reaction and deposition conditions, the source of the aliphatic hydrocarbon incorporation is determined to be fragmentation of the starting monomer during the pyrolysis step. Tuning the peak emission color of the CVD polymer through copolymerization is also addressed. Fabrication of device-quality CVD PPV films is achieved for the first time through a greater understanding of how the deposition conditions influence the polymer film structure. Several different growth morphologies are observed below the critical surface polymerization temperature, such as island, transitional, and coalesced or 'layered' growth. By re-designing the reactor configuration to allow film deposition at room temperature, single-layered CVD PPV devices with reasonable turn-on voltages and light output easily seen with the naked eye in a well-lit room are realized. A highly novel, parallel method for selective deposition of CVD PPV is also developed through use of chemical surface modification, achieving one-step patterned growth of the deposited material. Treatment of the substrate with evaporated iron, iron salts, or organo-iron complexes is found to inhibit polymer growth. Spatial control of the inhibitor through microcontact printing of carboxylic-acid-terminated alkanethiols used in conjunction with metal salts, or photolithographic patterning of evaporated metal films, allows fabrication of selectively grown features as small as 5 μm and films as thick as 3500 Å. This is more than sufficient for use as the active element in LEDs, and the selectively grown PPV is successfully incorporated into a functional device. Iron treatment also inhibits the deposition of other parylene-based CVD polymers such as parylene-N and parylene-C, resulting in selectively grown structures on the order of several microns in thickness. For the parylene systems, a wide range of transition metal elements, salts, and organometallic complexes are found to produce the same growth inhibition effect, which suggests that the chemical surface modification approach presented here may be a general technique for controlling the growth of vapor-processed parylene-based polymers.
Characterization of ferromagnetic saturation at 4.2K of selected bulk rare earth metals for compact high-field superconducting cyclotrons
The saturation magnetization of the rare earth ferromagnetic metals gadolinium and holmium was investigated at 4.2 K. Cylindrical samples were placed in a superconducting test magnet, and the induced magnetic field was measured at various applied fields. Data were obtained with Hall sensors mounted at the tips of the cylinders, and an analytical calculation was derived to allow estimation of the saturation magnetization from this surface data. If the metal is saturated in a uniform, vertical magnetic field, the measured field at the surface due to the magnetization of the cylinder is simply the saturation magnetization divided by two. Results show saturation magnetization values ranging from 0.5 to 1.5 T higher than that of iron, establishing the candidacy of these metals for advanced superconducting cyclotron pole tips.
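The factor-of-two relation makes the data reduction a one-line calculation. A sketch with invented Hall-probe readings (the thesis's actual measurements are not reproduced here):

```python
# At the flat end face of a long, uniformly magnetized cylinder, the
# on-axis field contributed by the magnetization is mu0*M_s / 2, so the
# saturation magnetization (expressed in tesla) is twice the measured
# surface contribution. Both field readings below are assumed values.
B_applied = 5.0    # T, applied field without the sample (assumed)
B_measured = 6.7   # T, Hall probe reading at the cylinder tip (assumed)

mu0_Ms = 2.0 * (B_measured - B_applied)
print(f"mu0*M_s = {mu0_Ms:.2f} T")   # 3.4 T, ~1.2 T above iron's ~2.2 T
```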
Integrating sustainability into arts-focused neighborhood development in a hot real estate market
Cities with industrial legacies often seek to redevelop former brownfield sites into opportunities for economic growth. Some of these same cities are also attempting to promote neighborhood-scale arts-oriented development for that same purpose. In this research, I explore whether and how cities with both rapidly intensifying real estate markets and a growing creative economy promote neighborhood-scale arts-oriented development projects. My research is based on the premise that integrating city-wide environmental, social, and economic sustainability into these projects is more likely to create civic spaces that meet the competing long-term interests of multiple stakeholder groups than projects focused on meeting contending needs in separate, dissociated locations. Based on a year-long study of the ARTFarm for Social Innovation in Somerville, Massachusetts, I examine the challenges of implementing mutually reinforcing environmental remediation, arts-based development, and sustainability in a rapidly intensifying real estate market. I base my analysis on key informant interviews, close readings of site planning documents, and other data gathered as a participant-observer at planning meetings. To date, ambiguous land use tenure agreements and a narrow focus on integration within the bounds of a 2.2-acre site have eroded the ARTFarm's ability to pursue multidimensional sustainability and meet stakeholder interests. I conclude that projects like the ARTFarm could act as a staging area and home base for sustainability initiatives and programming on a network of sites rather than being confined to activities on specific, and consequently often problematic, sites. Cities could use these projects as the context to enlist private developers to help fund remediation by ensuring that a portion of the remediated land is returned to the public for well-planned environmental and social uses. Shifting to a coordination role would enable the ARTFarm to deploy a distributed network of urban experiments that seek creative ways to optimize sustainability objectives on publicly owned land.
Parallel multigrid for large-scale LSS
This thesis presents two approaches for efficiently computing the "climate" (long-time average) sensitivities of dynamical systems. Computing these sensitivities is essential to performing engineering analysis and design. The first technique is a novel approach to solving the "climate" sensitivity problem for periodic systems. A small change to the traditional adjoint sensitivity equations results in a method that can accurately compute both instantaneous and long-time averaged sensitivities. The second approach deals with the recently developed Least Squares Sensitivity (LSS) method. A multigrid algorithm is developed that can, in parallel, solve the discrete LSS system. This generic algorithm can be applied to ordinary differential equations such as the Lorenz system. Additionally, this parallel method enables the estimation of climate sensitivities for a homogeneous isotropic turbulence model, the largest-scale LSS computation performed to date.
Strategies to advance investments in coastal resilience solutions in Boston
Coastal flooding due to a combination of sea level rise, high tides, and coastal storm events is a significant risk to Boston's population, built environment, and economy. The City of Boston is proactively planning for built district-scale resilience solutions along the shoreline to protect vulnerable neighborhoods. The upfront implementation costs are over a billion dollars, and annual maintenance adds several tens of millions more. Recent studies have reviewed the menu of funding and financing options available to pay for municipal investments in climate resilience. However, cities face barriers to implementing these new options given existing municipal processes and other near-term policy priorities. In order to advance investments in district-scale resilience solutions in Boston, this study investigates: What are the City of Boston's municipal process, the key questions that need to be answered, and the stakeholders that need to be involved in order to determine viability and to implement new mechanisms to pay for investments in coastal resilience? What are the key barriers and potential solutions for the City to pursue funding and finance for coastal resilience? This is a client-based master's thesis for the Boston Planning and Development Agency.
Illuminating the mental memoriam
Memories thread and unify our overall sense of being. With the accumulation of our knowledge about how memories are formed, consolidated, retrieved, and updated, neuroscience has reached a point where brain cells active during these discrete mnemonic processes can be identified and manipulated at rapid timescales. Here, I begin with historical studies that led to the modern memory engram theory. Then, I present our recent advances in memory research that combine transgenic and optogenetic approaches to reveal underlying neuronal substrates sufficient for activating mnemonic processes. Our studies' conclusions are threefold: (1) we provide proof-of-principle evidence demonstrating that learning-related neural changes can be isolated at the level of single cells, and that these cells can then be tagged for subsequent manipulation; (2) a defined subset of hippocampus cells is sufficient to elicit the neuronal and behavioral expression of memory recall, as well as to modify existing positive and negative memories; and (3) artificially activated memories can be leveraged to acutely and chronically suppress psychiatric disease-related states. We propose that hippocampus cells that show activity-dependent changes during learning constitute a cellular basis for contextual memory engrams and that directly activating these endogenous neuronal processes may be an effective means to correct maladaptive behaviors.
Kinematically consistent, elastic block model for the eastern Mediterranean constrained by GPS measurements
I use a Global Positioning System (GPS) velocity field to constrain block models of the eastern Mediterranean and surrounding regions that account for the angular velocities of constituent blocks and elastic strain accumulation on block-bounding faults in the interseismic period. Kinematically consistent fault slip rates and locking depths are estimated by this method. Eleven blocks are considered, including the major plates, based largely on previous geodetic, seismic, and geologic studies: Eurasia (EU), Nubia (NU), Arabia (AR), Anatolia (AN), Caucasus (CA), South Aegea (AE), Central Greece (GR), North Aegea (NE), Southeast Aegea (SE), Macedonia (MA), and Adria (AD). Two models are presented, one in which the best-fitting locking depth for the entire region (~15 km) is used on all boundaries (Model A), and one in which shallower locking depths are used on the Marmara Fault, the Hellenic and Cyprus Arcs, and in the Greater Caucasus (Model B), based on a consideration of locally best-fitting locking depths. An additional block, Black Sea (BS), is postulated in a third model. The models are in fair to good agreement with the results of previous studies of plate motion, fault slip rates, seismic moment rates, and paleomagnetic rotations. Notably, some block pairs in the Aegean region have Euler poles on, or near to, their common boundaries, in qualitative agreement with so-called pinned block models, e.g., for the transfer of slip from the right-lateral North Anatolian Fault system to a set of left-lateral and normal faults in central and northern Greece (McKenzie and Jackson, 1983; Taymaz et al., 1991a; Goldsworthy et al., 2002).
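The block-motion part of such a model reduces to rigid rotations on a sphere: a site's velocity is v = ω × r for the block's angular velocity (Euler) vector ω. A minimal sketch with a hypothetical pole and rotation rate, not the estimates of this work:

```python
import numpy as np

# Rigid-block surface velocity from an Euler (angular velocity) vector:
# v = omega x r at a site on a spherical Earth. The pole location and
# rotation rate below are hypothetical, not estimates from this work.
R = 6371e3  # Earth radius, m

def site_xyz(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return R * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def euler_vector(pole_lat, pole_lon, rate_deg_per_myr):
    rate = np.radians(rate_deg_per_myr) / (1e6 * 365.25 * 86400.0)  # rad/s
    return rate * site_xyz(pole_lat, pole_lon) / R

omega = euler_vector(30.0, 32.0, 1.2)   # hypothetical block-pair pole
r = site_xyz(39.9, 32.8)                # GPS site position (ECEF)
v = np.cross(omega, r)                  # rigid-block velocity, m/s
mm_per_yr = np.linalg.norm(v) * 1e3 * 365.25 * 86400.0
print(f"predicted site speed = {mm_per_yr:.1f} mm/yr")   # ~23 mm/yr here
```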
Efficient trusted cloud storage using parallel, pipelined hardware
Cloud storage provides a low-cost storage service with high efficiency and global accessibility via the Internet, but it also introduces security risks. One major security concern is the integrity and freshness of data stored on the cloud, that is, whether a storage provider can guarantee that the data received by its clients is always correct and up-to-date. Recent studies have focused on data integrity and freshness guarantees. However, systems that solely rely on cryptography are not able to immediately detect data freshness violations, while systems using resource-constrained trusted hardware are impractical due to long latency and low throughput. In this thesis, we describe a prototype of a trusted cloud storage system that efficiently ensures data integrity and freshness by attaching a piece of high-performance trusted hardware to an untrusted server. We propose a write access control scheme to prevent unauthorized writes and ensure all writes are fresh. We also introduce a crash-recovery mechanism to protect our prototype system from crashes and power loss events. In addition, we minimize the system overhead by (1) parallelizing and pipelining the operations that are carried out on the server and the trusted hardware and (2) judiciously partitioning the operations across the trusted and untrusted components. The throughput and latency of our prototype system are analyzed to provide customized solutions to performance-focused and budget-focused cloud storage providers. We believe this work takes a major step in making trusted cloud storage practical from an efficiency and cost standpoint.
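As a conceptual sketch of how a small amount of trusted state defeats replay (staleness) attacks, the toy below pairs a running digest with a monotonic write counter held by the trusted component. It illustrates the freshness idea only and is not the thesis's actual protocol or access-control scheme.

```python
import hashlib

# Toy freshness check: a trusted component keeps a monotonic write
# counter and a running digest; a replayed (stale but once-valid)
# response fails verification because the counter has moved on.
# Illustrative only; not the thesis's protocol.
trusted_state = {"counter": 0, "digest": b""}

def write_block(data: bytes):
    trusted_state["counter"] += 1
    h = hashlib.sha256()
    h.update(trusted_state["digest"])
    h.update(trusted_state["counter"].to_bytes(8, "big"))
    h.update(data)
    trusted_state["digest"] = h.digest()
    return trusted_state["counter"], trusted_state["digest"]

def verify(counter: int, digest: bytes) -> bool:
    return (counter == trusted_state["counter"]
            and digest == trusted_state["digest"])

c, d = write_block(b"version 1")
write_block(b"version 2")
print(verify(c, d))   # False: version 1 is intact but no longer fresh
```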
Prediction of velocity distribution from the statistics of pore structure in 3D porous media via high-fidelity pore-scale simulation
Fluid flow and particle transport through porous media are determined by the geometry of the host medium itself. Despite the fundamental importance of the velocity distribution in controlling early-time and late-time transport properties (e.g., early breakthrough and superdiffusive spreading), direct relations linking the velocity distribution with the statistics of pore structure in 3D porous media have not yet been established. High velocities are controlled by the formation of channels, while low velocities are dominated by stagnation zones. Recent studies have proposed phenomenological models for the distribution of high velocities, including stretched-exponential and power-exponential distributions, but without an underlying mechanistic or statistical physics theory. Here, we investigate the relationship between the structure of the host medium and the resulting fluid flow in random dense sphere packs. We simulate flow at low Reynolds numbers by solving the Stokes equations with the finite volume method and imposing a no-slip boundary condition at the boundary of each sphere. High-fidelity numerical simulations of Stokes flow are facilitated with the assistance of open-source computational fluid dynamics (CFD) tools such as OpenFOAM. We show that the distribution of low velocities in 3D porous media is described by a gamma distribution, which is robust to variations in the geometry of the porous media. We develop a simple model that predicts the parameters of the gamma distribution in terms of the porosity of the host medium. Despite its simplicity, the analytical predictions from the model agree well with high-resolution simulations in terms of velocity distribution.
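Given the gamma-distribution finding, the fit itself is straightforward by the method of moments, since a gamma distribution has mean kθ and variance kθ². A sketch on synthetic draws standing in for the simulated velocity magnitudes (the shape and scale parameters here are assumed):

```python
import numpy as np

# Method-of-moments gamma fit: for a gamma distribution, mean = k*theta
# and variance = k*theta**2, so k = mean**2/var and theta = var/mean.
# Synthetic draws stand in for simulated pore-scale velocity magnitudes.
rng = np.random.default_rng(0)
v = rng.gamma(shape=0.6, scale=1.0e-4, size=100_000)  # assumed parameters

mean, var = v.mean(), v.var()
k_hat = mean**2 / var       # shape estimate (should recover ~0.6)
theta_hat = var / mean      # scale estimate (should recover ~1e-4)
print(f"shape ~ {k_hat:.3f}, scale ~ {theta_hat:.3e}")
```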
Physically constrained maximum likelihood (PCML) mode filtering and its application as a pre-processing method for underwater acoustic communication
Mode filtering is most commonly implemented using the sampled mode shape or pseudoinverse algorithms. Buck et al. [1] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained maximum likelihood (PCML) algorithm [2] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be one that can be physically realized given a modal propagation model and an appropriate noise model. The first simulation presented in this thesis models the acoustic pressure field as a complex Gaussian random vector and compares the performance of the pseudoinverse, reduced-rank pseudoinverse, sampled mode shape, PCML minimum power distortionless response (MPDR), PCML-MAP, and MAP mode filters. The PCML-MAP filter performs as well as the MAP filter without the need for a priori data statistics. The PCML-MPDR filter also performs nearly as well as the MAP filter, and avoids a sawtooth pattern that occurs with the reduced-rank pseudoinverse filter. The second simulation presented models the underwater environment and broadband communication setup of the Shallow Water 2006 (SW06) experiment.
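For reference, the two classical baselines can be written in a few lines: given a mode-shape matrix E (sensors × modes) and a pressure snapshot p, the sampled-mode-shape filter projects p onto each normalized mode, while the pseudoinverse filter solves the least-squares problem. The mode shapes below are random stand-ins, not a modal propagation model.

```python
import numpy as np

# Sketch of the two classical mode filters: sampled mode shape projects
# the pressure vector onto each (normalized) mode shape; pseudoinverse
# solves the least-squares problem. E is a random stand-in, not a modal
# propagation model.
rng = np.random.default_rng(1)
n_sensors, n_modes = 32, 5
E = rng.standard_normal((n_sensors, n_modes))     # mode shapes (assumed)
a_true = rng.standard_normal(n_modes)             # true modal amplitudes
p = E @ a_true + 0.05 * rng.standard_normal(n_sensors)

a_sms = (E.conj().T @ p) / np.sum(np.abs(E)**2, axis=0)  # sampled mode shape
a_pinv = np.linalg.pinv(E) @ p                           # pseudoinverse
print("true:              ", np.round(a_true, 2))
print("sampled mode shape:", np.round(a_sms, 2))
print("pseudoinverse:     ", np.round(a_pinv, 2))
```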
Body to body, body to city, body to self
Our modern spaces are a result of a history of architects losing agency to technology. In the era of climate control spaces and the digital interfaces of social media, a sense of place and association with others is lost to enclosed spaces of satellite conversations detailed with attention to standardization rather than customization. These desires for comfort and control manifest in the lack of friction in our built realm. Spaces mirror the scaleless quality of the digital, impose no physical friction of environment and allow for isolation between bodies in the same room. Boarded in these spaces with the disappearing digital threshold, our friends fall in the same political silos as ourselves, empathy for others falters, context is arbitrary and we never have to be 'alone' when we have our phones. The tech industry tries to offer solutions to alleviate these problems with apps and devices. However, without a violent change in environment - engaging the physicality of the body, its senses and its association to others and site, the problems will persist. 'Bodyscapes' is a series of provocations at varying scales that subvert the language of corporate standardization to allow new opportunities for human interface where the public and private realm meet.
Role of RA in germ cell development in embryonic mouse gonads
Germ cells are the only cell type to undergo meiosis, a specialized cell division process necessary for the formation of haploid gametes. Timing of this process is sex-specific. Ovarian germ cells initiate meiosis during embryonic development, while testicular germ cells initiate meiosis after birth. In a series of gonad explant culture experiments, I show that retinoic acid (RA) is required for meiotic initiation in embryonic ovaries, because it is necessary for Stra8 (Stimulated by retinoic acid gene 8) expression. Stra8 is required for pre-meiotic DNA replication in embryonic ovaries, and it is only expressed in testes after birth. I also show that a cytochrome P450 enzyme, CYP26B1, specifically expressed in embryonic testes but not ovaries, prevents Stra8 expression in testes during embryonic development. To confirm our results in vivo, and to examine whether RA is sufficient to induce meiosis in embryonic testes, I generated Cyp26b1-/- and Cyp26b1-/- Stra8-/- mice. I show that germ cells in Cyp26b1-/- embryonic testes initiate a meiotic program but fail to complete meiotic prophase. Instead, germ cells proliferate until birth. RA also causes somatic cell defects: it inhibits Leydig cell differentiation and disturbs testis cord maintenance. Thus, RA has distinct effects in embryonic ovaries and embryonic testes. In ovaries, it is required for meiotic entry, with no known effects on somatic cell development. In embryonic testes, RA is not sufficient for functional meiotic prophase, and it induces proliferation in germ cells. RA also disrupts embryonic testicular somatic cell development.
We are Massachusetts Institute of Technology
MIT students go through the same daily routines, barely branching out to explore the broader MIT or to learn from and collaborate with students from other fields. There are, however, proven benefits to participating in diverse social groups, as shown by sociologist Ronald Burt and others. We can help MIT students diversify their social groups by creating opportunities for lightweight yet meaningful interactions across MIT subgroups. I designed and built a video-based system, "We are MIT," to facilitate more cross-group interactions. The system consists of a physical video booth and a website. I outline the design assumptions and design principles, and describe how I built the video booth and the website. Next, I discuss lessons from three core iterations of the system, from a most basic prototype to an online video contest. A total of 17 MIT community members recorded or submitted videos for "We are MIT."
Synchronization on multicore architectures
The rise of multicore computing has made synchronization necessary to overcome the challenge of sharing data between multiple threads. Locks are critical synchronization primitives for maintaining data integrity and preventing race conditions in multithreaded applications. This thesis explores the lock design space. We propose a hardware lock implementation, called the lock arbiter, which reduces lock latency while minimizing hardware overheads and maintaining high levels of fairness between threads. We evaluate our mechanism against state-of-the-art software lock algorithms and find that our mechanism has comparable performance and fairness.
Investigation of stall inception in centrifugal compressors using isolated diffuser simulations
In compression systems the range of stable operation is limited by rotating stall and/or surge. Two distinct types of stall precursors can be observed prior to these phenomena: the development of long-wavelength modal waves, or a short-wavelength, three-dimensional flow breakdown typically known as a "spike". The cause of the latter is not well understood; in axial machines it has been suggested that over-tip spillage flow plays a significant role, but spikes can also occur in shrouded vaned diffusers of centrifugal compressors, where these flows are not present, suggesting an alternative mechanism may be at play. Unsteady Reynolds-averaged Navier-Stokes simulations are performed for an isolated vaned radial diffuser from a highly loaded centrifugal compressor. Key to their success is the definition of pitchwise "mixed-out" averaged inlet conditions derived from the impeller exit flow field of separate single-passage stage calculations. This guarantees that the relevant flow features are carried into the diffuser model, particularly the spanwise profiles of flow angle and total pressure. It is shown that the isolated diffuser model compares well with experimental data and with time-averaged, unsteady, full-wheel simulations. The stability of the flow is tested via numerical forced-response experiments whereby the inlet conditions are perturbed by a total pressure forcing small in temporal and spatial extent. An unstable rotating stall precursor is observed at low diffuser inlet corrected flow, due to the spanwise non-uniform flow angle and total pressure at the diffuser inlet. The radial pressure gradient imposed by the highly swirling bulk flow leads to flow angles in excess of 90° in the shroud endwall flow. This results in shroud-side separation at the diffuser vane leading edge and a region of recirculating flow in the vaneless space. Vorticity shed from the diffuser vane leading edge convects back to the vaneless space and joins the vortical structures within the recirculating flow. This is suggested to lead to spike stall inception.