Electrochemical study of corrosion phenomena in zirconium alloys
Shadow corrosion of zirconium alloy fuel cladding in BWR environments, the phenomenon in which accelerated corrosion occurs when the cladding surface is in close proximity to other metals, has become a potentially life-limiting issue for BWR fuel. Recent results from experimentation at MIT, Halden, and Studsvik suggest that the phenomenon is driven by galvanic coupling between the cladding and the adjacent material. However, the actual processes involved are not understood. One key measurement that would help in understanding the phenomenon is the actual corrosion current between fuel cladding and adjacent materials in the in-reactor environment. The limitations placed on the burn-up of uranium oxide fuel correlate to the amount of corrosion seen through a directly measurable oxide thickness on the waterside of the zirconium alloy cladding. This oxide corrosion product correlates directly with distance from structural components, leading to the effect commonly referred to as shadow corrosion. In recent experiments, Studsvik determined that there are large ECP differences between Inconel and zirconium alloys that correlate with increased galvanic current density when the materials are coupled.
Computational approach to construct interlocking wooden frames
This thesis explores the computational process of generating and constructing interlocking frames. Its outcome is a sophisticated software tool that creates a three-dimensional interlocking pattern, analyzes the intersecting conditions between members, and immediately provides instructions for its assembly sequence as an animated visualization. An interlocking frame is a system of short members spanning a large surface, in which members lock each other at their mid-spans by simple notches. Such a system must be designed with its assembly sequence in mind, as a static interlocking form may be describable yet impossible to assemble in any sequence. Given a three-dimensional digital model of an interlocking frame, the feasibility of a disassembly sequence can be assessed by analyzing the geometric contact constraints between the members. The assembly sequence can then be obtained by reversing the disassembly sequence, and helps a designer evaluate different options in the early stages of design. The proposed tool uses a genetic algorithm and graph-search algorithms to find optimized notching configurations that guarantee an assembly sequence. It can analyze various types of assemblies defined by planar surface contact constraints, and has the potential for further development into a versatile, automated 4D simulation tool.
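To make the assembly-sequencing idea concrete, the following is a minimal sketch (not the thesis's actual tool) of how a feasible disassembly order can be recovered from blocking relations and then reversed into an assembly order; the member names and the precomputed blocked_by map are hypothetical stand-ins for the planar contact-constraint analysis.

```python
# blocked_by[m] lists members that must still be in place for m to be stuck;
# in the real tool these relations would come from geometric contact analysis.
def disassembly_sequence(members, blocked_by):
    remaining = set(members)
    order = []
    while remaining:
        # a member is removable once none of its blockers remain in the assembly
        removable = [m for m in sorted(remaining)
                     if not (blocked_by.get(m, set()) & remaining)]
        if not removable:
            return None  # interlocked: describable, but not assemblable in any order
        order.append(removable[0])
        remaining.remove(removable[0])
    return order

blocked_by = {"A": {"B"}, "B": {"C"}, "C": set()}
dis = disassembly_sequence(["A", "B", "C"], blocked_by)
assembly = list(reversed(dis)) if dis else None
print(dis, assembly)  # ['C', 'B', 'A'] ['A', 'B', 'C']
```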
Radiation therapy of pediatric brain tumors : comparison of long-term health effects and costs between proton therapy and IMRT
Radiation therapy is an important component of pediatric brain tumor treatment. However, radiation-induced damage can lead to adverse long-term health effects. Proton therapy can reduce the dose delivered to healthy tissue compared with photon radiation therapy, but this dose benefit comes at a significantly higher initial cost, as proton therapy is 2 to 3 times more expensive to deliver than photon therapy. This thesis provides a framework for evaluating the health and cost effectiveness of proton therapy compared to Intensity Modulated Radiation Therapy (IMRT). Proton therapy and IMRT treatment plans of patients treated for low-grade gliomas (LGGs) were analyzed to provide risk estimates of long-term health effects based on the dose distributions. A Markov simulation model was developed to estimate the health effects and costs of proton therapy and IMRT. The model tracked a pediatric cohort treated for LGGs at age 5. In the model, the patients were at risk of acquiring IQ loss, growth hormone deficiency (GHD), hypothyroidism, hearing loss, and secondary cancer. Patients faced risks of death due to tumor recurrence and secondary cancer, as well as background mortality. In addition, a review of the literature was performed to estimate the costs and additional health risks not determined from the patient treatment plans. The simulation results show that proton therapy can be cost-effective in the treatment of LGGs based on the health risks estimated from the patients' treatment plans. The costs associated with IQ loss and GHD were the main contributors to the total costs from long-term health effects. Proton therapy also results in a lower level of IQ loss and a lower risk of acquiring other long-term health effects. However, the relative difference in IQ point loss between the treatment modalities is small in the limited number of patients studied. There is a need to further investigate the advantages of proton therapy in reducing the dose delivered to the relevant parts of the brain to lower the risks of adverse health effects, especially IQ loss.
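As an illustration of the Markov-style cohort simulation described above, the following is a hedged sketch; the annual risks, costs, and horizon are placeholders rather than the thesis's calibrated values, and IQ loss and secondary cancer are omitted for brevity.

```python
import random

# Illustrative placeholders only -- not the thesis's calibrated parameters.
P_DEATH = 0.003                      # annual risk of death (recurrence + background)
ANNUAL_RISK = {"GHD": 0.010, "hypothyroidism": 0.005, "hearing loss": 0.004}
ANNUAL_COST = {"GHD": 20000.0, "hypothyroidism": 1000.0, "hearing loss": 3000.0}

def simulate_patient(years=70, rng=random):
    conditions, cost = set(), 0.0
    for _ in range(years):
        if rng.random() < P_DEATH:
            break
        for cond, p in ANNUAL_RISK.items():
            if cond not in conditions and rng.random() < p:
                conditions.add(cond)      # late effect acquired, persists for life
        cost += sum(ANNUAL_COST[c] for c in conditions)
    return cost

random.seed(0)
costs = [simulate_patient() for _ in range(10000)]
print(f"mean modeled cost of late effects per patient: ${sum(costs) / len(costs):,.0f}")
```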
Seasonal and interannual variability in the hydrology and geochemistry of an outlet glacier of the Greenland Ice Sheet
In the spring and summer within the ablation zone of the Greenland Ice Sheet (GrIS), meltwater drains to the ice sheet bed through an evolving network of efficient channelized and inefficient distributed drainage systems. Distributed system drainage is a key component in stabilizing GrIS velocity on interannual time scales and controlling geochemical fluxes. During the spring and summer of 2011 and 2012, I conducted fieldwork at a large outlet glacier in southwest Greenland underlain by metamorphic silicate rocks. Data collected from a continuous 222Rn monitor in the proglacial river were used as a component of a mass balance model. I demonstrated that Jdis, the 222Rn fraction derived from the distributed system, was >90% of the 222Rn flux on average, and therefore 222Rn can be used as a passive flow tracer of distributed system drainage. Supraglacial meltwater runoff estimated using two independent models was compared with ice velocity measurements across the glacier's catchment. Major spikes of Jdis occurred after rapid supraglacial meltwater runoff inputs and during the expansion of the subglacial channelized system. While increases in meltwater runoff induced ice acceleration, they also resulted in the formation of efficient subglacial channels and increased drainage from the distributed system, mechanisms known to cause slower late summer to winter velocities. Sr, U, and Ra isotopes and major and trace element chemistry were used to investigate the impact of glacial hydrology on subglacial weathering. Analysis of partial and total digestions of the riverine suspended load (SSL) found that trace carbonates within the silicate watershed largely controlled the 87Sr/86Sr ratio in the dissolved load. Experiments and sampling transects downstream from the GrIS demonstrated that [delta]234U in the dissolved phase decreased with increasing interaction with the SSL. The (228Ra/226Ra) value of the dissolved load was significantly higher than that of the SSL and therefore was not the result of the source rock material but of extensive mineral surface weathering and the faster ingrowth rate of 228Ra (t1/2 = 5.75 y) relative to 226Ra (t1/2 = 1600 y). In summary, extensive, repeated cycles of rapid supraglacial meltwater runoff to subglacial drainage networks lead to increased distributed system drainage and mineral weathering.
Seven eighty seven mid-body job precedence networks for improving production rate
In a complex manufacturing environment, generating schedules, identifying deviations, and recovering from delays have a significant impact on total operational performance. At Boeing's 787 plant, a precedence network was generated which defines the entire build sequence for a mid-body fuselage. This job-level build sequence enables planners to generate optimized and feasible resource-constrained schedules. The network also forms the foundation for a visual control system on the factory floor. This web-based tool is designed to improve routine production control decisions at all levels by presenting build status in a cohesive and concise format. Using this tool, the plant's stakeholders can effectively identify and prioritize schedule deviations before they cascade into major delays, resulting in an overall improvement in resource efficiency and production rate.
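The sketch below illustrates, under simplified assumptions, how a job-level precedence network supports schedule generation: a topological pass propagates earliest start times from predecessors. The jobs, durations, and precedence links are hypothetical, and resource constraints are ignored.

```python
from collections import defaultdict, deque

# Hypothetical mid-body jobs: duration in shifts, and which jobs must finish first.
durations = {"frame": 4, "wire": 2, "hydraulics": 3, "insulate": 2, "inspect": 1}
predecessors = {"wire": ["frame"], "hydraulics": ["frame"],
                "insulate": ["wire", "hydraulics"], "inspect": ["insulate"]}

def earliest_starts(durations, predecessors):
    succ, indeg = defaultdict(list), {j: 0 for j in durations}
    for job, preds in predecessors.items():
        for p in preds:
            succ[p].append(job)
            indeg[job] += 1
    start = {j: 0 for j in durations}
    queue = deque(j for j, d in indeg.items() if d == 0)
    while queue:                              # topological pass over the network
        j = queue.popleft()
        for s in succ[j]:
            start[s] = max(start[s], start[j] + durations[j])
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return start

print(earliest_starts(durations, predecessors))
```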
Driving toward monopoly : regulating autonomous mobility platforms as public utilities
Autonomous vehicles (AV) have captured the collective imagination of everyone from traditional auto manufacturers to computer software startups, from government administrators to urban planners. This thesis articulates a likely future for the deployment of AVs. Through stakeholder interviews and industry case studies, I show that there is general optimism about the progress of AV technology and its power to positively impact society. Stakeholders across sectors are expecting a future of autonomous electric fleets, but have divergent attitudes toward the regulation needed to facilitate its implementation. I demonstrate that, given the immense upfront capital investments and the nature of network effects intrinsic to data-intensive platforms, the autonomous mobility-as-a-service system is likely to tend toward a natural monopoly. This view is corroborated by key informants as well as recent industry trends. In order to better anticipate the characteristics of this emerging platform, I look back at the developmental trajectories of two classic public utilities - telecommunications and the electricity industry. I argue that the aspiring monopolists in autonomous mobility, like icons in these traditional industries, will succeed in supplanting a legacy technology with a new, transformative one, and use pricing and market consolidation tactics to gain regional dominance. The discussion on monopoly power is then adapted to the new business models of internet-enabled technology giants, and I examine two additional industry case studies in Google and Amazon. I argue that the autonomous mobility platform will first be designed to prioritize scale over everything else, including profits, and that firms are likely to pursue both horizontal and vertical integration strategies to achieve sustained market leadership. I conclude by recommending next steps for reining in platforms that may harm the public interest, and encourage planners to traverse disciplinary boundaries to better facilitate discussions between innovators and regulators.
Single-cell response to perturbations across biological scales : single organ, organ system and phenotypic individuals
The biological processes that sustain a complex organism require the orchestrated dynamics of complex cellular ensembles. Several vital systems - such as the immune system, the digestive system and more - must process internal and external signals to maintain functional homeostasis in response to perturbations at the systems-level. To further understand how groups of cells collectively respond to perturbations, we have applied single-cell RNA-sequencing and complementary techniques to explore cellular behaviors within complex systems at multiple relevant biological scales: from within a single organ, to an organ system, to across several human individuals with differing genetic backgrounds linked by a shared phenotype. More specifically, at the level of the organ, we have explored acute injury responses in the liver. We have identified and described a new compensatory phase of the liver response to injury, in which surviving hepatocytes upregulate their expression of critical liver function genes to maintain overall organ function. Next, we extended our approach from a focus on an acute injury targeting a single organ to exploring chronic damage resulting from a long-term high fat diet across multiple gastrointestinal and immune compartments. Our analysis revealed molecular pathways and changes in stem gene expression which may contribute to obesity-related disease. Finally, we characterized shared features across multiple unique human donors with a common phenotype, elite control of HIV-1. We identified and validated a subset of highly functional dendritic cells, and developed broadly applicable computational approaches to identify reproducible responses across donors and to nominate candidate targets for rationally modulating the system. Overall, our work demonstrates the utility of single-cell RNA-sequencing for uncovering important cellular phenotypes that inform systems-level responses at any biological scale.
Study of the Manhattan Office Market
Corporate real estate is increasingly seen as a strategic resource contributing to organizational performance rather than a mere operational asset focused on overall business cost efficiency. There is considerable upside to be realized in making workplaces more efficient, more productive, and more conducive to work performance. Yet the question of whether good design correlates with improved financial outcomes has received little attention. This thesis studies the economic impact of workplace performance by linking post-occupancy analysis to financial outcomes. The paper uses two data sets to explore whether a correlation exists between good design and financial value by linking workplace performance and effective rents: Gensler's post-occupancy Workplace Performance Index (WPI SM) data, and CompStak's Manhattan rental database. The premium effect of WPI-scored leases is best observed when analyzed with respect to location characteristics (neighborhoods) and a time fixed effect (lease commencement date), reflecting a premium over non-scored leases. At the same time, there is a statistically significant indication that workplaces with below-average performance, as reflected by their lower WPI scores, have lower effective rents compared to non-WPI-scored leases. Workplaces with high WPI (SM) scores signify higher economic productivity compared to their lower-scoring counterparts. The study is a first step towards linking workplace performance to effective rents to highlight the financial implications of developing high-performing workplaces. The conclusions from the study are of value to the stakeholders involved: real estate developers, landlords, tenants, architects, interior designers, and institutional investors.
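A hedged sketch of the kind of regression implied above: effective rent regressed on a WPI-scored indicator plus neighborhood and lease-commencement fixed effects. The DataFrame columns and values are hypothetical, not the Gensler or CompStak schema.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical lease records; column names are illustrative stand-ins.
leases = pd.DataFrame({
    "effective_rent": [62.0, 58.5, 71.0, 55.0, 66.5, 60.0],
    "wpi_scored":     [1, 0, 1, 0, 1, 0],
    "neighborhood":   ["Midtown", "Midtown", "Chelsea", "Chelsea", "FiDi", "FiDi"],
    "start_year":     [2015, 2015, 2016, 2016, 2016, 2015],
})

# OLS with neighborhood and commencement-year fixed effects.
model = smf.ols(
    "effective_rent ~ wpi_scored + C(neighborhood) + C(start_year)",
    data=leases,
).fit()
print(model.params["wpi_scored"])  # estimated rent premium for WPI-scored leases
```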
Resilient decarbonization for the United States : lessons for electric systems from a decade of extreme weather
The past decade has seen an unprecedented surge of climate change-driven extreme weather events that have wrought over $800 billion in damage and taken more than 5,200 lives across the United States -- a trend that appears poised to intensify. At the same time, the need for a large-scale effort to decarbonize the U.S. electric power system has become clear, along with the growing climate risks and impacts that any such effort will face. This thesis argues that the principles of resilience can play a valuable role by enabling the decarbonization of the U.S. electric system, in the face of the escalating risks and impacts of climate-driven extreme weather. By emphasizing targeted hardening, proactive planning, graceful failure, and effective recoveries in the design, operation, and oversight of electric systems in the United States, we can both protect against growing climate risks and catalyze decarbonization efforts --
Influence of contact conditions on thermal responses of the hand
The objective of the research conducted for this thesis was to evaluate the influence of contact conditions on the thermal responses of the finger pad and their perceptual effects. A series of experiments investigated the thermal and perceptual effects of different contact conditions including contact force, contact duration, the object's surface temperature, and its surface roughness. The thermal response of the finger pad was measured using an infrared camera as the contact force varied from 0.1 to 6 N. It was determined that the decrease in skin temperature was highly dependent on the magnitude of contact force as well as contact duration. A second set of experiments investigated the effect of surface texture on the thermal response of the finger pad, and demonstrated, contrary to predictions, that a greater change in skin temperature occurs when the finger is in contact with rougher surfaces. The effect of varying surface texture on the perception of temperature was also investigated. The changes in temperature due to varying surface texture are perceptible, and demonstrate that the perception of surface roughness is not only influenced by changes in temperature, but in turn affects the perception of temperature. The final set of experiments examined the effect of varying the surface temperature of the thermal display on the perceived magnitude of finger force. Over the range of 20 to 38 °C, the surface temperature of the display did not have a significant effect on the perceived magnitude of force. The results of these experiments can be incorporated into thermal models that are used to create more realistic displays for virtual environments and teleoperated systems.
Equilibrium analysis of masonry domes
This thesis developed a new method to analyze the structural behavior of masonry domes: the modified thrust line analysis. This graphically based method offers several advantages over existing methods. It is the first to account for the ability of domes to achieve a range of internal forces, yielding a potentially infinite number of equilibrium solutions that could not be derived otherwise. The method can also analyze non-conventional axisymmetrical dome geometries that are difficult or impossible to analyze with existing methods. Abiding by limit state conditions and the principles of the lower bound theorem, the modified thrust line method was used to ascertain the theoretical minimum thrust of spherical and pointed domes, a parameter that was previously unsolved. Several methods to estimate the minimum thrust-to-weight ratio were provided. For spherical domes, this ratio may be estimated as -0.583[alpha] + 1.123; for pointed domes, the estimated ratio is 0.551[delta] - 1.061[delta]/[alpha] - 0.615[alpha] + 1.164, where [alpha] and [delta] are the embrace and truncating angles, respectively.
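The fitted expressions quoted above can be wrapped in a small helper for quick estimates; treating the embrace angle [alpha] and truncating angle [delta] as radians is an assumption made for this illustration, and the thesis defines the exact conventions behind the fits.

```python
import math

# The fitted estimates quoted above, as stated; angle units assumed to be radians here.
def min_thrust_to_weight_spherical(alpha):
    return -0.583 * alpha + 1.123

def min_thrust_to_weight_pointed(alpha, delta):
    return 0.551 * delta - 1.061 * delta / alpha - 0.615 * alpha + 1.164

# e.g. a hemispherical dome, taking alpha = pi/2 under the assumption above
print(round(min_thrust_to_weight_spherical(math.pi / 2), 3))
```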
Investigating Army systems and SoS for value robustness
This thesis proposes a value robustness approach to architecting defense systems and Systems of Systems (SoS). A value robust system or SoS provides continued value to stakeholders by performing well against the mission intent under a variety of future contexts. The proposed approach encompasses three methods: the "Needs to Architecture" framework, Multi-Attribute Tradespace Exploration (MATE), and Epoch-Era Analysis. The architecting approach commences with the "Needs to Architecture" framework, in which stakeholders' needs are elicited and design concepts are formulated. MATE is then used to screen, evaluate, and select suitable design concepts. Subsequently, Epoch-Era Analysis guides system architects in anticipating changes across foreseeable epochs, which are time periods of fixed needs and context. The tradespace analysis is repeated across all these epochs. Pareto Trace and Filtered Outdegree metrics are used to identify passively and actively value robust designs. The proposed value robustness approach is demonstrated conceptually using an Intelligence, Surveillance and Reconnaissance (ISR) system and an Army SoS case study. The approach offers a potential methodology for designing and evaluating complex defense systems such that they continue to be valuable to stakeholders over time, and it is complementary to existing architecting methods such as modeling and simulation. The end product of applying this approach is a cost-efficient defense system, which might be passively or actively value robust. High switching and modification costs might be avoided even if changes to the actively value robust defense system are required. Through the Army SoS case study discussion, the author suggests that a value robust defense SoS architecture is one that encompasses the desired ilities of changeability and interoperability.
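The following is a minimal sketch of the Pareto Trace idea invoked above: for each design, count the epochs in which it lies on the Pareto front of a (cost, utility) tradespace. The designs and per-epoch evaluations are hypothetical.

```python
# points: {design: (cost, utility)}; lower cost and higher utility dominate.
def pareto_front(points):
    front = set()
    for d, (c, u) in points.items():
        dominated = any(c2 <= c and u2 >= u and (c2, u2) != (c, u)
                        for d2, (c2, u2) in points.items() if d2 != d)
        if not dominated:
            front.add(d)
    return front

epochs = [
    {"A": (10, 0.8), "B": (8, 0.6), "C": (12, 0.7)},   # epoch 1 evaluations
    {"A": (10, 0.5), "B": (8, 0.6), "C": (12, 0.9)},   # epoch 2 evaluations
]
# Pareto Trace: number of epochs in which each design is Pareto efficient.
pareto_trace = {d: sum(d in pareto_front(e) for e in epochs) for d in epochs[0]}
print(pareto_trace)  # a higher trace suggests a more passively value-robust design
```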
The intersection of environmental planning and social justice : Denver's Platte River Greenway
Environmental justice activists and researchers in the last several decades have drawn public attention to the disproportionate exposure to environmental risk (primarily toxicity) that low-income communities and communities of color experience. The environmental justice movement has devoted much less attention to the broader array of environmental issues that affect the welfare of low-income and minority communities. These include risk from natural hazards (like flooding), access to open space, recreational opportunities, and livability. Environmental planning affects and can enhance justice by reducing risks and providing benefits (including benefits not traditionally associated with the environment, such as employment opportunities). I consider planning process issues, community building, use of space, economic issues, safety, livability, and cultural issues to understand the full range of justice implications of environmental planning. This thesis examines the planning and development of the Platte River Greenway in Denver to understand how environmental planning practice relates to justice. Initially planned and developed in the mid- to late-1970s, the Platte River Greenway is a 10.5-mile stretch of trails and pocket parks along an urban river that runs near many low-income and minority communities. The Platte River Greenway contributed to social justice in a number of ways. The planning process, however, did not explicitly engage justice as a goal. The one point early in the process when justice received explicit attention illustrates how such consideration can lead to greater parity in environmental benefits for disadvantaged communities. Based on this case, the thesis argues that justice should be a more explicit goal in environmental planning practice. The thesis offers recommendations for how environmental planners can actively frame and manage environmental planning processes to advance social justice.
The economic and ethical considerations and implications of the stratification of future oncology therapeutics
This thesis investigates the economic impact of stratified medicine on industry and the subsequent ethical implications for patients. Stratified medicine involves the use of clinical biomarkers to indicate differential response among patients in the efficacy or potential side effects of therapeutic agents. The advent of stratified medicine should, in theory, result in the safer, more effective use of therapeutic agents to treat cancer. However, reluctance remains within the broader life sciences community, in particular within the pharmaceutical industry, to embrace stratified medicine. I hypothesize that this is due to economic concerns. First, a historical analysis of the rate of market adoption of stratified therapeutics is conducted by comparing the adoption velocity and time to peak sales of stratified therapeutics relative to traditional chemotherapeutics. The aim is to analyze whether, historically, stratified medicines have been more or less successful in terms of speed of market adoption. To supplement this analysis, interviews are conducted with investment analysts who cover pharmaceutical and diagnostics companies to gauge their views on stratified medicine. This is important because publicly traded companies have an obligation to their shareholders, and shareholder views are shaped by the analyses of these individuals. In order to assess the future economic impact of stratified medicine on industry, particularly given that clinical biomarkers are now being developed much earlier in the R&D timeline, a model was constructed to predict economic outcomes based on various parameters associated with biomarker development.
A design methodology for hysteretic dampers in buildings under extreme earthquakes
This research proposes a design methodology for hysteretic dampers in buildings under high levels of seismic hazard. Developments in structural materials have led to designs that satisfy strength requirements but are often very flexible. This trend, along with increasingly stringent building performance criteria, suggests a philosophy of controlling structural motion as opposed to merely designing for strength, particularly when related to earthquake design. Included in this thesis is a design algorithm that calibrates stiffness and yield force level, two controlling parameters in the implementation of hysteretic dampers, in order to obtain optimal structural response under two levels of earthquake severity. In addition, a parametric study illustrates the merits and drawbacks of various stiffness and yield force allocations.
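As a point of reference for the two controlling parameters named above, the sketch below shows how device stiffness k and yield force Fy enter an elastic-perfectly-plastic hysteresis model; it illustrates damper behavior only and is not the thesis's calibration algorithm.

```python
# Elastic-perfectly-plastic hysteresis: force follows k*(x - plastic offset),
# capped at +/- Fy, with the plastic offset updated whenever the device yields.
def hysteretic_force_history(displacements, k, Fy):
    forces, plastic = [], 0.0
    for x in displacements:
        f = k * (x - plastic)
        if f > Fy:            # yielded in tension
            plastic = x - Fy / k
            f = Fy
        elif f < -Fy:         # yielded in compression
            plastic = x + Fy / k
            f = -Fy
        forces.append(f)
    return forces

# One displacement cycle; the dissipated energy is the area of the hysteresis loop.
cycle = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
print(hysteretic_force_history(cycle, k=200.0, Fy=100.0))
```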
Globalization of biopharmaceutical manufacturing
The biomanufacturing industry is changing due to increasing globalization. However, it is changing differently from other high-tech industries such as software, semiconductors, and automobiles. In this study, we use global biomanufacturing investment data, industry survey data, and interviews with members of industry and academia to understand the extent of microbial biomanufacturing activity (total volume, number of facilities, type of facilities) and the nature of biomanufacturing activity (complexity of products and processes across both mammalian and microbial production) in different regions of the world today. The study shows that the traditional centers of expertise in the US and EU still house most of the world's biomanufacturing capacity. Facilities in the US and EU perform a larger number of operations within their facilities, and also more technically complex operations, than facilities in Asia. US facilities support the most complex products (median unit operations = 13) and processes (cell culture, purification) and the highest average number of products per facility (12.2). Asian facilities support simpler products (median unit operations = 7), simpler processes (fermentation, fill/finish), and fewer products per facility on average (3.25). These results support the idea that managing technical complexity is one of the biggest challenges in biomanufacturing today and can determine where a biologic can be manufactured. While economic forces push the manufacturing of biologics to low-cost locations, the need to develop expertise may prevent manufacturing from scattering across the world. Instead, there may be a more guided flow to locations with expertise in certain types of products and processes.
Capacity challenges on the California high-speed rail shared corridors : how local decisions gave statewide impacts
In 2012, as a cost-control measure and in response to local opposition in the San Francisco Bay Area, the California High-Speed Rail Authority (CHSRA) adopted a "blended system" at the north and south bookends of the planned first phase of its high-speed rail line. In this blended operation, the high-speed rail line will share track and other infrastructure with commuter rail, intercity rail, and freight on the 50-mile Peninsula Corridor in Northern California and on 50 miles of right-of-way between Burbank, Los Angeles, and Anaheim in Southern California. This thesis provides a critical review of the blended system and discusses the level of cooperation and coordination necessary between host railroads and the high-speed rail tenant operator. In Northern California, the Peninsula Corridor Joint Powers Board's Caltrain commuter rail service between San Francisco and San Jose is experiencing record levels of ridership. This thesis explores the impact of both the electrification of the line and its extension into San Francisco's central business district on future ridership demand. With the California High-Speed Rail Authority competing spatially and temporally with Caltrain for access to high-revenue and high-cost infrastructure, we review different strategies for coordination and integration between the two agencies. In Southern California, the final form of the blended system is more nebulous than its northern counterpart. For the first few years of high-speed rail service, the Metrolink service operated by the Southern California Regional Rail Authority is expected to complement the high-speed rail system. However, since Metrolink operates on congested rail infrastructure, some of it owned by capacity-conscious freight railroads, there will exist the challenge of providing quality service and transfer opportunities for time-sensitive high-speed rail customers. The change to a blended system was a dramatic change of direction for the CHSRA; as a result, a new paradigm is needed for implementation of the system over the next 15 years. This thesis reviews the upcoming local design choices to be made on the local rail corridors and evaluates them from the perspective of the future statewide rail network. We find that the decisions made on the local blended corridor level will affect both the financial viability of the overall project and the quality of service experienced by customers across the entire California rail system.
Identity construction environments : the design of computational tools for exploring a sense of self and moral values
We live in a society where concepts of self, community and what is right and wrong are constantly changing. This makes it particularly challenging for young people to construct a sense of self and to identify and develop their most cherished personal and moral values. It also puts pressure on schools and society to help them do so. This thesis explores how new technologies can be used to create environments explicitly designed to help young people explore their inner worlds. I coined the term identity construction environments (ICE) to refer to computational tools purposefully designed with the goal of helping young people explore different aspects of the self, in particular personal and moral values. My contribution in this thesis involves three dimensions: theory, design and empirical research. At the theoretical level, I propose a framework through which people can think and learn about identity as a complex entity embracing multiple and contradictory values. At the design level, I describe an evolutionary process of building and investigating the use of three identity construction environments which are precursors to the one that is at the center of the empirical investigation described in this thesis.
Probabilistic quasi-optimization of building life cycle impacts and costs
In order to design buildings with reduced environmental impacts, it is important to analyze and compare a variety of design alternatives starting at early stages of the design process. This dissertation discusses the development of a probabilistic life cycle assessment (LCA) methodology for single-family residential buildings called the Building Attribute to Impact Algorithm (BAIA), which was created to reduce the amount of time and detail required to conduct LCAs, thus facilitating their use for early design exploration. Within BAIA, the building geometry, systems, occupant behavior, and materials are defined by flexible attributes, with options organized into hierarchies representing different levels of precision or under-specification. Parametric models based on these attributes provide estimates of the material quantities and use-phase energy consumption of the building, and Monte Carlo simulation is used to calculate the variability in predicted impacts and costs resulting from under-specified attributes. Two design guidance methods are explored: sequential specification - in which influential attributes are iteratively identified and specified - and genetic optimization. The latter is found to be more efficient because it identifies solutions with lower impacts and costs while maintaining a higher degree of flexibility in the probabilistic design, as measured by information entropy. In a genetically optimized design, quasi-optimum design solutions with 75% of the optimal reduction of costs and impacts are shown to provide a 40% increase in flexibility over the optimized design. These quasi-optimum solutions are analyzed to identify which attributes are flexible vs. critical (having quasi-optimum ranges that are greater than or less than half of their initial under-specified ranges, respectively). Twelve cases are studied representing different locations, analysis periods, uncertainty in energy-related impacts, and weightings of costs vs. impacts in the optimization objective. Of the geometrical attributes, the building aspect ratio and window-to-wall ratios are critical, while seven others (including orientation, number of stories, and window overhangs) are flexible in all cases. Most occupant-related attributes (including window shading and natural ventilation) are also flexible in all cases. Among the systems-related attributes, the mini-split heat pump efficiency, air leakage, and ratio of LED lighting fixtures are critical in most or all cases.
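A hedged sketch of the quasi-optimization idea above: retain candidate designs achieving at least 75% of the optimal reduction in a combined cost-plus-impact objective, then measure the remaining flexibility as Shannon entropy over the surviving options. All values are hypothetical.

```python
import math

# Keep designs whose combined objective reaches at least `fraction` of the
# best achievable improvement relative to a baseline design.
def quasi_optimal(designs, baseline, fraction=0.75):
    best = min(designs.values())
    threshold = baseline - fraction * (baseline - best)
    return {d: v for d, v in designs.items() if v <= threshold}

# Flexibility proxy: Shannon entropy over the equally weighted surviving options.
def entropy(options):
    p = 1.0 / len(options)
    return -sum(p * math.log2(p) for _ in options)

designs = {"d1": 100.0, "d2": 82.0, "d3": 85.0, "d4": 95.0, "d5": 80.0}
survivors = quasi_optimal(designs, baseline=100.0)
print(sorted(survivors), f"flexibility = {entropy(survivors):.2f} bits")
```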
Fabrication of carbon nanoscrolls from patterned CVD-grown graphene
A planar process to roll lithographically defined sheets of chemical-vapor-deposition-grown graphene into carbon nanoscrolls by solvent evaporation is attempted. Graphene is observed to roll up by 350 nm on average from unrestrained edges, forming partial nanoscrolls. Resistance is measured while regulating the charge carrier concentration with a SiO2 back gate. A large hysteresis is observed between increasing and decreasing backgate voltage sweeps, with a factor of two or greater difference in resistance that persists after the backgate voltage returns to 0 V. The hysteresis is more pronounced and consistent in devices with a higher proportion of nanoscroll to flat graphene.
Stateful fuzzing for file systems
Correct file system behavior is vital to developing robust higher-level software and applications. However, correctly and efficiently investigating the wide range of file system behavior makes testing file systems a difficult task. In this thesis, I designed and implemented SibylFuzzer, a stateful fuzzer for testing file system behavior. SibylFuzzer is based on SibylFS, a third-party system comprising a model of acceptable file system behavior and a procedure for comparing real-life file system implementation behavior against that model. SibylFuzzer uses SibylFS in two ways: first, as a source of file system knowledge to produce in-depth and meaningful tests; second, as a correctness standard, such that any disagreement with a real-life file system's behavior indicates a potential bug within the real-life file system. I implemented SibylFuzzer in OCaml and performed all tests on a Linux file system.
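The loop below is a hedged Python stand-in for the model-as-oracle idea described above (the real SibylFuzzer is written in OCaml against SibylFS): a toy in-memory model predicts whether each operation should succeed, and any disagreement with the real file system flags a trace worth inspecting.

```python
import os, random, tempfile

# Toy model: the set of paths that should currently exist.
def model_apply(state, op, path):
    if op == "create":
        ok = path not in state
        state.add(path)
        return ok
    if op == "unlink":
        ok = path in state
        state.discard(path)
        return ok

def real_apply(root, op, path):
    try:
        full = os.path.join(root, path)
        if op == "create":
            open(full, "x").close()   # exclusive create, fails if file exists
        else:
            os.unlink(full)
        return True
    except OSError:
        return False

random.seed(0)
with tempfile.TemporaryDirectory() as root:
    state, trace = set(), []
    for _ in range(200):
        op, path = random.choice(["create", "unlink"]), random.choice("abc")
        trace.append((op, path))
        if model_apply(state, op, path) != real_apply(root, op, path):
            print("divergence on", trace[-1])   # candidate bug (or model gap)
            break
    else:
        print("no divergence in", len(trace), "operations")
```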
Designing behavioral health integration in primary care : a practical outcomes-based framework
Patients with comorbid physical, behavioral, and social needs-often referred to as high-need patients-tend to be the most frequent utilizers of the health care system. The US health care system, with fragmented behavioral and medical health care sectors, is unable to effectively meet the complex needs of high-need patients. This results in high health care utilization, increased health care costs, and poor health outcomes among this population. Behavioral Health Integration in Primary Care (BHIPC) is widely promoted as a means to improve the access, quality, and continuity of health care services in a more efficient way, especially for people with complex needs. Hundreds of BHIPC programs are being implemented across health care settings in the US. However, the concept of BHIPC is wide-ranging, and it has been used as an overarching approach to describe integration efforts that vary in design, scope, and value. Research on how BHIPC is implemented in practice is limited. Practitioners and policymakers find it challenging to evaluate BHIPC programs and to identify and scale up their most critical elements. In this thesis, I develop a design-based framework that deconstructs the ambiguous concept of BHIPC into a set of tangible design elements and decisions. Furthermore, in order to inform how BHIPC is implemented in practice, I use this design-based framework to examine the behavioral health integration programs in four community health centers in Massachusetts. I found that by just comparing the underlying design elements, it is difficult to assess BHIPC programs and distinguish a successful program from an unsuccessful one. I therefore recommend and propose an outcomes-based framework for differentiating and evaluating BHIPC programs. I also recommend that future researchers refine and standardize the process measures I introduce so that they can be used as guideposts by primary care practitioners to develop their BHIPC programs.
Recent changes in the variability and seasonality of temperature and precipitation in the Northern Hemisphere
This study investigates recent changes in the variability and seasonality of temperature and precipitation in the Northern Hemisphere. The mean and variance of daily temperature and precipitation anomalies are calculated for each year over a 35-year period and compared to a base period. For temperature in the Northern Hemisphere, a noticeable warming trend amplified in the higher latitudes was observed, as well as a significant decrease in variability in the mid and high latitudes. For precipitation in the Northern Hemisphere, a drying trend and decreasing trend in variability were observed in the mid latitudes during summer. The seasonal cycles of both temperature and precipitation were also analyzed. The trends in temperature seasonal amplitude and phase were studied and revealed some influence of Arctic sea ice loss that changes the seasonality of local temperature, and Arctic amplification that potentially influences temperature seasonality in the mid and high latitude land regions. To determine whether the changes in temperature seasonality may affect temperature variance, analyses were performed by removing the phase trends from the temperature data using two methods. The phase trend-removed temperatures were found to have no prominent trends in variance. This suggests that changes in the temperature variance may be related to changes in temperature seasonality. To study what affects precipitation variability, the coefficient of variation (ratio of standard deviation to mean), which determines the shape of the mixed gamma probability distribution function (PDF) of precipitation, was studied. It was found that the mean and variance of precipitation have a fixed ratio over time, suggesting that the shape of the precipitation PDF has not changed. Therefore changes in the precipitation variance in the midlatitudes could be simply explained by the change in the mean precipitation in the same region.
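The diagnostic described above can be sketched as follows, using synthetic gamma-distributed daily precipitation in place of the reanalysis fields: if the distribution's shape is unchanged, the coefficient of variation (standard deviation divided by mean) stays roughly constant while the mean drifts, so variance changes track the mean alone.

```python
import numpy as np

rng = np.random.default_rng(0)
years = range(1980, 2015)
shape = 0.7                                  # gamma shape held fixed: PDF shape unchanged
means = np.linspace(3.0, 2.5, len(years))    # imposed drying trend in the mean (mm/day)

cvs = []
for mu in means:
    daily = rng.gamma(shape, scale=mu / shape, size=365)  # one synthetic year
    cvs.append(daily.std() / daily.mean())                # coefficient of variation

# CV should hover near 1/sqrt(shape) even though the mean declines across years.
print(f"CV range across years: {min(cvs):.2f}-{max(cvs):.2f} (theory: {shape ** -0.5:.2f})")
```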
Effect of seismic loading on steel moment resisting frames
In recent history, the use of Steel Moment Resisting Frames (SMRF) in structural steel buildings has become popular among many engineers and designers. The use of these moment resisting frames allows for more open spaces between floors and columns than in buildings that use the more traditional braced frame construction. One of the critical aspects of the moment resisting frames is the connections between the beams and the columns. The Northridge earthquake near Los Angeles, California, in 1994 showed that the existing designs for SMRF connections were inadequate and unstable. As a result, new connection designs were needed for SMRF construction. This thesis will first discuss the causes of the failures of the SMRF connections that were discovered after the Northridge earthquake. Next, new performance and testing requirements for new connection designs will be examined. Lastly, one possible solution, the SidePlate connection system, will be analyzed.
A modified experts algorithm : using correlation to speed convergence with very large sets of experts
This paper discusses a modification to the Exploration-Exploitation Experts (EEE) algorithm. The EEE is a generalization of the standard experts algorithm designed for use in reactive environments. In these problems, the algorithm is only able to learn about the expert that it follows at any given stage. As a result, the convergence rate of the algorithm is heavily dependent on the number of experts it must consider. We adapt this algorithm for use with a very large set of experts. We do this by capitalizing on the fact that when a set of experts is large, many experts in the set tend to display similarities in behavior. We quantify this similarity with a concept called correlation, and use this correlation information to improve the convergence rate of the algorithm with respect to the number of experts. Experimental results show that, given the proper conditions, the convergence rate of the modified algorithm can be independent of the size of the expert space.
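The sketch below is a hedged, bandit-style illustration of the correlation idea, not the thesis's EEE modification: observing the payoff of the followed expert also updates the estimates of correlated experts (here, a hypothetical similarity that decays with index distance), so a large expert set is covered far faster than one expert at a time.

```python
import math
import random

random.seed(1)
n_experts = 200
# Hypothetical world: nearby experts behave similarly (smooth payoff in the index).
true_payoff = [0.9 * math.exp(-((i - 137) ** 2) / (2 * 30.0 ** 2)) for i in range(n_experts)]

def correlation(i, j):
    return max(0.0, 1.0 - abs(i - j) / 10.0)   # assumed known similarity structure

estimate, weight = [0.5] * n_experts, [0.0] * n_experts
for stage in range(2000):
    explore = random.random() < 1.0 / (1 + stage) ** 0.5
    i = (random.randrange(n_experts) if explore
         else max(range(n_experts), key=lambda k: estimate[k]))
    reward = true_payoff[i] + random.gauss(0, 0.05)
    for j in range(max(0, i - 10), min(n_experts, i + 11)):
        w = correlation(i, j)                  # share the observation with neighbors
        weight[j] += w
        estimate[j] += w * (reward - estimate[j]) / weight[j]

best = max(range(n_experts), key=lambda k: estimate[k])
print("estimated best expert:", best, "| true best:", true_payoff.index(max(true_payoff)))
```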
Enhancing service providers reliability by mitigating supply chain risk : the case of telecommunication networks
Service providers rely on the continuity of their service to sustain their businesses. While at first glance it may seem that service providers are not as dependent on their supply chains as product companies are, a closer look at some relevant systems shows that a stable and resilient supply chain is key to both maintaining and growing the service. A wireless network provider that does not have spare parts in place to maintain existing cell sites will see an increase in outage duration and, thereby, customer churn. A cable or satellite service provider that does not have equipment in the right place at the right time to expand to a new market will see competitors capture its customers. In order to eliminate, or at least mitigate, these types of business risks for service providers, a transformation of the Time to Recovery (TTR) / Time to Survive (TTS) framework is shown to fit the service domain. TTR represents the time it takes a supply chain system to recover from a disrupted supplier. TTS represents the time a supply chain system can continue to operate while its sources of supply are disrupted. The key metric introduced is the value of service, which allows us to measure the actual value lost as a result of service disruptions.
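A minimal sketch of the TTR/TTS logic described above: a disruption at a supply node incurs lost service value only for the time by which recovery (TTR) outlasts the system's ability to keep operating (TTS). The nodes and figures are hypothetical.

```python
# Lost value accrues only while the outage exceeds the system's survival time.
def lost_service_value(ttr_weeks, tts_weeks, value_per_week):
    return max(0.0, ttr_weeks - tts_weeks) * value_per_week

disruptions = {
    # node: (TTR, TTS, value of service per week of outage) -- all illustrative
    "cell-site spare-parts depot": (6.0, 2.0, 1.5e6),
    "set-top-box supplier":        (4.0, 5.0, 2.0e6),   # survives: no lost value
    "fiber-optic cable vendor":    (8.0, 3.0, 0.8e6),
}
for node, (ttr, tts, v) in disruptions.items():
    print(f"{node}: ${lost_service_value(ttr, tts, v):,.0f} at risk")
```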
A design for self-assembling robots in a system
This thesis presents the design, construction, control, and application of a novel concept for self-assembling robots in a system. The system is composed of multiple cooperative robots that are designed to self-assemble into a system, execute manipulative tasks, and self-repair, all without human assistance. The self-assembling feature employs four mechanical design guidelines: independent modules, one-touch assembly, self-alignment, and self-guiding. The independent design also employs independent motor control boards and a wireless communication board. For a decoupling effect, we chose a motor with a large gear ratio. For safety and modularization purposes, we implemented a newly designed Series Elastic Actuator to limit shock bandwidth through its compliance and to sense forces during manipulative tasks. This thesis also introduces a control algorithm based on these design parameters. Using the results of dynamic simulations, we developed a preliminary algorithm, based on the subsumption architecture, for picking up a module. Finally, we verified the design and algorithm via an application: picking up a module in unstructured environments.
EPB tunneling induced settlements in the Tren Urbano Project, Rio Piedras, Puerto Rico
Underground construction of the Rio Piedras section of the Tren Urbano project involved the construction of twin tunnels (6.3 m diameter) with Earth Pressure Balance machines in weathered alluvial soil. The depth of cover over the tunnel crown varies from 10 m to 13 m. The twin tunnels, which connect the Rio Piedras Station and the University of Puerto Rico Station, each have a length of 433 meters. Precast concrete linings provided the final structural support. Ground deformations were monitored throughout the construction of both tunnels. Volume loss is defined as the volume of ground loss as a proportion of the final tunnel volume and is measured in the plane perpendicular to the tunnel heading. Volume losses corresponding to the process of tunnel construction are identified in this thesis. Settlement troughs over both single and twin tunnels (when symmetric) are often described by a Gaussian curve. However, previous studies have suggested that the settlement trough due to twin tunnels is not symmetric with respect to the midpoint between the two tunnels.
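For reference, the Gaussian settlement trough mentioned above can be sketched as follows; the trough width parameter K, the assumed volume loss, and the resulting settlements are illustrative values, not the Tren Urbano measurements.

```python
import math

# Gaussian trough: S(x) = S_max * exp(-x^2 / (2 i^2)), with S_max tied to the
# volume loss via S_max = V_L * (pi D^2 / 4) / (sqrt(2 pi) * i) and i = K * z0.
D = 6.3            # tunnel diameter (m), from the abstract
z0 = 13.0 + D / 2  # depth to tunnel axis (m): cover to crown plus radius
K = 0.5            # trough width parameter, a common assumed value
V_L = 0.01         # assumed 1% volume loss

i = K * z0
S_max = V_L * (math.pi * D ** 2 / 4) / (math.sqrt(2 * math.pi) * i)
for x in range(0, 21, 5):   # transverse offset from the tunnel centerline (m)
    S = S_max * math.exp(-x ** 2 / (2 * i ** 2))
    print(f"x = {x:2d} m: settlement = {1000 * S:.1f} mm")
```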
Reorganizing and the United States Coast Guard : a study in decision-making
The trend within organizations over the last fifteen years has been to decentralize, empower subordinates, and eliminate management layers. This has sometimes been called the "new" organization. Since 1986, the Coast Guard has conducted three reorganizations. Yet the result of these reorganizations has been greater centralization, less empowerment for District Commanders, and an additional layer of management at Coast Guard Headquarters. This thesis explores why the Coast Guard - widely considered to be one of the "best" federal agencies - has bucked these "new" organization trends. The focus of this thesis is on the decision-making process. I examine two decisions to evaluate their success: (1) shifting support responsibilities from the field commander to a regional support and logistics command, and (2) adding a layer of management at Coast Guard Headquarters. Finally, I offer seven broad recommendations for how the Coast Guard should conduct its next reorganization effort. I offer three possible explanations for the Coast Guard's increased centralization. First, the Coast Guard is less centralized than recent reorganizations may indicate. Second, to meet dramatic budgetary reductions, the Coast Guard must reduce the number of personnel because of the relatively high percentage of its operating budget dedicated to personnel-related expenses; the Coast Guard used centralization and consolidations to achieve this. Third, the Coast Guard - unlike a private sector organization - is forced to look primarily at efficiency measures when faced with budgetary difficulties, and it has used centralization as a means to become more efficient. The Coast Guard added a layer of management at Headquarters in an effort to force decision-making lower within the Headquarters organizational structure. Often, adding a layer of management is viewed as forcing decision-making higher within an organization; the Coast Guard viewed it as a means to push decision-making lower. The results of shifting support from the District Commanders to the regional support commands have been mixed. Naval, electronic, and civil engineering support delivery is widely viewed as superior to the previous decentralized system. The decentralized approach appeared to work better for personnel, housing, medical, and administrative support. However, it is possible that the reduced level of resources, and not the organizational structure, is why this latter group is not working as well under a centralized system. The current Headquarters organization can work effectively if staffs are resourced appropriately and if decision-making authority is delegated.
Internet killed the Michelin star : the motives of narrative and style in food text creation on social media
Digital representations of food (food texts) have become mainstream content on social media sites and digital streaming sites. While they accomplish some goals similar to their analog counterparts (e.g., in-print cookbooks), like communicating information about a food's preparation or what its consumption would be like, the surplus of food texts has been ushered in by a transformation of media infrastructure: the internet, cameras on cheap mobile phones, and digital social network platforms. The creators of the bulk of food texts have shifted from authority figures in the field to anyone who dines out and goes online. With this shift in media ownership comes a change in status -- from expert to everyone. As a result, the dynamics of food discourse have also changed. I use interviews and ethnographies with fine dining chefs, food industry professionals, and media makers to illustrate these convergences and divergences in the creation and consumption of food texts today. TL;DR: While the underlying purpose of the construction and consumption of food texts remains the same from analog to digital form, the authority of food culture and its complementary narrative control has shifted as a result of the convergence of food texts and digital media affordances.
Lower bounds in distributed computing
Distributed computing is the study of achieving cooperative behavior between independent computing processes with possibly conflicting goals. Distributed computing is ubiquitous in the Internet, wireless networks, multi-core and multi-processor computers, teams of mobile robots, etc. In this thesis, we study two fundamental distributed computing problems, clock synchronization and mutual exclusion. Our contributions are as follows. 1. We introduce the gradient clock synchronization (GCS) problem. As in traditional clock synchronization, a group of nodes in a bounded-delay communication network try to synchronize their logical clocks by reading their hardware clocks and exchanging messages. We say the distance between two nodes is the uncertainty in message delay between the nodes, and we say the clock skew between the nodes is their difference in logical clock values. GCS studies clock skew as a function of distance. We show that, surprisingly, every clock synchronization algorithm exhibits some execution in which two nodes at distance one apart have Ω(log D / log log D) clock skew, where D is the maximum distance between any pair of nodes. 2. We present an energy-efficient and fault-tolerant clock synchronization algorithm suitable for wireless networks. The algorithm synchronizes nodes to each other, as well as to real time. It satisfies a relaxed gradient property; that is, it guarantees that, under certain reasonable operating parameters, nearby nodes are well synchronized most of the time. 3. We study the mutual exclusion (mutex) problem, in which a set of processes in a shared memory system compete for exclusive access to a shared resource. We prove a tight Ω(n log n) lower bound on the time for n processes to each access the resource once.
Lane changing models for arterial traffic
Driving behavior models for lane changing and acceleration form an integral component of microscopic traffic simulators and determine their value in evaluating different traffic management strategies. The state-of-the-art model for lane changing adopts a two-level framework: the first level involves a latent or unobserved choice of a target lane; the second level models the acceptance of adjacent gaps in the direction of the target lane. While this modeling approach has several advantages over past work, it assumes that drivers execute a lane change within the same time step in which a gap was found to be acceptable. In other words, under the time steps typically adopted in model applications, the lane change duration is assumed to be negligibly small. However, past work reports average lane change durations on the order of 5-6 seconds. Beyond this practical maneuvering requirement, the assumption fails further in moderate- or low-density traffic conditions with ample gap sizes, or in low-speed conditions, where the lane changing maneuver can take longer than average. The work outlined in this thesis proposes an extension to the two-level framework for lane changing models through a third level that explicitly models the lane change duration.
Architecture, seismology, carpentry, the West, and Japan, 1876-1923
This dissertation follows British professors at Tokyo's late nineteenth century College of Technology (Kobudaigaku) and continues into the twentieth century with the Japanese students they trained. My first chapters map out an argument between British disciplines over Japanese 'adaptation' and/or 'resistance' to nature, a conflict driven by the development of the modern science of seismology in Tokyo. Seismology was a unique cross-cultural project - a 'Western' instrumental science invented and first institutionalized in a non-Western place. I discuss how artifacts as diverse as seismographs, five-story wooden pagodas, and Mt. Fuji became 'boundary objects' in a fierce dispute between spokesmen for science and art over the character of the Japanese landscape and people. The latter chapters explain how young Japanese architects and seismologists re-mapped the discursive and instrumental terrains of their British teachers, challenging foreign knowledge-production from inside colonizing disciplines. The text is framed around the story of the Great Nobi Earthquake of 1891. According to contemporary Japanese narratives, the great earthquake (the most powerful in modern Japanese history) was particularly damaging to the new 'foreign' infrastructure, and caused Japanese to seriously question, for the first time, the efficacy of foreign knowledge. 'Japan's earthquake problem' went from being one of how to import European resistance into a fragile nation to one of how to make a uniquely fragile imported infrastructure resist the power of Japanese nature. I critically re-tell this Japanese story as a corrective to European and American images of Meiji Japan as a 'pupil country' and the West as a 'teacher culture'. "Foreign Knowledge" demonstrates in very concrete ways how science and technology, art and architecture, gender, race, and class co-constructed Meiji Japan. Distinctions between 'artistic' and 'scientific' representations of culture/nature were particularly fluid in late nineteenth century Tokyo. Architects in my text often speak in the name of science, and seismologists become art critics and even ethnographers. The narrative is also trans-national; centered in Tokyo, it follows Japanese architects, scientists, and carpenters to Britain, Italy, the United States, and Formosa.
Implementing rate-distortion optimization on a resource-limited H.264 encoder
This thesis models the rate-distortion characteristics of an H.264 video compression encoder to improve its mode decision performance. First, it provides a background to the fundamentals of video compression. Then it describes the problem of estimating rate and distortion of a macroblock given limited computational resources. It derives the macroblock rate and distortion as a function of the residual SAD and H.264 quantization parameter QP. From the resulting equations, this thesis implements and verifies rate-distortion optimization on a resource-limited H.264 encoder. Finally, it explores other avenues of improvement.
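The sketch below illustrates Lagrangian mode decision in the spirit described above: choose the macroblock mode minimizing J = D + lambda * R, with rate and distortion estimated from the residual SAD and QP. The linear rate model, the distortion model, and their coefficients are placeholders, not the thesis's fitted functions; the quantizer-step and Lagrange-multiplier forms are the commonly used H.264 ones.

```python
def choose_mode(candidates, qp):
    # candidates: mode -> (residual SAD, estimated header/motion bits)
    qstep = 2 ** ((qp - 4) / 6.0)              # H.264 quantizer step size vs. QP
    lam = 0.85 * 2 ** ((qp - 12) / 3.0)        # common H.264 mode-decision Lagrange multiplier
    costs = {}
    for mode, (sad, header_bits) in candidates.items():
        rate = 0.12 * sad / qstep + header_bits   # placeholder SAD/QP rate model
        dist = sad * qstep / 3.0                  # placeholder SAD/QP distortion model
        costs[mode] = dist + lam * rate           # J = D + lambda * R
    return min(costs, key=costs.get), costs

# Hypothetical macroblock: smaller partitions cut residual SAD but cost more header bits.
candidates = {"SKIP": (2600, 1), "16x16": (1900, 12), "8x8": (1500, 40), "4x4": (1400, 90)}
mode, costs = choose_mode(candidates, qp=28)
print(mode, {m: round(c) for m, c in costs.items()})
```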
Compiling Gallina to go for the FSCQ file system
Over the last decade, systems software verification has become increasingly practical. Many verified systems have been written in the language of a proof assistant, proved correct, and then made runnable using code extraction. However, due to the rigidity of extraction and the overhead of the target languages, the resulting code's CPU performance can suffer, with limited opportunity for optimization. This thesis contributes CoqGo, a proof-producing compiler from Coq's Gallina language to Go. We created Go', a stylized semantics of Go that enforces linearity, and implemented proof-producing compilation tactics from Gallina to Go', plus a straightforward translation from Go' to Go. Applying a prototype of CoqGo, we compiled a system call in the FSCQ file system, with minimal changes to FSCQ's source code. Taking advantage of the increased control given by CoqGo, we implemented three optimizations, bringing the system call's CPU performance to 19% faster than the extracted version.
Optimal workloop energetics of muscle-actuated systems
Skeletal muscles are the primary actuators that power, stabilize and control locomotive and functional motor tasks in biological systems. It is well known that coordinated action and co-activation of multiple muscles give rise to desirable effects such as enhanced postural and dynamic stability. In this thesis, we study the role of muscle co-activation from an energetics perspective: Are there situations in which antagonist co-activation leads to enhanced power generation, and if so, what is the underlying mechanism? The mechanical energetics of muscles are traditionally characterized in terms of workloop measures where muscles are activated against oscillating, zero-admittance motion sources. We extend these measures to more natural, "mid-range" admittance loads, actuated by multiple muscles. Specifically, we set up the problem of a second-order mechanical system driven by a pair of antagonist muscles. This is the simplest problem where the influences of load dynamics and muscle co-activation on the output energetics may be investigated. To enable experimentation, a muscle testing apparatus capable of real-time servo emulation of the load is developed and utilized for identification and workloop measurements.
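A hedged sketch of the setup described above: a second-order load (mass, damper, spring) driven by a pair of antagonist muscle-like tensions, with the net work per cycle accumulated as the workloop integral of force over displacement. The sinusoidal-activation "muscle" and all parameter values are crude placeholders.

```python
import math

m, b, k = 0.05, 0.5, 20.0             # load mass, damping, stiffness (illustrative SI values)
f0, freq, phase = 2.0, 3.0, math.pi   # peak tension, activation frequency (Hz), antagonist phase
dt, cycles = 1e-4, 5

x, v, work, t = 0.0, 0.0, 0.0, 0.0
t_end = cycles / freq
while t < t_end:
    act_a = max(0.0, math.sin(2 * math.pi * freq * t))          # agonist activation
    act_b = max(0.0, math.sin(2 * math.pi * freq * t + phase))  # antagonist activation
    f_net = f0 * act_a - f0 * act_b    # antagonists pull in opposite directions
    a = (f_net - b * v - k * x) / m    # second-order load dynamics
    work += f_net * v * dt             # increment of the workloop integral (F dx)
    x, v, t = x + v * dt, v + a * dt, t + dt

print(f"net muscle work over {cycles} cycles: {work:.3f} J")
```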
Economic advancement or social exclusion? : less-educated workers, cost-of-living and migration in high-tech regions
Several high-tech regions today show signs of displacement and exclusion of low-skill workers from the employment and wage benefits of a booming economy. Whether high-tech activities are responsible for these trends, or whether the ex ante characteristics of a region would predispose its residents to exclusion even in the absence of high-tech growth, are questions that regional scientists have left largely unexplored. Understanding what low-skill and high-skill workers undergo in the presence of this activity, and how that compares to the reality of those who reside in regions whose economy is not dependent on knowledge-intensive sectors, provides a backdrop for policy makers to evaluate industry-choice decisions in the interest of economic growth and social equity in regional development. To provide that backdrop, I empirically answer: How are the benefits of high-tech development distributed between less- and more-educated workers? How does this distribution compare to that of regions that do not follow an education-intensive development path? Are social equity and sustained growth possible under these conditions? Through regression analysis across 50 regions in the United States during the 1990s, I show that shifts in regional economic-base composition towards a greater concentration of high-tech activity cannot, on their own, be held responsible for exclusionary patterns in these regions.
Remote depth survey of the Charles River Basin
Unmanned vehicles may provide more time- and cost-effective methods of gathering hydrographic survey data when compared to traditional, manned survey vessels. A remote-controlled unmanned surface vehicle (USV) was outfitted with a depth transducer for the purpose of conducting a depth survey of the Charles River Basin. Two windsurfer fins were added to the stern of the USV kayak for directional stability without significant drag, permitting a maximum vessel speed of 4.4 knots. A total of 1485 latitude-longitude GPS points with corresponding depth measurements were taken. The Charles Basin data were plotted with ArcGIS software and used to create depth contours and three-dimensional surface plots of the river bottom. This prototype survey USV shows promise and could become readily feasible with further development and added autonomy.
The effects of native advertisement on the U.S. news industry
The migration of news to the web has given advertisers new opportunities to target readers with ever more personal and engaging ads. This sponsored content, known as native advertising, is placed in news publications often camouflaged as legitimate news. Though native ads bring revenue to the struggling U.S. news industry, their ability to draw loyal readers off-site could hurt publishers in the long run. Herein, I measure the quality and the impact of ads from Content Recommendation Networks (CRNs) on the U.S. news industry between March 2016 and February 2019. A CRN controls both the third-party ads and the house ads -- recommendations for news articles from the host publisher -- on a news publisher's website. During the 2016 presidential election, I found that 17% of ad headlines were political and 67% of the stories were clickbait. Over the 2018 midterm elections, 15% of the ads were political and 73% were clickbait. While third-party ads are more often clickbait than house ads, the increase in clickbait between 2016 and 2018 is larger for the house ads. Further, I investigate the effect that a one-time exposure to these ads has on the perceived credibility of news articles. Four publishers were under study: CNN, Fox News, The Atlantic, and the Sacramento Bee. A one-time exposure to CRN ads was found to have no significant effect on the credibility of traditional publishers. Yet the CRN ads impacted the credibility of less well-known publishers: ads increased the credibility of the news on the Sacramento Bee, and decreased it on The Atlantic.
System interface challenges in combining mature technologies with rigid architectures
This thesis examines the integration of mature technologies with rigid architectures through concepts from Systems Architecture, Systems Engineering, and Project Management. The research focuses on a project with John Deere to integrate large-scale GPS vehicle control for agricultural fertilizer sprayers into an existing platform for sports turf maintenance spraying via the John Deere ProGator with Select Spray sprayer attachment. Agricultural GPS control systems and the ProGator turf sprayer are long-running legacy products of differing scales in John Deere's product portfolio, and their architectures are rigid. The architectures of these products are broken down using Operand-Process Methodology and Design Structure Matrices for component integration and for mapping processes to stakeholder needs. Additionally, prototype development vehicles are used to gather stakeholder needs and generate product engineering requirements. The gathering, validation, and revision of these requirements, along with the product development cycle, is facilitated by Spiral Development to manage the project through iterations starting with mule concept machines through to full production release.
Model-code separation architectures for compression based on message-passing
Data is compressible by presuming a priori knowledge known as a data model, and applying an appropriate encoding to produce a shorter description. The two aspects of compression - data modeling and coding - are, however, not always conceived as distinct, nor implemented as such in compression systems, leading to difficulties of an architectural nature. For example, how would one make improvements upon a data model whose specific form has been standardized into the encoding and decoding processes? How would one design coding for new types of data such as in biology and finance, without creating a new system in each case? How would one compress data that has been encrypted when the conventional encoder requires data-in-the-clear to extract redundancy? And how would mobile acquisition devices obtain good compression with lightweight encoders? These and many other challenges can be tackled by an alternative compression architecture. This work contributes a complete "model-code separation" system architecture for compression, based on a core set of iterative message-passing algorithms over graphical models representing the modeling and coding aspects of compression. Systems following this architecture resolve the challenges posed by current systems, and stand to benefit further from future advances in the understanding of data and the algorithms that process them. In the main portion of this thesis, the lossless compression of binary sources is examined. Examples are compressed under the proposed architecture and compared against some of the best systems today and to theoretical limits. They show that the flexibility of model-code separation does not incur a performance penalty. Indeed, the compression performance of such systems is competitive with and sometimes superior to existing solutions. The architecture is further extended to diverse situations of practical interest, such as mismatched and partially known models, different data and code alphabets, and lossy compression. In the process, insights into model uncertainty and universality, data representation and alphabet translation, and model-quantizer separation and low-complexity quantizer design are revealed. In many ways, the proposed architecture is uniquely suitable for understanding and tackling these problems. Throughout, a discourse is maintained over architectural and complexity issues, with a view toward practical implementability. Of interest to system designers, issues such as rate selection, doping, and code selection are addressed, and a method similar to EXIT-chart analysis is developed for evaluating when compression is possible. Suggestions for system interfaces and algorithmic factorization are distilled, and examples showing compression with realistic data and tasks are given to complete the description of a system architecture accessible to broader adoption. Ultimately, this work develops one architecturally principled approach toward flexible, modular, and extensible compression system design, with practical benefits. More broadly, it represents the beginning of many directions for promising research at the intersection of data compression, information theory, machine learning, coding, and random algorithms.
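The toy sketch below illustrates the separation principle only: the encoder emits model-free random parity (syndrome) bits, while the source model enters solely at the decoder. An exhaustive maximum-probability search over a tiny block stands in for the iterative message-passing decoders developed in the thesis, and the block length, random code, and Bernoulli source model are all illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 12, 7, 0.15                  # block length, syndrome bits, source bias

H = rng.integers(0, 2, size=(m, n))    # random binary code constraints (no model used)
x = (rng.random(n) < p).astype(int)    # a source block drawn from a Bernoulli(p) model
syndrome = H @ x % 2                   # encoder output: m < n bits, model-free

def log_prob(z):                       # the source model appears only in the decoder
    ones = z.sum()
    return ones * np.log(p) + (len(z) - ones) * np.log(1 - p)

# Exhaustive maximum-probability decoder over all blocks consistent with the
# syndrome; a stand-in for the message-passing decoders of the thesis.  With
# too few syndrome bits relative to the source entropy, decoding can fail.
best = max((np.array(z) for z in itertools.product([0, 1], repeat=n)
            if np.array_equal(H @ np.array(z) % 2, syndrome)),
           key=log_prob)

print("decoded correctly:", np.array_equal(best, x))
```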
Augmented manual fabrication methods for 2D tool positioning and 3D sculpting
Augmented manual fabrication involves using digital technology to assist a user engaged in a manual fabrication task. Methods in this space aim to combine the abilities of a human operator, such as motion planning and large-range mechanical manipulation, with technological capabilities that compensate for the operator's areas of weakness, such as precise 3D sensing, manipulation of complex shape data, and millimeter-scale actuation. This thesis presents two new augmented manual fabrication methods. The first is a method for helping a sculptor create an object that precisely matches the shape of a digital 3D model. In this approach, a projector-camera pair is used to scan a sculpture in progress, and the resulting scan data is compared to the target 3D model. The system then computes the changes necessary to bring the physical sculpture closer to the target 3D shape, and projects guidance directly onto the sculpture that indicates where and how the sculpture should be changed, such as by adding or removing material. We describe multiple types of guidance that can be used to direct the sculptor, as well as several related applications of this technique. The second method described in this thesis is a means of precisely positioning a handheld tool on a sheet of material using a hybrid digital-manual approach. An operator is responsible for manually moving a frame containing the tool to the approximate neighborhood of the desired position. The device then detects the frame's position and uses digitally-controlled actuators to move the tool within the frame to the exact target position. By doing this in a real time feedback loop, a tool can be smoothly moved along a digitally-specified 2D path, allowing many types of digital fabrication over an unlimited range using an inexpensive handheld tool.
Redefining identity in the altered rural landscape
Within a place, there is a fluidity of demographic, a collision and interaction between identities that requires negotiation, both spatially and socially. This project aims to assemble a series of actions toward the design of a space to negotiate that realm of personal and social adaptation within the urban environment that comes with the relocation of self through immigration, or the disruption of a home by the presence of foreignness. The contemporary rural community must negotiate these conditions in a new way, as it is being affected by social changes that, unlike the urban context, it does not have the infrastructure to support. The architect enters the project as an active observer, her actions of interpretive investigation assembling a set of components of design gathered through strangers and locals that represent the identity of the site. These components will be used to design a public architecture that serves as the container of memory and generator of exchange, mediating between the physical landscape and the constructed landscape of the assembled personal identity of individuals. The project will serve as a vehicle to understand and assemble a rural public space that is inclusive of memory and provides agency for progress. As cultural groups are transferred through contexts, the constructed landscapes of identity and the physical landscapes are altered and derived by the juxtaposition of the two, forming a dynamic relationship that is simultaneously individual and multiple. This reciprocity is especially evident in the selected context of Arcadia, Florida where cultural identity is altered through a particular event, such as a drastic physical alteration (hurricane), instigating mutation in one or both landscapes, forcing a restructuring of the whole and an acknowledgment of not only absence of the lost, but also presence of the new identities.
Maintenance-based design of concrete parking structures
The purpose of this study is to determine what type of preventative maintenance for a concrete parking structure will produce the maximum economic benefit. Existing models for concrete deterioration are analyzed for their accuracy in predicting the service lives of concrete structures and a model appropriate for concrete parking structures is selected. The selected model is modified to account for the unique microclimate that is created within a parking structure and used to create deterioration curves that quantify the fraction of a structure deteriorated at a given time. Several preventative maintenance programs that summarize current practice in the repair of concrete parking structures are created. The programs are analyzed using the selected concrete deterioration model and the method of preventative maintenance that maximizes the net present worth of a concrete parking structure is identified.
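A minimal sketch of the economic comparison step is shown below. The cash-flow schedules, amounts, and discount rate are purely hypothetical placeholders; in the thesis, repair timing and cost follow from the selected deterioration model and the maintenance programs it evaluates.

```python
# Net-present-worth comparison of two hypothetical maintenance programs.
def npv(cash_flows, rate):
    """cash_flows: dict {year: cost (negative) or benefit (positive)}."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows.items())

do_nothing   = {20: -900_000}                                  # one large repair at year 20
preventative = {5: -60_000, 10: -60_000, 15: -60_000, 25: -250_000}

rate = 0.05
for name, flows in [("do nothing", do_nothing), ("preventative", preventative)]:
    print(f"{name:>12}: NPW = ${npv(flows, rate):,.0f}")
```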
Uncertainty assessment of complex models with application to aviation environmental systems
Numerical simulation models that support decision-making and policy-making processes are often complex, involving many disciplines and long computation times. These models typically have many factors of different character, such as operational, design-based, technological, and economics-based. Such factors generally contain uncertainty, which leads to uncertainty in model outputs. For such models, it is critical to both the application of model results and the future development of the model that uncertainty be properly assessed. This thesis presents a comprehensive approach to the uncertainty assessment of complex models intended to support decision- and policy-making processes. The approach consists of seven steps: establishing assessment goals, documenting assumptions and limitations, documenting model factors and outputs, classifying and characterizing factor uncertainty, conducting uncertainty analysis, conducting sensitivity analysis, and presenting results. Factor uncertainty is represented probabilistically, characterized by the principle of maximum uncertainty, and propagated via Monte Carlo simulation. State-of-the-art methods of global sensitivity analysis are employed to apportion model output variance across model factors, and a fundamental extension of global sensitivity analysis, termed distributional sensitivity analysis, is developed to determine on which factors future research should focus to reduce output variability.
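The sketch below illustrates the propagation and apportionment steps on a toy model: factors are sampled by Monte Carlo and first-order variance-based sensitivity indices are estimated with a standard pick-freeze (Saltelli-type) estimator. The three-factor model and uniform factor distributions are assumptions for illustration, not the aviation environmental models treated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy stand-in for a complex simulation with three uncertain factors."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

k, N = 3, 100_000
A = rng.uniform(-1, 1, size=(N, k))        # two independent factor sample matrices
B = rng.uniform(-1, 1, size=(N, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))   # Monte Carlo estimate of output variance

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # "pick-freeze" column swap for factor i
    # Saltelli-style estimator of the first-order Sobol index S_i
    Si = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"factor {i}: first-order sensitivity index ~ {Si:.2f}")
```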
Design of a battery-powered induction stove
Many people in the developing areas of the world struggle to cook with stoves that emit hazardous fumes and contribute to greenhouse gas emissions. Electric stoves would alleviate many of these issues, but significant barriers to adoption, most notably lack of reliable electric power, make current commercial options infeasible. However, a stove powered from a 24 V DC input elegantly solves the issue of intermittent power by allowing car batteries to be used instead of a grid connection, while also allowing seamless integration with small-scale solar installations and solar-based micro-grids. No existing commercial stove or academic research effort has attempted to create an induction stove powered from a low voltage DC source. This thesis presents the design of a low voltage current-fed, full-bridge parallel resonant converter stove. The dynamics of this new topology are discussed in detail and simulations are provided to analyze the behavior. Additionally, a practical implementation of a 500 W - 1 kW stove is described. This stove is the first of its kind and represents a new contribution to both the field of induction cooking and the field of clean cooking solutions for the developing world.
Understanding and utilizing waveguide invariant range-frequency striations in ocean acoustic waveguides
Much of the recent research in ocean acoustics has focused on developing methods to exploit the effects that the sea surface and seafloor have on acoustic propagation. Many of those methods require detailed knowledge of the acoustic properties of the seafloor and the sound speed profile (SSP), which limits their applicability. The range-frequency waveguide invariant describes striations that often appear in plots of acoustic intensity versus range and frequency. These range-frequency striations have properties that depend strongly on the frequency of the acoustic source and on the distance between the acoustic source and receiver, but only mildly on the SSP and seafloor properties. Because of this dependence, the waveguide invariant can be utilized for applications such as passive and active sonar, time-reversal mirrors, and array processing, even when the SSP or the seafloor properties are not well known. This thesis develops a framework for understanding and calculating the waveguide invariant, and uses that framework to develop signal processing techniques for the waveguide invariant. A method for passively estimating the range from an acoustic source to a receiver is developed and tested on experimental data. Heuristics are developed to estimate the minimum source bandwidth and minimum horizontal aperture required for range estimation. A semi-analytic formula for the waveguide invariant is derived using the WKB approximation along with a normal mode description of the acoustic field in a range-independent waveguide. This formula is applicable to waveguides with arbitrary SSPs, and reveals precisely how the SSP and the seafloor reflection coefficient affect the value of the waveguide invariant. Previous research has shown that waveguide invariant range-frequency striations can be observed using a single hydrophone or a horizontal line array (HLA) of hydrophones. This thesis shows that traditional array processing techniques are sometimes inadequate for the purpose of observing range-frequency striations using a HLA. Array processing techniques designed specifically for observing range-frequency striations are developed and demonstrated. Finally, a relationship between the waveguide invariant and wavenumber integration is derived, which may be useful for studying range-frequency striations in elastic environments such as ice-covered waveguides.
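A minimal sketch of how range-frequency striations arise is given below: summing the propagating modes of an idealized isovelocity waveguide (pressure-release surface, rigid bottom) over a grid of ranges and frequencies produces an interference pattern whose striation slope df/dr is approximately (beta * f / r), with beta near 1 for this waveguide. Mode amplitudes, attenuation, and realistic sound speed profiles are deliberately ignored, and all parameter values are assumptions for illustration.

```python
import numpy as np

# Range-frequency intensity pattern in an ideal isovelocity waveguide
# (pressure-release surface, rigid bottom), ignoring mode shapes and loss.
c, D = 1500.0, 100.0                                # sound speed (m/s), depth (m)
freqs = np.linspace(300.0, 400.0, 200)              # Hz
ranges = np.linspace(5e3, 10e3, 200)                # m

def kr(f):
    """Horizontal wavenumbers of the propagating modes at frequency f."""
    k = 2 * np.pi * f / c
    kz = (np.arange(1, 60) - 0.5) * np.pi / D        # modal vertical wavenumbers
    return np.sqrt(k ** 2 - kz[kz < k] ** 2)

intensity = []
for f in freqs:
    k_m = kr(f)
    intensity.append([np.abs(np.sum(np.exp(1j * k_m * r))) ** 2 for r in ranges])
intensity = np.array(intensity)

# Plotting intensity over (range, frequency) would show the striations; here we
# only confirm the grid was built.
print(intensity.shape)
```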
Human-automation collaboration in occluded trajectory smoothing
Deciding if and what objects should be engaged in a Ballistic Missile Defense System (BMDS) scenario involves a number of complex issues. The system is large and the timelines may be on the order of a few minutes, which drives designers to highly automate these systems. On the other hand, the critical nature of BMD engagement decisions suggests exploring a human-in-the-loop (HIL) approach to allow for judgment and knowledge-based decisions, which provide for potential automated system override decisions. This BMDS problem is reflective of the role allocation conundrum faced in many supervisory control systems, which is how to determine which functions should be mutually exclusive and which should be collaborative. Clearly there are some tasks that are too computationally intensive for human assistance, while other tasks may be completed without automation. Between the extremes are a number of cases in which degrees of collaboration between the human and computer are possible. This thesis motivates and outlines two experiments that quantitatively investigate human/automation tradeoffs in the specific domain of tracking problems. Human participants in both experiments were tested in their ability to smooth trajectories in different scenarios. In the first experiment, they clearly demonstrated an ability to assist the algorithm in more difficult, shorter timeline scenarios. The second experiment combined the strengths of both human and automation to create a human-augmented system. Comparison of the augmented system to the algorithm showed that adjusting the criterion for having human participation could significantly alter the solution. The appropriate criterion would be specific to each application of this augmented system. Future work should be focused on further examination of appropriate criteria.
Simulation and optimization of hot syngas separation processes in integrated gasification combined cycle
IGCC with CO2 capture offers an exciting approach for cleanly using the abundant coal reserves of the world to generate electricity. The present state-of-the-art synthesis gas (syngas) cleanup technologies in IGCC involve cooling the syngas from the gasifier to room temperature or lower to remove sulfur, carbon dioxide, and mercury, leading to a large efficiency loss. It is therefore important to develop processes that remove these impurities from syngas at an optimally high temperature in order to maximize the energy efficiency of an IGCC plant. The high temperature advanced syngas cleanup technologies are presently at various stages of development, and it is still not clear which technology and configuration of the IGCC process would be most energetically efficient. In this thesis, I present a framework to assess the suitability of various candidate syngas cleanup technologies by developing computational simulations of these processes, which are used in conjunction with Aspen Plus® to design various IGCC flowsheet configurations. In particular, we evaluate the use of membranes and sorbents for CO2 separation and capture from hot syngas in IGCC, as a substitute for solution-based absorption processes. We present a multi-stage model for CO2 separation from multi-component gas mixtures using polymeric membranes based on the solution-diffusion transport mechanism. A numerical simulation of H2 separation from syngas using Pd-alloy based composite metallic membranes is implemented to assess their performance for CO2 sequestration.
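As a sketch of the underlying flux balance (a single well-mixed stage rather than the multi-stage model of the thesis), the snippet below applies the solution-diffusion expression J_i = (P_i/l)(x_i p_feed - y_i p_perm) to a binary CO2/H2 mixture and iterates to a consistent permeate composition. The permeances, pressures, and compositions are hypothetical.

```python
# Single well-mixed membrane stage with solution-diffusion permeation.
p_feed, p_perm = 30e5, 1e5                 # feed and permeate pressures, Pa
perm = {"CO2": 1000e-10, "H2": 100e-10}    # permeance P_i/l, mol/(m^2 s Pa) (hypothetical)
x = {"CO2": 0.40, "H2": 0.60}              # retentate-side mole fractions (assumed)

y = {"CO2": 0.5, "H2": 0.5}                # initial guess for permeate composition
for _ in range(200):                       # fixed-point iteration on the flux ratios
    J = {i: perm[i] * (x[i] * p_feed - y[i] * p_perm) for i in x}
    total = sum(J.values())
    y = {i: J[i] / total for i in x}       # permeate composition set by flux fractions

print("permeate mole fractions:", {i: round(v, 3) for i, v in y.items()})
```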
Exploration of the mechanisms enabling team systems thinking
Aerospace systems are among the most complex anthropogenic systems and require large quantities of systems knowledge to design successfully. Within the aerospace industry, an aging workforce places those with the most systems experience near retirement at a time when fewer new programs exist to provide systems experience to the incoming generation of aerospace engineers and leaders. The resulting population will be a set of individuals who by themselves may lack sufficient systems knowledge. It is therefore important to look at teams of aerospace engineers as a new unit of systems knowledge and thinking. By understanding more about how teams engage in collaborative systems thinking (CST), organizations can better determine which types of training and intervention will lead to greater exchanges of systems-level knowledge within teams. Following a broad literature search, the constructs of team traits, technical process, and culture were identified as important for exploring CST. Using the literature and a set of 8 pilot interviews as guidance, 26 case studies (10 full and 16 abbreviated) were conducted to gather empirical data on CST enablers and barriers. These case studies incorporated data from 94 surveys and 65 interviews. From these data, a regression model was developed to identify the five strongest predictors of CST and facilitate validation. Eight additional abbreviated case studies were used to test the model and demonstrate the results are generalizable beyond the initial sample set. To summarize the results, CST teams are differentiable from non-CST teams.
Domesticating sprawl : Dearborn Michigan and the Green Moat
Over the last century of urban decentralization, the suburb migrated a critical distance beyond the traditional city, and transformed into sprawl. The homogenous landscape of sprawl is characterized by repeating horizontal imagery of featureless buildings foregrounded with grass berms, planned for experience through the mediating frame of the car's windshield. Contemporary design discourse has interrogated sprawl from many angles in search of ways to intervene in the most popular and most impenetrable form of American urbanism, issuing discussions ranging from those that raise polar alternatives to those that accept sprawl and meticulously analyze its forms and structure. However, this thesis asserts that the American Midwest is a unique and important territory that has not been adequately appraised in the sprawl debate. Not only does the underlying structure and ideology of the Midwestern landscape evoke certain comparisons to sprawl, one might argue that the American suburb was first born out of the Midwest, more specifically around the Motor City, Detroit. If the automobile is the enabling apparatus of sprawl, the birthplace of the automobile then coincides with the birthplace of the suburb. As both the originating source of suburban development and a current scene of booming sprawl, the metropolitan region of Detroit sees the confluence of the new and the old forms of decentralized urbanism and is accordingly an excellent proving ground for new insights and proposals.
Carbon capture and storage in the U.S. : a sinking climate solution
Coal-fired power plants produce half of the United States' electricity and are also the country's largest emitter of carbon dioxide, the greenhouse gas responsible for climate change. Carbon Capture and Storage (CCS) is a proposed technological solution that will sequester CO2 in the ground. Proponents of CCS have framed it as a "clean coal technology" and broadcast the story that it will solve both our dependence on coal and prevent future climate change impacts. However, the technology is not a practicable solution for climate change, even with the most generous timetables and goals for atmospheric carbon. It cannot be scaled in time, costs too much, has serious environmental risks, and will face public resistance. Yet, CCS remains a part of future U.S. energy policy because the coal and electric utility industries have funded an attractive message and story for it. Environmental advocacy organizations are unable to create an effective counter-story because they are split into two coalitions. Therefore, the public is not mobilized and there is no incentive for legislators to challenge coal and CCS.
Trait-based approaches to marine microbial ecology
The goal of this thesis is to understand how the functional traits of species, biotic interactions, and the environment jointly regulate the community ecology of phytoplankton. In Chapter 2, I examined Continuous Plankton Recorder observations of diatom and dinoflagellate abundance in the North Atlantic Ocean and interpreted their community ecology in terms of functional traits, as inferred from laboratory- and field-based data. A spring-to-summer ecological succession from larger to smaller cell sizes and from photoautotrophic to mixotrophic and heterotrophic phytoplankton was apparent. No relationship between maximum net growth rate and cell size or taxonomy was found, suggesting that growth and loss processes nearly balance across a range of cell sizes and between diatoms and dinoflagellates. In Chapter 3, I examined a global ocean circulation, biogeochemistry, and ecosystem model that indicated a decrease in phytoplankton diversity with increasing latitude, consistent with observations of many marine and terrestrial taxa. In the modeled subpolar oceans, seasonal variability of the environment led to the competitive exclusion of phytoplankton with slower growth rates and to lower diversity. The relatively weak seasonality of the stable subtropical and tropical oceans in the global model enabled long exclusion timescales and prolonged coexistence of multiple phytoplankton with comparable fitness. Superimposed on this meridional diversity decrease were "hot spots" of enhanced diversity in regions of energetic ocean circulation, which reflected a strong influence of lateral dispersal. In Chapter 4, I investigated how small-scale fluid turbulence affects phytoplankton nutrient uptake rates and community structure in an idealized resource competition model. The flux of nutrients to the cell and nutrient uptake are enhanced by turbulence, particularly for big cells in turbulent conditions. Yet with a linear loss form of grazing, turbulence played little role in regulating model community structure and the smallest cell size outcompeted all others because of its significantly lower R* (the minimum nutrient requirement at equilibrium). With a quadratic loss form of grazing, however, the coexistence of many phytoplankton sizes was possible and turbulence played a role in selecting the number of coexisting size classes and the dominant size class. The impact of turbulence on community structure in the ocean may be greatest in relatively nutrient-deplete regions that experience episodic inputs of turbulence kinetic energy.
Unexpected consequences of demand response : implications for energy and capacity price level and volatility
Historically, electricity consumption has been largely insensitive to short term spot market conditions, requiring the equating of supply and demand to occur almost exclusively through changes in production. Large scale entry of demand response, however, is rapidly changing this paradigm in the electricity market located in the mid-Atlantic region of the US, called PJM. Greater demand side participation in electricity markets is often considered a low cost alternative to generation and an important step towards decreasing the price volatility driven by inelastic demand. Recent experience in PJM, however, indicates that demand response in the form of a peaking product has the potential to increase energy price level and volatility. Currently, emergency demand response comprises the vast majority of demand side participation in PJM. This is a peaking product dispatched infrequently and only during periods of scarcity when thermal capacity is exhausted. While emergency demand response serves as a cheaper form of peaking resource than gas turbines, it has recently contributed to increases in energy price volatility by setting price at the $1,800/MWh price cap, substantially higher than the marginal cost of most thermal generation. Additionally, the entry of demand response into the PJM capacity market is one of the primary drivers behind capacity prices declining by over fifty percent. This study investigates the large penetration of emergency demand response in PJM and the implications for the balance between energy and capacity prices and energy price volatility. A novel model is developed that dynamically simulates generation entry and exit over a long term horizon based on endogenously determined energy and capacity prices. The study finds that, while demand response leads to slight reductions in total generation cost, it shifts the bulk of capacity market revenues into the energy market and also vastly increases energy price volatility. This transition towards an energy only market will send more accurate price signals to consumers as costs are moved out of the crudely assessed capacity charge and into the dynamic energy price. However, the greater volatility will also increase the risk faced by many market participants. The new market paradigm created by demand response will require regulators to balance the importance of sending accurate price signals to consumers against creating market conditions that decrease risk and foster investment.
Exploring constraint removal motion planners
We present algorithms for motion planning that can tolerate collisions. Because finding a path of minimum cover is prohibitively expensive, we investigate algorithms that work well in practice and find solutions close to the true minimum cover solution. We introduce the notion of removal importance for obstacles and the family of iterative obstacle-removing RRTs (IOR-RRTs). This family of algorithms operates similarly to the RRT but iteratively tolerates more collisions in trying to identify a path. One member of the family that performs well is the search-informed IOR-RRT. This search technique first performs bidirectional collision-free search to find a clear path if possible. On failure, it iteratively selects an obstacle for removal using its removal importance. We measure the performance of our algorithms on a multi-link robot operating both in environments with feasible collision-free paths and in those where collisions must be allowed.
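A highly simplified sketch of the iterative obstacle-removing idea is given below for a 2D point robot: the planner first attempts a collision-free RRT and then retries while tolerating progressively more obstacle crossings. The removal-importance ordering and the search-informed variant described above are not reproduced here, and the obstacle layout is hypothetical.

```python
import math, random

random.seed(2)

# Circular obstacles given as (cx, cy, radius); start/goal in a 10 x 10 workspace.
obstacles = [(4, 4, 1.5), (6, 7, 1.0), (2, 7, 0.8)]
start, goal = (1.0, 1.0), (9.0, 9.0)
step, goal_tol = 0.5, 0.5

def hits(p, q):
    """Indices of obstacles whose disc the segment p->q crosses (coarse sampling)."""
    out = set()
    for t in (i / 10 for i in range(11)):
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        out |= {i for i, (cx, cy, r) in enumerate(obstacles)
                if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2}
    return out

def rrt(max_removed, iters=3000):
    """Grow an RRT whose branches may cross at most `max_removed` obstacles."""
    nodes = [(start, frozenset())]           # (position, obstacles crossed so far)
    for _ in range(iters):
        rand = goal if random.random() < 0.05 else (random.uniform(0, 10),
                                                    random.uniform(0, 10))
        near, crossed = min(nodes, key=lambda n: math.dist(n[0], rand))
        d = math.dist(near, rand)
        new = rand if d <= step else (near[0] + step * (rand[0] - near[0]) / d,
                                      near[1] + step * (rand[1] - near[1]) / d)
        crossed_new = crossed | hits(near, new)
        if len(crossed_new) > max_removed:
            continue                         # too many obstacles would need removal
        nodes.append((new, frozenset(crossed_new)))
        if math.dist(new, goal) < goal_tol:
            return crossed_new
    return None

for budget in range(len(obstacles) + 1):     # tolerate 0, then 1, ... removed obstacles
    crossed = rrt(budget)
    if crossed is not None:
        print(f"path found crossing {len(crossed)} obstacle(s): {sorted(crossed)}")
        break
```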
Vibrational dynamics in water from the molecule's perspective
Liquid water is a fascinating substance, ubiquitous in chemistry, physics, and biology. Its remarkable physical and chemical properties stem from the intricate network of hydrogen bonds that connect molecular participants. The structures and energetics of the network can explain the physical properties of the substance on macroscopic length scales, but the events that initiate many chemical reactions in water occur on time scales of ~0.1-1 picosecond. The experimental challenges of measuring specific molecular motions on this time scale are formidable. The absorption frequency of the OH stretch of HOD in liquid D₂O is sensitive to the hydrogen bonding and molecular environment of the liquid. Ultrafast IR experiments endeavor to measure fluctuations in the hydrogen bond network by measuring spectral fluctuations on femtosecond time scales, but the data do not easily lend themselves to a direct microscopic interpretation. Computer simulations of empirical models, however, offer explicit microscopic detail but must be adapted to include a quantum mechanical vibration. I have developed methods in computer simulation to relate spectral fluctuations of the OH stretch in liquid D₂O to explicit microscopic information. The experiments also inform the simulation by providing important quantitative data about the fidelity and accuracy of a chosen molecular model, and help build a qualitative picture of hydrogen bonding in water. Our atomistic model reveals that ultrafast experiments of HOD in liquid D₂O measure transient fluctuations of the liquid's electric field. On the fastest time scales, localized fluctuations drive dephasing, while on longer time scales larger scale molecular reorganization destroys vibrational coherence.
Contact thermal lithography
Contact thermal lithography is a method for fabricating microscale patterns using heat transfer. In contrast to photolithography, where the minimum achievable feature size is proportional to the wavelength of light used in the exposure process, thermal lithography is limited by a thermal diffusion length scale and the geometry of the situation. In this thesis the basic principles of thermal lithography are presented. A traditional chrome-glass photomask is brought into contact with a wafer coated with a thermally sensitive polymer. The mask-wafer combination is flashed briefly with high intensity light, causing the chrome features to heat up and conduct heat locally to the polymer, transferring a pattern. Analytic and finite element models are presented to analyze the heating process and select appropriate geometries and heating times. In addition, an experimental version of a contact thermal lithography system has been constructed and tested. Early results from this system are presented, along with plans for future development.
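For a sense of the relevant length scale, the snippet below evaluates the thermal diffusion length L ≈ sqrt(αt) for representative handbook diffusivities and several flash durations. These materials and exposure times are illustrative assumptions, not the specific values analyzed in the thesis.

```python
import math

# Thermal diffusion length L = sqrt(alpha * t): the rough length scale over which
# heat spreads during a flash of duration t.  Diffusivities are representative
# handbook values, not those used in the thesis.
alpha = {"polymer resist": 1e-7, "silicon": 9e-5}     # thermal diffusivity, m^2/s
for t in (1e-6, 1e-3, 1.0):                           # flash durations, s
    for name, a in alpha.items():
        print(f"t = {t:8.0e} s, {name:>14}: L = {math.sqrt(a * t) * 1e6:10.2f} um")
```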
Evolving building system : expandable housing by means of corrugated metal sheets
Large housing programs in developing countries built out of permanent materials are likely to be too costly for low-income people. Such housing would have to be subsidized or allocated to middle-income groups. For this reason, some governments provide sites and services that allow low-income families to live in temporary units. This intervention has enabled low-income families to live on regulated, demarcated, and serviced land if not in permanent dwelling units. While doing so, they are able to build incrementally more permanent dwellings in accordance with their life-cycle and their changing financial resources. This type of strategy supports the concept that housing is not a finished and static product but a continuous process over time. In order to adapt the initial temporary dwellings built by the low-income groups, and help with their transition to permanent buildings, this thesis proposes a building system which adapts to the dynamic and progressive building processes of these groups. The initial shelter is built out of corrugated metal sheets and steel members made out of thin metal sheets. The building system proceeds in stages, expanding and evolving from a simple temporary shelter to a permanent dwelling. This transition is achieved by gradually strengthening the structure and transforming the surfaces of the dwelling with different levels of finishings.
Parameter estimation and control of nonlinearly parameterized systems
Parameter estimation in nonlinear systems is an important issue in measurement, diagnosis, and modeling. The goal is to find a differentiator-free, on-line adaptive estimation algorithm which can estimate the internal unknown parameters of dynamic systems using their inputs and outputs. This thesis provides new algorithms for adaptive estimation and control of nonlinearly parameterized (NLP) systems. First, a Hierarchical Min-max algorithm is developed to estimate unknown parameters in NLP systems. To relax the strong condition needed for convergence in the Hierarchical Min-max algorithm, a new Polynomial Adaptive Estimator (PAE) is introduced, and the Nonlinearly Persistent Excitation Condition for NLP systems, which is no more restrictive than LPE for linear systems, is established for the first time. To reduce the computational complexity of the PAE, a Hierarchical PAE is proposed. Its performance in the presence of noise is evaluated and is shown to lead to bounded errors. A dead-zone based adaptive filter is also proposed and is shown to accurately estimate the unknown parameters under some conditions. Based on the adaptive estimation algorithms above, a Continuous Polynomial Adaptive Controller (CPAC) is developed and is shown to control systems with nonlinearities that have piece-wise linear parameterizations. Since large classes of nonlinear systems can be approximated by piece-wise linear functions through local linearization, this opens the door for adaptive control of general NLP systems. The robustness of CPAC under bounded output noise and disturbances is also established.
Design of a seven degree of freedom arm with human attributes
Studying biological systems has given robotics researchers valuable insight into designing complex systems. This thesis explores one such application: a biomimetic robotic system designed around the human arm. The design of an anthropomorphic arm, an arm similar to a human's, requires deep insight into the kinematics and physiology of the biological system. Investigated here is the design and completion of an arm with 7 degrees of freedom and human-like range of motion in each joint. The comparison of actuation schemes and the determination of proper kinematics enable the arm to be built at a low cost while maintaining high performance and similarity to the biological analog. Complex parts are built by dividing structures into interlocking 2D shapes that can easily be cut out using a waterjet and then welded together with high reliability. The resulting arm will become part of a bionic system when combined with an existing bionic hand platform that is being developed in the Intelligent Machines Laboratory at MIT. With a well thought out modular design, the system will be used as a test bed for future research involving data simplification and neurological control. The completion of the anthropomorphic arm reveals that it is indeed feasible to use simple DC motors and quick fabrication techniques. The final result is a reliable, modularized, and anthropomorphic arm.
Demand forecast for short life cycle products : Zara case study
The problem of optimally purchasing new products is common to many companies and industries. This thesis describes how this challenge was addressed at Zara, a leading retailer in the "fast fashion" industry. This thesis discusses the development of a methodology to optimize the purchasing process for seasonal, short life-cycle articles. The methodology includes a process to develop a point forecast of demand for new articles, a top-down forecast at the color and size level, and an optimization module that produces recommendations defining the optimal quantity to purchase and the optimal origin to source from. This thesis is the first phase of a two-phase purchasing optimization process. The focus of this thesis is: a) the outline of an enhanced purchasing methodology, b) the development of the most important input in the system: a point forecast of demand at the article, color, and size level, and c) the development of an IT prototype to automatically manage the purchasing methodology. The second phase of the purchasing optimization process focuses on the optimization module, which is beyond the scope of this thesis.
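The sketch below illustrates the top-down disaggregation step only: an article-level point forecast is split to the color/size level using historical sales shares of comparable articles. The forecast quantity and the share table are hypothetical, and neither the construction of the article-level forecast nor the optimization module is shown.

```python
# Top-down disaggregation of an article-level demand forecast to the color/size
# level using historical sales proportions of comparable articles (illustrative).
article_forecast = 1200                      # forecast units for the new article

historical_mix = {                           # shares observed for comparable articles
    ("red", "S"): 0.10, ("red", "M"): 0.18, ("red", "L"): 0.12,
    ("blue", "S"): 0.15, ("blue", "M"): 0.27, ("blue", "L"): 0.18,
}

forecast = {key: round(article_forecast * share)
            for key, share in historical_mix.items()}
print(forecast)
```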
Spin effects in single-electron transistors
Basic electron transport phenomena observed in single-electron transistors (SETs) are introduced, such as Coulomb-blockade diamonds, inelastic cotunneling thresholds, the spin-1/2 Kondo effect, and Fano interference. With a magnetic field parallel to the motion of the electrons, single-particle energy levels undergo Zeeman splitting according to their spin. The g-factor describing this splitting is extracted in the spin-flip inelastic cotunneling regime. The Kondo splitting is linear and slightly greater than the Zeeman splitting. At zero magnetic field, the spin triplet excited state energy and its dependence on gate voltage are measured via sharp Kondo peaks superimposed on inelastic cotunneling thresholds. Singlet-triplet transitions and an avoided crossing are analyzed with a simple two-level model, which provides information about the exchange energy and the orbital mixing. With four electrons on the quantum dot, the spin triplet state has two characteristic energy scales, consistent with a two-stage Kondo effect description. The low energy scale extracted from a nonequilibrium measurement is larger than those extracted in equilibrium.
The contribution of published sustainability indexes to the construction of practical useful metrics for comparing strengths and weaknesses for achieving sustainability among countries
The thesis focuses on the evaluation of available national sustainability indexes, which measure and compare the performance of countries on various elements of sustainability. The first part presents an overview of the methodology used in existing published sustainability indexes. The elements that comprise an "ideal" multi-faceted index of sustainability are then identified and compared with the existing indexes. The importance of two enablers, the Potential for Innovation and Ethical Concerns and Governance, is also highlighted, since these affect the long-term performance of all elements of sustainable development. Results from a review of the components of the main categories of the index and scores for illustrative countries are then presented. Finally, a series of potential improvements to the existing Key Performance Indicators (KPIs) is presented, along with proposals for future research to further improve the proposed sustainability index.
Formation and maintenance of tropical cyclone spiral bands in idealized numerical simulations
Spiral bands are one of the most prominent features of tropical cyclones (TCs). These regions of clouds and rainfall are often the source of major TC hazards, such as inland flooding, mudslides, and tornadoes. Since the advent of radar technology, numerous ideas have been proposed to explain the existence of TC spiral bands. Previous hypotheses include the manifestation of atmospheric waves emanating from the TC inner core, boundary layer instabilities, and the interaction between surface cold pools and low-level vertical wind shear. Despite much effort, no consensus has yet been reached on the underlying physical mechanism responsible for TC bands. We approach this problem by examining the formation of TC spiral bands in a set of idealized three-dimensional simulations from the System for Atmospheric Modeling.
An accurate analytical framework for computing fault-tolerance thresholds using the [[7,1,3]] quantum code
In studies of the threshold for fault-tolerant quantum error-correction, it is generally assumed that the noise channel at all levels of error-correction is the depolarizing channel. The effects of this assumption on the threshold result are unknown. We address this problem by calculating the effective noise channel at all levels of error-correction specifically for the Steane [[7,1,3]] code, and we recalculate the threshold using the new noise channels. We present a detailed analytical framework for these calculations and run numerical simulations for comparison. We find that only X and Z failures occur with significant probability in the effective noise channel at higher levels of error-correction. We calculate that when changes in the noise channel are accounted for, the value of the threshold for the Steane [[7,1,3]] code increases by about 30 percent, from .00030 to .00039, when memory failures occur with one tenth the probability of all other failures. Furthermore, our analytical model provides a framework for calculating thresholds for systems where the initial noise channel is very different from the depolarizing channel, such as is the case for ion trap quantum computation.
Engineering of Human Immunodeficiency Virus gp120 by yeast surface display for neutralizing antibody characterization and immunogen design
The sequence diversity of glycoprotein gp120 of the envelope spike of Human Immunodeficiency Virus (HIV) allows the virus to escape from antibody selection pressure. Certain conserved epitopes, like the CD4 binding site, are required for viral fitness and antibodies against these epitopes are able to neutralize HIV from multiple clades. Passive immunization experiments suggest that eliciting such broadly reactive antibodies by vaccination may provide protection, but so far this has proven impossible. In this thesis, we establish a yeast surface display system for the development of gp120-based molecules for antibody characterization and immunogen design. A stripped core gp120 is constructed that retains the correct presentation of the CD4 binding site. Epitopes of several CD4 binding site-directed antibodies, including the gold standard antibody VRC01, are mapped with yeast displayed mutant libraries. A panel of immunogens that share the epitope defined by VRC01 but are diverse elsewhere on their surfaces is designed. Mice immunized sequentially with the diverse immunogens elicit an antibody response that is focused entirely on the VRC01 epitope. The serum cross-reacts with gp120 from multiple clades. Monoclonal antibodies from these mice are isolated and characterized.
Simplified methodology for indoor environment designs
Current design of the building indoor environment uses averaged single parameters such as air velocity, air temperature, or contaminant concentration. This approach gives only general information about thermal comfort and indoor air quality, which is limiting for the design of energy efficient and healthy buildings. The design of these buildings requires sophisticated but practical tools that are not currently available, and the objective of this thesis is to develop such a tool. The development of the simple design tool had several phases. Each phase employed simplified models validated with measured data in order to assess model accuracy and reliability. The validation data was obtained from a state-of-the-art experimental facility at MIT. Based on the collected data, we first developed simplified boundary conditions for the diffuser jet flow, which is the key flow element in mechanically ventilated spaces. The boundary conditions employ the resultant momentum from the supply diffusers without modeling the detailed diffuser geometry. Although simple, the models can simulate airflow from complex diffusers commonly used for air-conditioning with reasonable accuracy. Another simplification is the use of a zero-equation turbulence model to calculate indoor air distribution. The model uses the concept of eddy viscosity and approximates the turbulent viscosity with an algebraic equation. To test the turbulence model, an airflow program was developed. The program can simulate indoor airflow on a PC within several minutes, which is five to ten times faster than similar programs with a "standard" k-ε model. Finally, the airflow program was coupled with an energy analysis program. The combined program simultaneously analyzes internal heat transfer and air movement as well as the heat transfer through the building envelope. The impacts on the thermal comfort in the occupied zone are quantified, and we found that the thermal comfort in most cases is not
Population strategies to decrease sodium intake : a global cost-effectiveness analysis
Excessive sodium consumption is both prevalent and very costly in many countries around the world. Recent research has found that more than 90% of the world's adult population live in countries with mean intakes exceeding the World Health Organization's recommendation, and that more than a million deaths every year may be attributable to excess sodium. This study uses a simulation model to estimate, for the first time, the cost-effectiveness of government interventions to reduce population sodium consumption in every country in the world. It reveals substantial heterogeneity in cost-effectiveness by country that has never before been identified, and illustrates, also for the first time, the sensitivity of intervention efficacy to the theoretical-minimum-risk exposure distribution of sodium intake. The study makes a number of additional contributions. It offers a comprehensive appraisal of the methodological strengths and limitations of the surveys, imputation models, randomized controlled trials, prospective cohort studies, meta-analyses, and simulation models that together constitute the evidence base for public health recommendations on sodium intake, as well as for this study's own analysis. These methodological issues, some raised for the first time, are evaluated systematically to allow the relative quality of each input to be assessed and to inform prioritization of further research. The study also uses economic theory to ground a discussion of the proper nature and scope of government policies targeting population sodium consumption, and presents an up-to-date survey of sodium reduction initiatives around the world.
From bits to information : learning meets compressive sensing
A quantization approach to supervised learning, compressive sensing, and phase retrieval is presented in this thesis. We introduce a set of common techniques that allow us, in those three settings, to represent high dimensional data using the order statistics of linear and nonlinear measurements. We introduce new algorithms for signal classification in the multiclass and multimodal settings, as well as algorithms for signal representation and recovery from quantized linear and quadratic measurements. We analyze the statistical consistency of our algorithms and prove their robustness to different sources of perturbation, as well as their computational efficiency. We present and analyze applications of our theoretical results in realistic setups, such as computer vision classification tasks, Audio-Visual Automatic Speech Recognition, lossy image compression and retrieval via locality sensitive hashing, locally linear estimation in large scale learning, and Fourier sampling for phase retrieval - of particular interest in X-ray crystallography and super-resolution diffraction imaging applications. Our analysis of quantization based algorithms highlights interesting tradeoffs between memory complexity, sample complexity, and time complexity in algorithm design.
X-ray timing of the accreting millisecond pulsar SAX J1808.4-3658
We present a 7 yr timing study of the 2.5 ms X-ray pulsar SAX J1808.4-3658, an X-ray transient with a recurrence time of ≈2 yr, using data from the Rossi X-ray Timing Explorer covering 4 transient outbursts (1998-2005). Substantial pulse shape variability, both stochastic and systematic, was observed during each outburst. Analysis of the systematic pulse shape changes suggests that, as an outburst dims, the X-ray "hot spot" on the pulsar surface drifts longitudinally and a second hot spot may appear. The overall pulse shape variability limits the ability to measure spin frequency evolution within a given X-ray outburst (and calls previous ν̇ measurements of this source into question), with typical upper limits of |ν̇| < 2.5 × 10⁻¹⁴ Hz s⁻¹ (2σ). However, combining data from all the outbursts shows with high (6σ) significance that the pulsar is undergoing long-term spin down at a rate ν̇ = (−5.6 ± 2.0) × 10⁻¹⁶ Hz s⁻¹, with most of the spin evolution occurring during X-ray quiescence. We discuss the possible contributions of magnetic propeller torques, magnetic dipole radiation, and gravitational radiation to the measured spin down, setting an upper limit of B < 1.5 × 10⁸ G for the pulsar's surface dipole magnetic field and Q < 4.4 × 10³⁶ g cm² for the mass quadrupole moment. We also measured an orbital period derivative of Ṗorb = (3.5 ± 0.2) × 10⁻¹² s s⁻¹. We identify a strong anti-correlation between the fractional amplitude of the harmonic (r₂) and the X-ray flux (fx) in the persistent pulsations of four sources: SAX J1808.4-3658, IGR J00291+5934, XTE J1751-305, and XTE J1807-294. These sources exhibit a power-law relationship r₂ ∝ fx^γ with slopes ranging from γ = −0.47 to −0.70. The three other accreting millisecond pulsars that we analyzed, XTE J0929-314, XTE J1814-338, and HETE J1900.1-2455, do not as fully explore a wide range of fluxes, but they too seem to obey a similar relation. We argue that these trends may be evidence of the recession of the accretion disk as the outbursts dim. We examine the energy dependence of the persistent pulsations and thermonuclear burst oscillations from SAX J1808.4-3658.
Fracture characterization from seismic measurements in a borehole
Fracture characterization is important for optimal recovery of hydrocarbons. In this thesis, we develop techniques to characterize natural and hydraulic fractures using seismic measurements in a borehole. We first develop methods to characterize a fracture intersecting an open borehole by studying tubewave generation and attenuation at the fracture. By numerically studying the dispersion relation for fluid pressure in the fracture, we show that the tubewave measurements made in the transition regime from low to high frequency can constrain fracture compliance, aperture and length, while measurements made in the high-frequency regime can place a lower bound on fracture compliance. Analysis of field data suggest a large compliance value (10- 0m/Pa) for a meter-scale fracture and supports scaling of fracture compliance and applicability of scattering based methods for fracture characterization on a reservoir scale. We next study Distributed Acoustic Sensing (DAS), a novel Fiber Optic (FO) cable based seismic acquisition technology. We relate DAS measurements to traditional geophone measurements and make a comprehensive study of factors that influence DAS measurements. Using a layered borehole model, we analytically compare the sensitivity of DAS measurements to P- and S-wave incidence at arbitrary angles for the cases when the FO cable is installed in the borehole fluid or when cemented outside the casing. In addition, we study the azimuthal placement of the cable, the effect of cable design, and the effect of environmental conditions on time-lapse measurements. We show that DAS is a reliable tool for time-lapse monitoring. Finally, we analyze time-lapse DAS Vertical Seismic Profiling (VSP) data collected during a multi-stage hydraulic fracture treatment of a well drilled into a tight gas sandstone reservoir. We develop a processing workflow to mitigate the unique challenges posed by DAS data and propose methods for DAS depth calibration. We observe systematic and long-lived (over 10 days) time-lapse changes in the amplitudes of direct P-waves and nearly no phase changes due to stimulation. We argue that the time-lapse changes cannot be explained by measurement factors alone and that they may be correlated to the stimulated volume. Though the current geometry is not ideal, DAS is promising for hydraulic fracture monitoring.
Regulation of meiosis I chromosome segregation by Spo13 and Cdc5 in Saccharomyces cerevisiae
Meiosis is a specialized cell cycle that generates gametes for the purpose of disseminating genetic material to the next generation. The reduction of chromosome number by half is brought about by two chromosome segregation phases following a single DNA replication phase. In the first division, homologs segregate away from each other, and in the second division the sister chromatids separate. These two consecutive meiotic divisions necessitate innovations in chromosome dynamics and hence the involvement of both meiosis-specific modulators and regulators of the mitotic cell cycle. The work described herein characterizes the roles of two essential meiosis I regulators: a mitotic protein kinase, Cdc5, and a meiosis-specific gene, Spo13. The conserved polo-like kinase Cdc5 regulates many essential aspects of meiosis I, including the removal of cohesion between sister chromatids for homolog segregation, sister-kinetochore co-orientation, and exit from meiosis I. Spo13 likely cooperates with Cdc5 to regulate some of these processes. Spo13 controls kinetochore co-orientation and the retention of centromeric cohesion, which is essential for accurate sister chromatid segregation in meiosis II. In sum, this work elucidates the roles of two important regulators of the meiotic cell cycle and defines a component of the complex regulatory circuit necessary for the specialized meiotic divisions.
Computational approaches to modeling the conserved structural core among distantly homologous proteins
Modern techniques in biology have produced sequence data for huge quantities of proteins, and 3-D structural information for a much smaller number of proteins. We introduce several algorithms that make use of the limited available structural information to classify and annotate proteins whose structures are unknown but similar to solved structures. The first algorithm is actually a tool for better understanding solved structures themselves. Namely, we introduce the multiple alignment algorithm Matt (Multiple Alignment with Translations and Twists), an aligned fragment pair chaining algorithm that, in intermediate steps, allows local flexibility between fragments: Matt temporarily allows small translations and rotations to bring sets of fragments into closer alignment than is physically possible under rigid body transformation. The second algorithm, BetaWrapPro, is designed to recognize sequences of unknown structure that belong to specific all-beta fold classes. BetaWrapPro employs a "wrapping" algorithm that uses long-distance pairwise residue preferences to recognize sequences belonging to the beta-helix and beta-trefoil classes. It uses hand-curated beta-strand templates based on solved structures. Finally, SMURF (Structural Motifs Using Random Fields) combines ideas from both these algorithms into a general method to recognize beta-structural motifs using both sequence information and the long-distance pairwise correlations involved in beta-sheet formation. For any beta-structural fold, SMURF uses Matt to automatically construct a template from an alignment of solved 3-D structures.
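As a point of reference for how fragment-based structural aligners score candidate fragment pairs, the sketch below shows a standard Kabsch rigid-body superposition in Python; it is a generic building block, not the Matt algorithm itself, and the coordinates are hypothetical:

import numpy as np

# Illustrative sketch only: rigid-body superposition of two equal-length fragments
# via the Kabsch algorithm, the kind of primitive that fragment-pair alignment
# methods build on. Coordinates are hypothetical C-alpha positions (N x 3 arrays).
def kabsch_rmsd(P, Q):
    """Return the minimal RMSD between fragments P and Q after optimal
    rotation and translation of P onto Q."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                        # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T                 # optimal rotation
    diff = (P @ R.T) - Q
    return np.sqrt((diff ** 2).sum() / len(P))

P = np.random.rand(8, 3) * 10.0        # hypothetical fragment of 8 residues
Q = P @ np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]).T + 5.0  # rotated, translated copy
print(kabsch_rmsd(P, Q))               # ~0 for an exact rigid-body copy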
An analytic examination of the effect of the stratosphere on surface climate through the method of piecewise potential vorticity inversion
An analytic study was performed to examine the effect of the stratosphere on the surface of the earth. The method of piecewise potential vorticity inversion was employed to diagnose the magnitude of, and the dynamics behind, the stratosphere-surface link in both the transient and stationary cases. The potential vorticity inversion results in both the transient and stationary models indicated that the stratosphere exerts a significant effect at the surface of the earth. It was determined that, compared to the stratosphere as a whole, it is primarily the lower stratosphere that has the most significant impact at the surface. The results of this analytic study therefore indicate that the dynamics detailed here, linking the lower stratosphere to the surface, must be included in models for simulated surface weather or climate to be accurate.
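The flavor of piecewise inversion can be conveyed with a toy quasi-geostrophic example: a PV anomaly confined to upper levels is inverted on its own, and the induced streamfunction near the lower boundary stands in for the surface response. All grid dimensions and parameter values in the Python sketch below are illustrative assumptions, not the configuration used in the thesis:

import numpy as np

# Toy illustration (not the thesis model): piecewise QG PV inversion on a (y, z)
# plane. A PV anomaly confined to "stratospheric" levels is inverted with
# homogeneous boundary conditions, and the induced streamfunction at the lowest
# interior level stands in for the surface response.
Ny, Nz = 41, 21
Ly, H = 6.0e6, 3.0e4                          # m: domain width and depth (assumed)
dy, dz = Ly / (Ny - 1), H / (Nz - 1)
f, N = 1.0e-4, 1.0e-2                         # Coriolis parameter, buoyancy frequency (assumed)

y = np.linspace(-Ly / 2, Ly / 2, Ny)
z = np.linspace(0.0, H, Nz)
Y, Z = np.meshgrid(y, z, indexing="ij")
# PV anomaly: a blob confined to the upper third of the domain (the "stratospheric" piece)
q = 1.0e-9 * np.exp(-(Y / 1.0e6) ** 2) * (Z > 2.0 * H / 3.0)

# Invert  d2psi/dy2 + (f/N)^2 d2psi/dz2 = q  by Jacobi iteration, psi = 0 on boundaries
psi = np.zeros((Ny, Nz))
eps = (f / N) ** 2
for _ in range(5000):
    psi[1:-1, 1:-1] = (
        (psi[2:, 1:-1] + psi[:-2, 1:-1]) / dy**2
        + eps * (psi[1:-1, 2:] + psi[1:-1, :-2]) / dz**2
        - q[1:-1, 1:-1]
    ) / (2.0 / dy**2 + 2.0 * eps / dz**2)

surface_response = psi[:, 1]                  # induced streamfunction just above the lower boundary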
Offshore wind turbine nonlinear wave loads and their statistics
Because lateral flexural vibrations have a large influence on offshore wind turbine foundations, and because the foundation's natural frequencies lie above the dominant frequencies of linear wave load models, modeling the dynamic behavior of the foundation under nonlinear wave loads and analyzing their statistical characteristics have become important issues in offshore wind turbine design. This thesis derives an approximate model of the nonlinear wave loads in the time domain using Fluid Impulse Theory, verifies it against the boundary element method software WAMIT, and validates it with experimental measurements. The load level-crossing rates and the load power spectral density are obtained in multiple sea states. The simulated nonlinear wave loads are applied as the forcing mechanism on the offshore wind turbine and its foundation, and the mudline bending moments are computed and compared with experimental measurements. System identification is conducted by fitting the model to the experimental data using linear regression. Analytical extreme and fatigue predictions for the offshore wind turbine system are derived and evaluated in waters of finite depth and in multiple sea states. Keywords: nonlinear wave loads, nonlinear wave load statistics, system identification, extremes and fatigue
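The two load statistics named above, level-crossing rates and power spectral density, can be computed from any simulated load history; the Python sketch below uses a synthetic signal and an assumed sampling rate rather than the thesis's Fluid Impulse Theory output:

import numpy as np
from scipy.signal import welch

# Illustrative sketch (not the thesis code): empirical level up-crossing rate and
# power spectral density for a simulated load time series. The synthetic "load"
# below is a placeholder for the nonlinear wave load history.
fs = 10.0                                     # Hz, sampling rate (assumed)
t = np.arange(0.0, 3600.0, 1.0 / fs)          # one hour of data
rng = np.random.default_rng(0)
load = (np.sin(2 * np.pi * 0.1 * t) + 0.3 * np.sin(2 * np.pi * 0.2 * t)
        + 0.2 * rng.standard_normal(t.size))

def upcrossing_rate(x, level, fs):
    """Mean number of up-crossings of `level` per second."""
    up = (x[:-1] < level) & (x[1:] >= level)
    return up.sum() * fs / x.size

levels = np.linspace(0.0, 1.5, 7)
rates = [upcrossing_rate(load, lv, fs) for lv in levels]

freq, psd = welch(load, fs=fs, nperseg=4096)  # load power spectral density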
Imaging the two gaps of the high temperature superconductor Pb-Bi₂Sr₂CuO₆₊x
The nature and behavior of electronic states in high temperature superconductors are at the center of much debate. The pseudogap state, observed above the superconducting transition temperature Tc, is seen by some as a precursor to the superconducting state; others view it as a competing phase. Recently, this discussion has focused on the number of energy gaps in the system. Some experiments indicate a single energy gap, implying that the pseudogap is a precursor state; others indicate two, suggesting that it is a competing or coexisting phase. In this thesis, I report temperature-dependent scanning tunneling spectroscopy of Pb-Bi₂Sr₂CuO₆₊ₓ. I have developed a novel analytical method that reveals a new, narrow, homogeneous gap that vanishes near Tc, superimposed on the typically observed, inhomogeneous, broad gap, which is only weakly temperature dependent. These results not only support the two-gap picture, but also explain previously troubling differences between scanning tunneling microscopy and other experimental measurements.
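The thesis's analytical method is not reproduced here, but a generic illustration of extracting a gap width from a dI/dV spectrum (as half the coherence-peak separation) is sketched below with a synthetic spectrum and an assumed gap value:

import numpy as np

# Generic illustration (not the thesis's analysis method): estimate a gap width
# from a dI/dV spectrum as half the separation between the coherence peaks on
# either side of zero bias. The spectrum below is synthetic.
bias = np.linspace(-100e-3, 100e-3, 401)           # V
gap = 35e-3                                        # assumed gap (V)
didv = 1.0 + 0.5 * np.exp(-((np.abs(bias) - gap) / 5e-3) ** 2)   # toy coherence peaks

neg, pos = bias < 0, bias > 0
peak_neg = bias[neg][np.argmax(didv[neg])]
peak_pos = bias[pos][np.argmax(didv[pos])]
gap_estimate = 0.5 * (peak_pos - peak_neg)         # ~0.035 V (35 meV) for this toy spectrum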
Cooperative exploration under communication constraints
The cooperative exploration problem necessarily involves communication among agents, while the spatial separation inherent in this task places fundamental limits on the amount of data that can be transmitted. However, the impact of limited communication on the exploration process has not been fully characterized. Existing exploration algorithms do not realistically model the tradeoff between expansion, which allows more rapid exploration of the area of interest, and maintenance of close relative proximity among agents, which facilitates communication. This thesis develops new algorithms applicable to the problem of cooperative exploration under communication constraints. The exploration problem is decomposed into two parts. In the first part, cooperative exploration is considered in the context of a hierarchical communication framework known as a mobile backbone network. In such a network, mobile backbone nodes, which have good mobility and communication capabilities, provide communication support for regular nodes, which are constrained in movement and communication capabilities but which can sense the environment. New exact and approximation algorithms are developed for throughput optimization in networks composed of stationary regular nodes, and new extensions are formulated to take advantage of regular node mobility. These algorithms are then applied to a cooperative coverage problem. In the second part of this work, techniques are developed for utilizing a given level of throughput in the context of cooperative estimation. The mathematical properties of the information form of the Kalman filter are leveraged in the development of two algorithms for selecting highly informative portions of the information matrix for transmission. One algorithm, a fully polynomial time approximation scheme, provides provably good results in computationally tractable time for problem instances of a particular structure. The other, a heuristic method applicable to instances of arbitrary matrix structure, performs very well in simulation for randomly-generated problems of realistic dimension.
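One common way to exploit the additivity of the information form is to rank candidate contributions by their incremental mutual-information (log-determinant) gain; the greedy Python sketch below illustrates that idea with synthetic matrices and is not the thesis's approximation scheme or heuristic:

import numpy as np

# Illustrative sketch (not the thesis algorithms): greedily pick which candidate
# information contributions to transmit by ranking them by their incremental
# gain, 0.5 * (logdet(Y + Y_i) - logdet(Y)), where Y is the receiver's current
# information matrix. All matrices below are synthetic.
rng = np.random.default_rng(1)
n = 6
Y = np.eye(n)                                      # receiver's prior information matrix

def random_psd(n):
    A = rng.standard_normal((n, n)) * 0.3
    return A @ A.T

candidates = [random_psd(n) for _ in range(10)]    # candidate information contributions

def greedy_select(Y, candidates, budget):
    chosen, remaining = [], list(range(len(candidates)))
    Y = Y.copy()
    for _ in range(budget):
        gains = [0.5 * (np.linalg.slogdet(Y + candidates[i])[1]
                        - np.linalg.slogdet(Y)[1]) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        Y = Y + candidates[best]
        remaining.remove(best)
    return chosen, Y

chosen, Y_post = greedy_select(Y, candidates, budget=3)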
Real estate opportunity funds : past fund performance as an indicator of subsequent fund performance
The returns of opportunistic real estate private equity investment funds were tested for evidence of performance persistence between subsequent funds by the same manager. Tests include regression analysis, construction of contingency tables, and calculation of rank correlation coefficients. Tests were based on return data from the period 1991 to 2001 and were similar to those used to analyze performance persistence in other investment vehicles such as mutual funds and hedge funds. Results indicate that manager performance in a given fund is a significant indicator of performance in subsequent funds, but that this persistence accounts for only a limited portion of fund returns. Gross fund returns exhibit a higher degree of serial correlation than net returns. Other fund characteristics, analyzed in conjunction with previous fund performance, are not shown to be significant indicators of performance.
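The rank-correlation test mentioned above amounts to computing a Spearman coefficient between a manager's returns in consecutive funds; the sketch below uses made-up return figures:

import numpy as np
from scipy.stats import spearmanr

# Illustrative sketch (not the thesis data): test persistence by rank-correlating
# managers' returns in one fund with their returns in the subsequent fund.
prior_fund_irr = np.array([0.18, 0.09, 0.25, 0.12, 0.30, 0.05, 0.15])
next_fund_irr  = np.array([0.14, 0.11, 0.22, 0.10, 0.21, 0.07, 0.12])

rho, p_value = spearmanr(prior_fund_irr, next_fund_irr)
print(f"Spearman rank correlation = {rho:.2f}, p = {p_value:.3f}")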
Manufacture of aerospace-grade thermoset and thermoplastic composites via nanoengineered thermal processing
Aerospace manufacturers continue to rely on composite materials to make aerovehicles lighter and stronger, particularly employing carbon fiber reinforced plastics (CFRP) that combine carbon microfiber reinforcement with thermoset and thermoplastic polymer matrices. With the increasing use of such composites, energy-efficient, cost-effective methods to produce composite structures are needed. Traditional curing processes such as autoclaves and ovens rely on convective heat transfer, which has fundamental inefficiencies and several limitations, including infrastructure cost and throughput bottlenecks. Similarly, hot presses (usually used for thermoplastic matrices), which process composites through conductive heat transfer, are limited to a narrow range of part geometries. Direct Joule heating with carbon nanotube (CNT) film network heaters has shown significant promise to overcome these key manufacturing challenges of composites in the aerospace industry.
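A back-of-the-envelope sense of direct Joule heating can be had from sheet resistance and applied voltage alone; the numbers in the sketch below are assumed for illustration and are not taken from the thesis:

# Back-of-the-envelope sketch (assumed numbers): areal power delivered by a
# resistive CNT film heater driven by direct Joule heating.
sheet_resistance = 20.0        # ohms per square (assumed)
length, width = 0.30, 0.10     # m, heater dimensions along and across the current path
voltage = 50.0                 # V applied across the length

resistance = sheet_resistance * length / width     # ohms
power = voltage ** 2 / resistance                  # W dissipated in the film
power_per_area = power / (length * width)          # W/m^2 available for curing
print(f"{power:.0f} W total, {power_per_area / 1000:.1f} kW/m^2")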
Lessons learned from a statewide housing-first policy for homeless families
Massachusetts is the only state in the US to maintain an emergency shelter entitlement for homeless families with its own dedicated line item in the state budget. However, in the last decade that line item has increased by 134%. In fiscal year 2011, Massachusetts spent more money on homeless services while at the same time serving more families through the Emergency Assistance (EA) system than ever before. In an attempt to rein in the cost and volume of participants in the system, the state underwent a major reform that culminated in the launch of a new program, HomeBASE, on August 1, 2011. The program adopted a housing-first approach to serving families at imminent risk of homelessness that offered financial assistance for families to secure their own housing unit rather than entering an emergency shelter. This thesis looks at the implications of the housing-first policy shift and determines whether the program was able to achieve its intended goals: to reduce the cost and volume of the EA system. I find that the costs associated with offering 12 months of rental assistance are less than half of the average cost of serving a family in EA shelter. However, the savings are partially offset by the increase in demand for assistance when offering a housing subsidy instead of emergency shelter. To understand the reasons for the increased demand, I compare families enrolled in HomeBASE to EA shelter families from previous years to determine which, if any, factors contributed to demand. I find that HomeBASE did not attract a different population of families but merely more of the same. Using this analysis, I make recommendations for how the state can modify the program using targeting tools and stabilization services to achieve its intended outcomes. These recommendations are relevant for other homeless policymakers and service providers as more and more programs adopt a housing-first approach to homelessness.
Study of the NCF project material supply chain
The Naval Construction Force (NCF) performs construction projects in all areas of the world during both peacetime and war. While some of these projects occur in populated areas where project materials are readily available, many of these projects occur in remote areas or war zones, where project materials must be procured from the United States or elsewhere and shipped to the unit performing the construction. The construction scopes also vary from projects as small as concrete sidewalks to projects as large as full utility system installations, or complete facility and base construction. As a result of the diverse locations and project types that the Naval Construction Force experiences, the logistics of providing project material and construction equipment to multiple global locations is a major challenge. The Naval Construction Force still experiences delays and inefficiencies in supplying construction materials to its various projects and units deployed throughout the world, which in turn reduces the overall productivity of the deployed Construction Battalions. This research explores the current supply chain that the NCF has in place for obtaining construction project materials. It also explores the latest initiatives in information technology and construction supply chain management that are being applied in the commercial sector.
Poetic expression in architecture
A common element of twentieth century thought has been the analysis of each phenomenon to its internal logic, the reduction of everything to bare essentials. What has evolved is a notion, to some extent shared by all of us, that in a world which seems almost incomprehensible we can regain meaning by stripping away the superfluous and revealing the essential. What has been pushed aside in this mind-cleaning frenzy is that other side of human nature, the speculative, imaginative side, a side no less important than the definitive and rational one. My purpose in this thesis is to use poetry--that is, poetic verse--as a model for relearning the expression of architectural ideas in ways that will encourage people to speculate and to form imaginative connections. The study is in two parts: an essay and a design of a library for Barnard College.
Tritium thermal desorption testing of nuclear graphites irradiated at fluoride-salt-cooled high-temperature reactor conditions
The Fluoride-Salt-Cooled High-Temperature Reactor (FHR) is a next-generation nuclear plant design that combines successfully demonstrated technologies from other advanced reactor concepts such as tristructural isotropic (TRISO) coated particle fuel, molten flibe salt (LiF-BeF2) coolant, and an Air-Brayton power cycle. A prominent technical challenge for the FHR is maintaining the release of tritium generated from neutron irradiation of flibe below acceptable levels. One proposed method for partitioning tritium from the salt is through adsorption onto graphite. Demonstrating the viability of this type of tritium control system requires further experimental investigation since few studies have examined the combined effect of molten flibe, tritium, and graphite at relevant FHR temperatures. Studying tritium transport experimentally has been recently enabled by three in-core fluoride salt irradiations completed at the Massachusetts Institute of Technology Nuclear Reactor Laboratory.
White light emitting device for general illumination applications
In the 21st century, mankind faces the problems of an energy crisis through the depletion of fossil fuels and of global warming through the production of excessive greenhouse gases. Hence, there is an urgent need to look for new sources of renewable energy or ways to utilize energy more effectively. Solid state lighting (SSL) is a major area of research interest for using energy more efficiently. Early light-emitting diodes (LEDs) were originally limited to use as low-power indicator lights. Later research produced high-brightness LEDs (HB-LEDs) as well as blue LEDs, making the entire visible light spectrum, and therefore white light, attainable. As with other technologies, numerous obstacles will have to be surmounted in bringing LEDs from the laboratory to the marketplace. LEDs will also have to compete with established technologies such as incandescent and fluorescent lighting. This thesis will describe the current state of high-powered LEDs, examine the challenges faced by LEDs, and look at future markets. The potential of LEDs for general illumination will be evaluated through cost modeling and performance analysis.
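A simple version of the kind of cost modeling mentioned above compares lamps on a cost-per-light-output basis; the prices, fluxes, and lifetimes in the sketch below are illustrative assumptions only:

# Rough cost-of-ownership sketch with assumed, illustrative numbers (not figures
# from the thesis): cost per megalumen-hour of an LED lamp vs. an incandescent lamp.
def cost_per_Mlmh(lamp_price, power_w, luminous_flux_lm, lifetime_h, electricity_per_kwh):
    energy_cost = power_w / 1000.0 * lifetime_h * electricity_per_kwh
    light_Mlmh = luminous_flux_lm * lifetime_h / 1.0e6
    return (lamp_price + energy_cost) / light_Mlmh

led = cost_per_Mlmh(lamp_price=10.0, power_w=10.0, luminous_flux_lm=800.0,
                    lifetime_h=25000.0, electricity_per_kwh=0.15)
incandescent = cost_per_Mlmh(lamp_price=1.0, power_w=60.0, luminous_flux_lm=800.0,
                             lifetime_h=1000.0, electricity_per_kwh=0.15)
print(f"LED: ${led:.2f}/Mlm-h, incandescent: ${incandescent:.2f}/Mlm-h")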
Safety analysis of a compact integral small light water reactor
Small modular reactors (SMRs) hold great promise in meeting a diverse market while reducing the risk of delays during nuclear construction compared to large gigawatt-sized reactors. However, due to their lack of economy of scale, their capital cost needs to be reduced. Increasing the compactness, or power density, of the nuclear island is one way to reduce capital cost. This work first performs transient analyses of a compact integral small light water reactor to examine its safety performance. Subsequently, a parametric optimization study is performed with the goal of increasing its power density (i.e., improving its market competitiveness) while maintaining safety. A model of the reactor is established using RELAP5/3.3gl, with reference to the features of the Nuward SMR. Nuward is a compact 170 MWe Pressurized Water Reactor whose key features include the use of Compact Steam Generators and a large water tank in which the containment is submerged for passive heat removal.
High frequency power conversion architecture for grid interface
With the present ac-voltage distribution system, ac-dc converters are key components for driving many dc-voltage applications from the ac grid. Many electronic devices natively operate from dc voltage, including light-emitting diodes (LEDs), personal and laptop computers, and smartphones; for all of them there is a drive to increase functionality while reducing volume. The desire for further miniaturization, however, is constrained by the performance requirements on power electronic circuits. In this thesis, a design technique for high-performance ac-dc power converters will be presented. A new grid-interface ac-dc conversion architecture and associated circuit implementations are proposed along with novel control methods. This approach simultaneously addresses design challenges associated with high performance (e.g., high efficiency, high power factor, miniaturization, and high reliability/lifetime) of ac-dc power conversion systems. The proposed architecture is suitable for realizing ac-dc converters that switch in the HF range (3-30 MHz) with relatively low-voltage components and with zero-voltage switching (ZVS) conditions, enabling significant converter size reduction while maintaining high efficiency. Moreover, the proposed approach can achieve a reasonably high power factor of about 0.9 while dynamically buffering twice-line-frequency energy using small capacitors operating with large voltage swings over the ac line cycle. The ac-dc converter design shows that excellent combinations of power density, efficiency, and power factor can be realized with this approach.
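The power factor figure quoted above is defined as real power divided by apparent power; the sketch below computes it from sampled voltage and current waveforms that are assumed for illustration, not measured from the proposed converter:

import numpy as np

# Illustrative sketch (assumed waveforms): real power, apparent power, and power
# factor from sampled line voltage and input current over ten line cycles.
fs, f_line = 100_000.0, 60.0
t = np.arange(0.0, 10.0 / f_line, 1.0 / fs)
v = 170.0 * np.sin(2 * np.pi * f_line * t)                        # line voltage (V)
# toy current: mostly in phase, with a small 3rd-harmonic component
i = 2.0 * np.sin(2 * np.pi * f_line * t - 0.2) + 0.4 * np.sin(2 * np.pi * 3 * f_line * t)

p_real = np.mean(v * i)                                           # average real power (W)
s_apparent = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))      # Vrms * Irms (VA)
power_factor = p_real / s_apparent
print(f"PF = {power_factor:.2f}")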
Reusing a robot's behavioral mechanisms to model and manipulate human mental states
In a task domain characterized by physical actions and where information has value, competing teams gain advantage by spying on and deceiving an opposing team, while cooperating teammates can help the team by secretly communicating new information. For a robot to thrive in this environment, it must be able to perform actions in a manner that deceives opposing agents as well as to secretly communicate with friendly agents. It must further be able to extract information from observing the actions of other agents. The goal of this research is to expand on current human-robot interaction by creating a robot that can operate in the above scenario. To enable these behaviors, an architecture is created which provides the robot with mechanisms to work with hidden human mental states. The robot attempts to infer these hidden states from observable factors and use them to better understand and predict behavior. It also takes steps to alter them in order to change the future behavior of the other agent. It utilizes the knowledge that the human is performing analogous inferences about the robot's own internal states to predict the effect of its actions on the human's knowledge and perceptions of the robot. The research focuses on the implicit communication that is made possible by two embodied agents interacting in a shared space through nonverbal interaction. While the processes used by a robot differ significantly from the cognitive mechanisms employed by humans, each faces the similar challenge of completing the loop from sensing to acting. This architecture employs a self-as-simulator strategy, reusing the robot's behavioral mechanisms to model aspects of the human's mental states. This reuse allows the robot to model human actions and the mental states behind them using the grammar of its own representations and actions.
A qualitative and quantitative study of the distribution of pelagic sediment in the Atlantic Basin
Pelagic sedimentation is the primary modifier of topography generated by ridge-associated volcanic and tectonic processes. This thesis represents an effort to understand the processes of, and the general distribution of, pelagic sedimentation on rough topography, particularly in the Atlantic Basin but with applications to the world ocean as a whole. This study utilizes a simple numerical model of sedimentation which, when applied to models of rough basement topography, allows us to study sedimentation effects in terms of commonly measured stochastic parameters including seafloor RMS height, abyssal hill spacing, and slope distribution. We also address the effect of sediment compaction on seafloor morphology, and the impact of long-wavelength topography on stochastic measures of sedimented seafloor. The understanding gained allows the construction of inverse problems to obtain information about sediment distribution and basement morphology from multibeam bathymetric data in regimes where backscatter from rough, reflective basement highs obscures returns from wide-beam seismic systems. By using maximum likelihood estimation to compare slope distribution functions calculated from data to those from filtered model topographies, we estimate average sediment thickness L, basement RMS height H, and a measure of sediment mobility K. Using data from near-ridge surveys and off-axis transit lines, we invert for L, H, and K for 3-29 Ma seafloor from the western flank of the Mid-Atlantic Ridge (MAR) near 26° N, 2-45 Ma seafloor from the western flank of the MAR near 26° S, 2-40 Ma seafloor from the eastern flank of the MAR near 25° S, and 1-38 Ma seafloor from the western flank of the MAR near 35° S. Variations in L with seafloor age allow us to constrain sediment rain rate and the corrosivity of bottom waters to calcite since the Oligocene. We hypothesize that sediment rain rates during much of the early and middle Miocene were only 10-50% of the average rate for the past ~10 m.y. Variations in H suggest a correlation between tectonic setting and topographic variability. A relatively narrow range of K is needed to describe intrahill sedimentation patterns.
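The maximum likelihood comparison described above can be illustrated generically: evaluate the log-likelihood of observed slopes under a family of candidate model slope distributions and keep the best fit. The Python sketch below uses a simple Gaussian family and synthetic slopes, not the thesis's filtered model topographies:

import numpy as np

# Generic illustration of the estimation strategy (not the thesis model): given
# observed seafloor slopes, evaluate the log-likelihood of candidate model slope
# distributions over a parameter grid and keep the maximum-likelihood candidate.
# Here the "model" family is simply a zero-mean Gaussian whose width stands in
# for the combined effect of basement roughness and sediment cover.
rng = np.random.default_rng(2)
observed_slopes = rng.normal(0.0, 0.05, size=5000)         # placeholder data (radians)

def gaussian_loglik(slopes, sigma):
    return np.sum(-0.5 * (slopes / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

sigmas = np.linspace(0.01, 0.15, 141)                      # candidate model widths
loglik = np.array([gaussian_loglik(observed_slopes, s) for s in sigmas])
best_sigma = sigmas[np.argmax(loglik)]                     # ML estimate (~0.05 here)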