New extension to Systems-Theoretic Process Analysis
From space shuttles to airplanes to everyday automobiles, today's systems are increasingly complex, and increasingly connected. To ensure that this growing complexity does not simply bring a growing number of accidents, new safety analysis tools are needed. Systems-Theoretic Accident Model and Processes (STAMP) is a new accident causality model developed by Nancy Leveson at the Massachusetts Institute of Technology. This model has inspired several new methods, from accident analyses like Causal Analysis based on STAMP (CAST) to hazard analyses like Systems-Theoretic Process Analysis (STPA). Unlike traditional methods, which are based on chain-of-events causality models and generally identify only component failures, STPA can be used to identify design flaws, component interactions, and human factors that contribute to accidents. Though STPA takes a more thoughtful approach to human error than traditional methods, requiring analysts to consider how system conditions may lead to "errors," it does not provide extensive guidance for understanding why humans behave the way they do. Prior efforts have been made to add such guidance to STPA, but there has yet to emerge a widely accepted, easy-to-use method for examining human behavior using STPA. The goal of this work is to propose a new method for examining the role of humans in complex automated systems using STPA. This method, called STPA-Engineering for Humans, provides guidance for identifying causal scenarios related to interactions between humans and automation and for understanding why unsafe behaviors may appear appropriate in the operational context. The Engineering for Humans method integrates prior research on STPA and human factors into a new model intended for industry applications. Importantly, this model provides a framework for dialogue between human factors experts and other engineers. In this thesis, the Engineering for Humans method is applied to a case study of an automated driving system called Automated Parking Assist. Four implementations of this system at different levels of automation are examined. Finally, it is demonstrated that STPA-Engineering for Humans can be used to compare how multiple system designs would affect the safety of the system with respect to the behavior of the human operator.
A study of firms' behavior in the B2B e-business regime
The economic essence of Internet-based B2B business became an ever more important market concern after the dot-com mania collapsed in early 2001. Many theories have been developed to understand this new business pattern; nevertheless, many puzzles remain unsolved. It is still debated whether B2B e-business is a temporary phenomenon or merely an extension of the old VAN-EDI system. This research tries to answer some of the most fundamental questions of why and how companies adopt e-business applications by studying the behavior of e-business fast movers in three domains: firms' incentives to adopt e-business, the business models and strategies developed to leverage Internet-based network systems, and the barriers to implementing e-business practice. (1) Incentives for firms to adopt B2B e-business: the improvement of economic efficiency is used to measure firms' incentives to adopt e-business. Internet-based business tends to reduce production and distribution costs and to increase market transparency. It is argued that benefits from lowered costs are offset by buyers' higher bargaining power. Nevertheless, the study shows that market power is critical: as advanced computational capacity improves firms' ability to detect buyer behavior, firms with larger market power have access to better-quality data and gain a substantial edge over smaller competitors. (2) The business models and strategies developed by firms to leverage e-business: existing large firms pay their suppliers to link to their systems in order to capture reduced production costs; they can also increase revenue by improving IT-based marketing and service quality. Small firms' strategy is to link their systems with large firms' interfaces to gain competitive advantage over rivals. Start-ups' strategy has been to reinforce network externalities to gain market share, as markups are thin; the new trend for start-ups will be to differentiate their functionality and create new value-added services for production firms. (3) The barriers for firms to adopt e-business: at the industry level, the major barriers include a fragmented market structure and unstandardized products and production processes. At the firm level, the major barriers include organizational and cultural restructuring, interoperability between e-business applications and legacy systems, lack of qualified personnel and knowledge, and interoperability with complementary companies.
Multiscale probing of colloidal gelation dynamics
Colloidal gels are viscoelastic materials characterized by the collective behavior of particles that form a space-spanning network. Although the network structure embodies the aggregation process of the particles, the kinetic pathway from a stable suspension to such a complex microstructure remains poorly understood. In this work, we explore the evolution of the microscopic structure and dynamics of home-made colloidal particles in the early phase of gelation by extending the applicability of Differential Dynamic Microscopy (DDM) to non-ergodic media. We demonstrate that structure and dynamics develop in an uncoupled manner, revealing an intermediate stage of gel formation, and we compare the DDM results with the rheological features of the evolving gels. We finally show how understanding gelation at multiple length and time scales via DDM and rheology opens new ways to tune the mechanical properties of these inherently versatile colloidal gels.
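DDM extracts dynamics from the azimuthally averaged power spectrum of frame differences. As a rough illustration of the core computation (not the authors' implementation; the frame stack shape, lags, and binning below are assumed), the image structure function D(q, τ) can be sketched as:

```python
# Minimal DDM sketch: assumes `frames` is a float array of grayscale images
# with shape (T, N, N) and `lags` contains integer lags >= 1.
import numpy as np

def ddm_structure_function(frames, lags):
    """Image structure function D(q, tau), azimuthally averaged over |q|."""
    T, N, _ = frames.shape
    qs = np.fft.fftfreq(N)
    qx, qy = np.meshgrid(qs, qs)
    qmag = np.hypot(qx, qy)
    bins = np.linspace(0.0, qmag.max(), N // 2)
    idx = np.digitize(qmag.ravel(), bins)
    counts = np.maximum(np.bincount(idx), 1)
    D = []
    for tau in lags:
        # Mean squared Fourier amplitude of frame differences at lag tau
        diffs = frames[tau:] - frames[:-tau]
        power = np.mean(np.abs(np.fft.fft2(diffs)) ** 2, axis=0)
        # Azimuthal average over wavevector magnitude
        D.append(np.bincount(idx, power.ravel()) / counts)
    # Each row D(q, tau) is then typically fit to A(q)[1 - f(q, tau)] + B(q)
    return np.array(D)
```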
3 essays on social networks and entrepreneurship
This thesis explores, in three essays, whether, how, and why social relationships have a bearing on outcomes in the entrepreneurial process. The first essay attempts to determine which mechanism drives the children of business owners to expect to become, and actually to enter, business ownership themselves. Results are inconsistent with arguments asserting that the intergenerational correlation results from the transmission of human capital or financial capital, the expectation of inheriting a business, a heightened awareness of the viability of business ownership, or preferences for having lots of money. Findings are consistent with the notion that the intergenerational correlation in business ownership results from shared preferences/traits such as overconfidence. Social capital is a multifaceted concept. However, a disproportionate share of network research has been dedicated to the theorization and transmission of one form of social capital: information. Indeed, network structure is generally considered a proxy for information flow. This assumption is often reasonable. In important contexts of social and economic interest, however, it can be misleading. This essay draws attention to the specific "substances of advantage" that flow to different types of actors across varied dyadic ties. Two concepts, (non)rivalry and (non)excludability, are introduced to explain why certain substances of advantage are (or are not) transmitted across different types of dyadic ties to actors with distinct categorical characteristics.
A dual/high-voltage automotive electrical power system with superior transient performance
Today's automotive electrical power system is based on an engine-driven alternator regulated at 14 V, which charges a 12 V battery and delivers power to the loads. Installed electrical power is growing rapidly with model year, and future electrical power requirements are expected to exceed the capability of the present 14 V system by about 2005. A high/dual-voltage electrical system is necessary to meet these requirements. A new alternator system which substantially improves the present Lundell alternator design is proposed. This new system can double the output power and greatly improve efficiency without the need to rewind the machine. Inherent load-dump transient suppression and jump-start charging features are also achieved. The attributes of this novel system have been experimentally demonstrated. The main goal of this thesis is to investigate and develop analytical system and subsystem models for dual/high-voltage automotive electrical systems. Detailed time-domain and simplified averaged models for synchronous machines and three-phase rectifiers with constant-voltage loads are developed. A load-matching technique based on a switched-mode rectifier is introduced and used in conjunction with the developed machine/rectifier models to design the new high-power, high-efficiency alternator system. Analytical models for two dual-voltage systems, one based on interleaved dc/dc converters and the other on a dual-stator Lundell alternator, are developed and used to investigate their characteristics. The analytical models for the dc/dc converter system are used in the design of a prototype which experimentally demonstrates the high performance features of the system while verifying the analytical results. A comparison of the characteristics of dual/high-voltage architectures is presented, and it is shown that the load-matching technique can be used in a number of dual-voltage systems to improve performance. An attractive complete dual-voltage system, which incorporates the new alternator and a dc/dc converter, is introduced.
Economic modeling of intermittency in wind power generation
The electricity sector is a major source of the carbon dioxide emissions that contribute to global climate change. Over the past decade wind energy has steadily emerged as a potential source of large-scale, low-carbon energy. As wind power generation increases around the world, there is growing interest in the impacts of adding intermittent power to the electricity grid and the potential costs of compensating for the intermittency. The goal of this thesis research is to assess the costs and potential of wind power as a greenhouse gas abatement option for electricity generation. Qualitative and quantitative analysis methods are used to evaluate the challenges involved in integrating intermittent generation into the electricity sector. A computable general equilibrium model was developed to explicitly account for the impacts of increasing wind penetration on the capacity value given to wind. The model also accounts for the impacts of wind quality and geographic diversity on electricity generation, and the impacts of learning-by-doing on the total cost of production. We find that the rising costs associated with intermittency will limit the ability of wind to take a large share of the electricity market. As wind penetration increases, a greater cost is imposed on the wind generator in order to compensate for the intermittency impacts, making energy from wind more expensive overall. Because the model explicitly accounts for the impacts of intermittency, the decision to add wind power to the grid is based on the marginal cost of adding additional intermittent sources to the system in addition to the cost of generating wind energy.
Inbound supply chain optimization with ship-mode variation in a fixed-capacity fulfillment center
Amazon's sales have grown substantially in each of the past two years. In order to scale with expected continued sales growth, Amazon has been investing heavily in its inbound supply chain, where product is received and allocated to various nodes, with cross-dock facilities, Amazon Robotics fulfillment centers, and traditional fulfillment centers constituting a multi-echelon distribution network. In an Amazon Robotics fulfillment center, robotic drives retrieve and deliver portable inventory pods, and product is stowed and picked at fixed stations. Currently, approximately 65% of associate hours within the inbound department are utilized in the direct process of stow, while the other 35% are utilized in support of the stow process, in tasks such as corrugate removal and product container management. As a result, there is a continued emphasis on improving the efficiency of the non-value-added tasks performed in support of the stow process, in order to devote as many hours as possible to the value-added stow process. This thesis proposes a linear optimization-based analysis framework and capital allocation model that can be utilized to determine the investment viability of different automation systems and process improvements, which could improve efficiency and reduce overall cost in Amazon Robotics fulfillment centers. This is especially the case within fulfillment centers that are labor constrained. Labor constraints within a fulfillment center result in artificial limits set within Amazon's inventory placement algorithm, changing the origin of product shipments to customers, which results in additional outbound transportation cost. This study uncovers unrealized cost-improvement areas by suggesting an inbound conveyance solution that can improve upon the current human-powered inbound system, and provides further areas of investigation for additional improvement. Implementation of the selected automation solution reduces inbound department hours by approximately 3% with a payback period of approximately 0.93 years for the fulfillment center in question, improves labor-constrained fulfillment center capacity by as much as 1%, and suggests further areas of investigation that could improve overall cost within the inbound supply chain by over 10%.
Optimal building heights around Bogotá's first subway line
The need for a more efficient and clean mode of transportation, combined with the possibility of increasing real estate supply in Bogotá, makes the first subway line a remarkable opportunity to redefine the city's future. To seize that opportunity, it is fundamental to understand the potential market demand in the area. Historically, the market has not been taken into account when defining land use regulations, which is why incorporating market factors would contribute to a more appropriate land use policy. The purpose of the present research is to develop a model to estimate optimal building heights that can help in calculating optimal market densities in the first subway line catchment area. Estimating potential densities would be useful not only for making better land use and regulation policy, but also for finding real estate opportunities in the subway corridor.
Essays on trades and security prices
This thesis consists of three chapters that investigate the complex relation between security prices and the trades of market participants. In the first chapter, I study the evolution of stock prices after trades with different underlying motives, using a novel data set of portfolio transitions. Institutional specifics allow me to identify portfolio transition purchases and sales as most likely induced by information-related and liquidity-related factors, respectively. I find that purchases permanently shift stock prices to new levels; moreover, these price changes are more significant after large trades, trades in stocks with a high degree of information asymmetry, and trades that reflect new rather than stale information. At the same time, sales trigger only temporary price pressure effects that are reversed in the following weeks. Thus, my findings provide supporting evidence for a long-standing tenet of market microstructure stating that information-motivated and liquidity-motivated transactions generate different price dynamics. In the second chapter, I analyze the price dynamics in response to trades in more detail; in particular, I focus on the properties of price impact. I explore the following questions: (1) how the price impact coefficients relate to various stock characteristics and differ across trading venues; (2) how they evolve during execution of multi-trade "packages"; and (3) what functional form best describes price impact functions. For most of these questions, an extensive theoretical literature provides interesting insights. Using a unique data set of portfolio transition trades, I document a number of empirical facts about price impact, some of which cannot be easily explained by existing models.
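To illustrate the kind of functional-form question addressed in the second chapter, one common candidate is a power-law (e.g., square-root) price impact. The following sketch fits such a form to signed trade sizes; the data and variable names are placeholders, not the portfolio-transition data:

```python
# Hypothetical sketch: fit dp = lam * sign(Q) * |Q|^delta to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def impact(Q, lam, delta):
    return lam * np.sign(Q) * np.abs(Q) ** delta

# Q: signed trade sizes (e.g., shares / average daily volume); dp: price responses
rng = np.random.default_rng(0)
Q = rng.normal(size=1000)
dp = impact(Q, 0.1, 0.5) + 0.01 * rng.normal(size=1000)

(lam_hat, delta_hat), _ = curve_fit(impact, Q, dp, p0=[0.1, 0.5])
print(f"lambda={lam_hat:.3f}, delta={delta_hat:.3f}")  # delta near 0.5: square-root impact
```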
Prediction and analysis of degree of suicidal ideation in online content
Machine learning (ML) has increasingly been used to address the growing burden of mental illness and the lack of access to quality mental health care. Recently such models have been applied to online data, such as social media postings, to augment mental health screening. Despite the potential of these methods, online ML classifiers still perform poorly in multi-class settings. In this thesis, we propose the use of novel document embeddings and mental-health-based user embeddings for triaged suicide risk screening. Machine learning to infer suicide risk and urgency is applied to a dataset of Reddit users in which the risk and urgency labels were derived from crowdsourced consensus. We show that the document embedding approach outperforms count-based baselines and a method based on word importance, where important words were identified by domain experts. We examine interpretable features and methods that help to discern and explain risk labels. Finally, we find, using a Latent Dirichlet Allocation (LDA) topic model, that users labeled at-risk for suicide post about different topics than non-suicidal users.
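A minimal sketch of the LDA topic comparison, using scikit-learn with a placeholder corpus and illustrative parameters (the thesis's actual pipeline and hyperparameters may differ):

```python
# Sketch: fit an LDA topic model and compare topic mixtures across cohorts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["feeling hopeless and alone lately",
         "started a new workout plan today",
         "cannot sleep and everything feels heavy",
         "great game last night worth watching"]   # placeholder corpus

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # per-document topic mixtures
# Comparing mean topic mixtures between the at-risk cohort and other users
# reveals which topics distinguish the two groups.
```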
An inverse approach to understanding benthic oxygen isotope records from the last deglaciation
Observations suggest that during the last deglaciation (roughly 20,000-10,000 years ago) the Earth warmed substantially, global sea level rose approximately 100 meters in response to melting ice sheets and glaciers, and atmospheric concentrations of carbon dioxide increased. This interval may provide an analog for the evolution of future climate. The ocean plays a key role in the modern climate system by storing and transporting heat, salt, and nutrients, but its role during the last deglaciation remains uncertain. Prominent signals of the last deglaciation in the ocean are a gradual warming and a decrease of the seawater oxygen isotope ratio δ¹⁸O (a signature of melting land ice sheets). These changes do not occur uniformly in the ocean, but propagate like plumes of dye over hundreds to thousands of years, the aggregate result of turbulent advective and diffusive processes. Information about changing temperatures and oxygen isotopes is stored in the shells of benthic organisms recovered in ocean sediment cores. This thesis develops and applies an inverse framework for understanding deglacial oxygen isotope records derived from sediment cores in terms of the Green functions of ocean tracer transport and ocean mixed layer boundary conditions. Singular value decomposition is used to find a solution for global mixed layer tracer concentration histories that is constrained by eight last-deglacial sediment core records and a model of the modern ocean tracer transport. The solution reflects the resolving power of the data, which is highest at model surface locations associated with large rates of volume flux into the deep ocean. The limited data resolution is quantified and rationalized through analyses of simple models. The destruction of information contained in tracers is a generic feature of advective-diffusive systems. Quantifying the limitations of tracer records is important for making and understanding inferences about the long-term evolution of the ocean.
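The SVD machinery can be illustrated in a few lines: given a linearized observation model y = Ex relating mixed-layer histories x to core observations y (with E built from the transport Green functions), a truncated SVD yields both an estimate and its resolution. A sketch, with illustrative names and an assumed rank cutoff:

```python
# Sketch of truncated-SVD inversion for an ill-posed observation model y = E x.
import numpy as np

def truncated_svd_solve(E, y, rank):
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    # Keep only the `rank` best-resolved singular vectors; discarding the rest
    # regularizes the ill-posed problem at the cost of resolution.
    s_inv = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)
    x_hat = Vt.T @ (s_inv * (U.T @ y))
    # Model resolution matrix: how the estimate smears the true histories.
    resolution = Vt.T[:, :rank] @ Vt[:rank, :]
    return x_hat, resolution
```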
Comparison of EC-Kit with Quanti-Tray : testing, verification, and drinking water quality mapping in Capiz Province, Philippines
This thesis accomplishes three tasks. First, it verifies the EC-Kit under different water source conditions by comparing it to a laboratory standard method, the IDEXX Quanti-Tray[tm]. The EC-Kit is a simple, inexpensive field test kit that contains complementary tests for Escherichia coli and total coliform: the Colilert[tm] 10-milliliter presence/absence test and the 3M[tm] Petrifilm[tm] test. This work was executed by analyzing 521 water samples collected in Capiz Province, Philippines as well as 40 water samples from the Charles River in Cambridge, Massachusetts. Second, it determines the risk level of drinking water sources according to E. coli and total coliform levels in Capiz Province for different locations and source types. Third, this study contributes to an ongoing mapping project aimed at creating an interactive, searchable map of water quality results from EC-Kit and Quanti-Tray[tm]. The results of the study reveal that each component of the EC-Kit, and the kit as a whole, is correlated with Quanti-Tray[tm] in a statistically significant way. Moreover, from the calculations of error and proportional reduction in error for unimproved/improved water sources, it is possible to make better predictions with just the Colilert[tm] test, but not with just the Petrifilm[tm] test. This is because the detection limits for Petrifilm[tm] are an order of magnitude higher than for Colilert[tm]: Petrifilm[tm] colony counts of 1-10 per 1 mL sample fall within the High risk level category and colony counts of 10-100 per 1 mL fall within the Very High risk level category, whereas positive Colilert[tm] results fall within the Intermediate, High, and Very High risk level categories. Most importantly, the EC-Kit allows for the best reduction in error, with a proportional reduction in error of 63% for unimproved water sources and 60% for improved water sources. This finding is significant because it means that a simple, inexpensive field kit can change our understanding of the safety of drinking water compared to simply knowing the United Nations infrastructure designation of improved versus unimproved water sources. Furthermore, the statistical analysis revealed that while the EC-Kit does not exactly match the Quanti-Tray[tm] results, it still provides useful information for assessing at-risk water sources.
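For reference, the proportional reduction in error (PRE) quoted above compares prediction errors with and without the test information; a minimal sketch of the statistic (the error counts are placeholders, not the study's values):

```python
# PRE = (E1 - E2) / E1, where E1 is the prediction error using only the
# improved/unimproved designation and E2 the error once the test result is added.
def proportional_reduction_in_error(e1: float, e2: float) -> float:
    return (e1 - e2) / e1

# e.g., the reported 63% for unimproved sources corresponds to
# proportional_reduction_in_error(E1, E2) == 0.63 for that subset's error counts.
```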
An energy buffer for constant power loads
Constant power loads (CPLs) are a class of loads steadily increasing in use. They are present whenever a load is regulated to maintain constant output power, such as with LED drivers in high quality lighting that is impervious to input fluctuations. Because CPLs exhibit a negative incremental input impedance, they pose stability concerns in DC and AC systems. This thesis presents a power converter for a constant power LED bulb that presents a favorable input impedance to the grid. The use of an energy buffer allows the converter to draw variable power in order to resemble a resistive load, while the output consumes constant power. A switched-mode power supply consisting of a cascaded boost and buck converter accomplishes this by storing energy in the boost stage output capacitor. Experimental results demonstrate that the converter exhibits a resistive input impedance at frequencies over 0.5 Hz while maintaining constant power to the LED load.
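The negative incremental input impedance follows directly from the constant-power constraint: with the input power vi held at P,

$$ i = \frac{P}{v} \;\;\Rightarrow\;\; \frac{di}{dv} = -\frac{P}{v^{2}} \;\;\Rightarrow\;\; r_{\mathrm{inc}} = \frac{dv}{di} = -\frac{v^{2}}{P} < 0, $$

so a small rise in input voltage produces a drop in input current, which is what destabilizes lightly damped input filters.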
Harmonization of aviation user charges in the North Atlantic airspace
The purpose of this thesis is to explore various harmonization scenarios for North Atlantic en route user charges. The current charging system involves eight countries, each with its own method for computing user charges. The scope of the research is limited to revenue-neutral approaches for service providers, meaning each air navigation service provider (ANSP) receives constant total charges in 2006. Therefore, the viability of different scenarios is compared in terms of their impact on airspace users. Two different interpretations of a "harmonized" system are considered. The first explores harmonization of only the charging methodology, allowing service providers to set and collect their own charges. The second fully harmonizes North Atlantic user charges, resulting in a single charge per flight. Within each of these alternatives, four different charge scenarios were modeled using 2006 data: a flat charge, a distance-based rate, a combined weight-and-distance charge, and a fixed-plus-variable charge. Utilizing 47,516 North Atlantic flights drawn from a systematic random sampling of days in 2006, the average North Atlantic user charge was determined to be $393, with individual charges ranging from less than $1 to $3,868. The magnitude of the average North Atlantic user charge is small relative to the total flight costs airlines incur, so all harmonization approaches will have only second-order effects on the airlines' bottom line. The harmonization of the region's user charges therefore offers a unique opportunity to develop a more rational system of charges without large disruptions to the majority of users. The thesis explores the impact of the various charge scenarios on user stakeholder groups in terms of aircraft size, North Atlantic distance, and origin-destination regions.
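To make the four scenarios concrete, the sketch below computes each charge form for a single flight; the rates and the Eurocontrol-style weight factor are illustrative assumptions, not the thesis's calibrated values:

```python
# Illustrative forms of the four charge scenarios; all rates are placeholders.
import math

def flat_charge(rate=393.0):
    return rate                                    # same charge for every flight

def distance_charge(dist_km, rate_per_km=0.10):
    return rate_per_km * dist_km

def weight_distance_charge(dist_km, mtow_tonnes, unit_rate=50.0):
    # Eurocontrol-style form: distance factor times sqrt(MTOW / 50 tonnes)
    return unit_rate * (dist_km / 100.0) * math.sqrt(mtow_tonnes / 50.0)

def fixed_plus_variable_charge(dist_km, fixed=100.0, rate_per_km=0.05):
    return fixed + rate_per_km * dist_km
```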
Design and application of a genetically-encoded probe for peroxiredoxin-2 oxidation in human cells
Hydrogen peroxide (H₂O₂) is a well-known oxidant species commonly produced in eukaryotic organisms as a result of cellular metabolism; it plays a central role in numerous cellular processes, and its dysregulation can result in a number of different disease states in human cells. In the case of cancer, elevated metabolism is believed to result in higher rates of H₂O₂ production in these cells, as well as greater susceptibility to H₂O₂-induced apoptosis than in normal cells. To this end, researchers have identified several therapeutic compounds that are believed to kill cancer cells via the intracellular elevation of one or more oxidants. However, due to the limitations of current tools for detecting these species, little is known about which therapeutic compounds induce toxicity via elevation of specific oxidants, knowledge that would aid in identifying tumors susceptible to these treatments.
p53 nuclear localization control, and p53-dependent regulation of DNA repair gene transcripts
The experiments presented in this thesis use mutation analysis, and study of the cells of mice carrying a deletion allele of the Trp53 gene, to explore both the regulation of p53 and its downstream functions mediated by specific activation of target genes. Chapter 2 addresses the regulation of nuclear localization of the p53 protein. Previous reports in the literature had suggested that the p53 negative regulator HDM2 is a nucleocytoplasmic shuttling protein that binds and carries p53 from the nucleus of the cell to the cytoplasm, where it is destroyed by the proteasome. We determined that HDM2 with a mutated nuclear export sequence was still able to alter p53's cellular localization to a cytoplasmic pattern. The nuclear export sequence in the p53 C-terminus was required for this activity, as was the ability of HDM2 to ubiquitinate p53. Further studies indicated that ubiquitination of the p53 C-terminus is the basis for HDM2's ability to remove p53 from the nucleus and cause its efficient degradation: C-terminal ubiquitination causes the p53 nuclear export sequence to be activated, or made more accessible to the nuclear export machinery of the cell. Chapter 3 summarizes cDNA microarray experiments in which Trp53-/- and Trp53+/+ fibroblasts were treated with a panel of genotoxic agents and assayed for p53-dependent upregulation or downregulation of the approximately 15,000 gene sequences represented on the microarray. New candidate p53 target genes were revealed, among them the DNA repair gene Ercc5, which encodes the xeroderma pigmentosum disease gene homolog Xpg, a participant in nucleotide excision repair and a mediator of base excision repair of oxidative DNA damage.
CAST analysis of the Macondo accident
On April 20, 2010, an explosion on the rig Deepwater Horizon, which was performing drilling operations on the Macondo Prospect well in the Gulf of Mexico, led to the largest oil spill in the history of the petroleum industry. Eleven crew members lost their lives, and around 4.9 million barrels of oil were discharged into the ocean before the continuous subsea blowout of the well was contained on September 19, 2010. Given the magnitude and complexity of the accident, several safety analyses have been proposed by the international community at different levels of the system involved in the accident. Most of these studies use accident analysis techniques based on chain-of-events models, whose main objective is to identify root causes. However, while this approach describes physical phenomena accurately, it does not explain the role of organizational and socio-economic factors, human decisions, or design inaccuracies in accidents in complex, adaptive, and tightly coupled systems like Macondo. In response to this need, Nancy Leveson developed the accident analysis technique Causal Analysis based on System Theory (CAST), built on her model System-Theoretic Accident Model and Processes (STAMP). In STAMP, accidents are treated not as chains of failure events but as complex processes that result from a large variety of causes, including component failures and faults, system design errors, unintended and unplanned interactions among system components, human operator errors, flawed management decision-making, inadequate controls and oversight, and poor safety culture. This thesis presents management recommendations based on a CAST analysis of the Macondo accident. The goal is to help the oil and gas offshore drilling community achieve safer operations and understand the value of systems safety in achieving organizational goals.
Molecules and materials for the optical detection of explosives and toxic chemicals
Optical chemosensing, especially using amplifying fluorescent polymers, can allow for the highly sensitive and selective vapor-phase detection of both explosives and highly toxic chemicals, including chemical warfare agents. There are, however, many analyte targets that remain challenging to detect by these methods. Research toward improving this technology has obvious implications for homeland security and soldier survivability. This dissertation details the development of new molecules, materials, and transduction schemes aimed at improving both the versatility and sensitivity of optical chemical detection. Chapter One provides an introduction to the field of fluorescent polymer sensors, principally focusing on their utility in the detection of nitroaromatic explosives. Brief descriptions of other analytical methods used for explosives detection are also included. Chapter Two describes the synthesis and optical properties of a new class of conjugated polymers that contain alkyl-amino groups directly bound to the arene rings of poly(phenylene ethynylene)s and poly(fluorene)s. These materials displayed red-shifted absorption and emission spectra, large Stokes shifts, and long excited-state lifetimes.
Designing and compiling functional Java for the Fresh Breeze architecture
The Fresh Breeze architecture is a novel approach to computing that aims to support a high degree of parallelism. Rather than striving for heroic complexity in order to support exceptional single-thread performance, as in the Pentium and PowerPC processors, it focuses on using a medium level of complexity with a view to enabling exceptional parallelized performance. The design makes certain sacrifices with regard to capability in order to achieve a lower degree of complexity. These design choices have significant implications for compiling for the architecture. In particular, Fresh Breeze uses immutable, fixed-size memory chunks rather than sequential, mutable memory to store data [1][2][3]. This work demonstrates Functional Java, a subset of the Java language that one can compile to Fresh Breeze machine code with relative ease, overcoming challenges pertaining to aliased arrays and objects. It also describes work on a compiler designed for the Fresh Breeze architecture and how the work overcomes various challenges in compiling Java bytecode to data flow graphs.
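The compilation challenge follows from the memory model: a Fresh Breeze "array" is a tree of small immutable chunks, so an element update allocates new chunks along one path rather than mutating in place. A conceptual sketch of that idea (in Python for brevity; the chunk size and helper names are illustrative, not the architecture's actual encoding):

```python
# Conceptual sketch: arrays built from fixed-size immutable chunks.
CHUNK = 4  # elements per chunk; Fresh Breeze chunks are fixed-size

def make_array(values):
    """Pack values into a tuple of fixed-size chunks; tuples are immutable."""
    values = list(values)
    return tuple(tuple(values[i:i + CHUNK]) for i in range(0, len(values), CHUNK))

def get(arr, i):
    return arr[i // CHUNK][i % CHUNK]

def updated(arr, i, value):
    """Functional update: copy only the affected chunk; all others are shared."""
    c, j = divmod(i, CHUNK)
    new_chunk = arr[c][:j] + (value,) + arr[c][j + 1:]
    return arr[:c] + (new_chunk,) + arr[c + 1:]

a = make_array(range(10))
b = updated(a, 5, 99)   # a is unchanged; a and b share all but one chunk
```

This is why aliased, mutable Java arrays and objects are the hard case: every write must be compiled into chunk copying while preserving the program's observable semantics.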
Olfactory-related receptors : methods towards enabling structural and functional studies
Mammalian noses can detect and distinguish an inestimable number of odors at minute concentrations. Four classes of G protein-coupled receptors (GPCRs) are responsible for this remarkable sensitivity: olfactory receptors (ORs), vomeronasal receptors (VNRs), trace amine-associated receptors, and formyl peptide receptors. Structural knowledge of these receptors is necessary to understand the molecular basis of smell. However, no structure exists, for three main reasons. First, milligrams of protein are needed for crystallization screens, but most of these receptors are expressed at low levels endogenously or in heterologous expression systems. Second, detergents capable of solubilizing and stabilizing these proteins in aqueous solution must be found. Third, the flexible nature of GPCRs can inhibit crystal lattice formation. Methods for overcoming each obstacle were developed. Milligrams of a VNR were expressed in HEK293 cells, and milligrams of 13 GPCRs were expressed in a cell-free system. All could be purified to >90% purity. The purified receptors had correct secondary structures and could bind their ligands. The HEK293 and cell-free receptors had nearly identical structures and binding affinities, demonstrating that cell-free expression can be used for GPCR production and mutational studies. To demonstrate this, six variants of mOR103-15 with single amino acid substitutions were expressed; ligand-binding measurements indicated which residues were involved in ligand recognition. The choice of detergent used in the cell-free system was critical and significantly affected expression levels. A class of amphiphilic peptide detergents was designed and tested with the receptors. These detergents could be used to express milligrams of functional receptors. The peptide tail and head group properties did not significantly affect their function, suggesting that they may be a class of surfactants usable with multiple olfactory-related receptors, and even other membrane proteins. Lastly, the protein T4 lysozyme (T4L) was fused into the third intracellular loop of two receptors to increase potential crystal lattice contact points. Purified T4L variants had correct secondary structures, could bind their ligands, and could initiate intracellular signaling. The methods described generated sufficient quantities of pure receptors for crystal screens. The large number of functionally expressed GPCRs indicates that these techniques can be applied to other olfactory-related receptors, and even other membrane proteins.
Studying the atmosphere of HD 189733 b using the Rossiter-McLaughlin effect
Transmission spectroscopy is a widely used method for studying exoplanetary atmospheres. However, the differential data analysis techniques generally applied to high-resolution ground-based spectroscopic data are only sensitive to narrow spectral features and do not preserve broadband features. This makes them insensitive to the strong Rayleigh scattering slope of HD 189733 b that is attributed to possible atmospheric aerosols. The Rossiter-McLaughlin (RM) effect provides a way to probe broadband spectral features because its amplitude varies as a function of wavelength according to the effective planet radius. Previously, radial velocity (RV) variations caused by the RM effect were interpreted as a tentative detection (2.5σ) of the broadband scattering slope of HD 189733 b. We developed a new method that directly models the distortions in spectral lines (rather than the resulting RV variation) and applied it to the same archival HARPS data used in the previous tentative detection. Here we present this method and the ongoing work necessary to troubleshoot and fully implement it.
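The reason the RM effect preserves broadband information is a simple scaling: the velocity anomaly is roughly the stellar projected rotation speed weighted by the fraction of starlight blocked, so, as a commonly used order-of-magnitude approximation,

$$ \Delta V_{\mathrm{RM}} \sim \left(\frac{R_p(\lambda)}{R_*}\right)^{2} v \sin i_*, $$

which makes the anomaly amplitude track the wavelength-dependent effective planet radius.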
Amplitude sampling for signal representation
The theoretical basis for conventional acquisition of bandlimited signals typically relies on uniform time sampling and assumes infinite-precision amplitude values. This thesis explores signal representation and recovery based on uniform amplitude sampling, assuming either infinite-precision timing information or time restricted to a uniform grid. If time is allowed to lie on the continuum, the approach is based on a structure that reversibly transforms the input signal into a monotonic function, which is then uniformly sampled in amplitude. In effect, the source signal is implicitly represented by the times at which the monotonic function crosses a predefined set of amplitude values. We refer to this technique as amplitude sampling. This approach can alternatively be viewed as nonuniform time sampling of the original source signal, in which the resulting monotonic signal produces an associated amplitude-time function that is uniformly sampled in amplitude. The duality and frequency-domain properties of the functions involved in the transformation are derived. Reconstruction from amplitude samples is shown to be possible through iterative algorithms. If both time and amplitude are restricted to equally spaced values, then the sampling strategy, referred to as lattice sampling, simultaneously uses both uniform amplitude and uniform time sampling. A class of bandlimited signals is characterized that can be sampled and reconstructed in this manner, in order to derive spectral characteristics of quantized discrete-time signals.
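A minimal numerical sketch of the construction described above: adding a ramp steep enough to make the signal monotonic and recording the times at which the result crosses uniformly spaced amplitude levels (the test signal, slope margin, and amplitude step are illustrative):

```python
# Sketch of amplitude sampling via a monotonizing ramp.
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * 5 * t)                  # example signal

c = 1.1 * np.max(np.abs(np.gradient(x, t)))    # exceed the peak slope of x(t)
g = x + c * t                                  # strictly increasing => invertible

delta = 0.5                                    # uniform amplitude step
levels = np.arange(g[0], g[-1], delta)
crossing_times = np.interp(levels, g, t)       # g is monotonic, so interp inverts it

# The pairs (levels, crossing_times) implicitly represent x(t):
x_at_crossings = levels - c * crossing_times
```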
Assimilating hybridized architecture
The thesis searches for means of operation to deal with hybridized architecture. As a conceptual framework, sociology theory appears to be an insightful precedent, for it analyzes and classifies how multiple constituents join together. Sociologist Milton Gordon delineates three assimilation processes; these include Anglo Conformity, Cultural Pluralism, and Melting Pot. From these theories, it is suspected that the Melting Pot model has the most potential for generating unconventional program usage while being the most challenging model in reconciling pre-determined functions. The thesis uses the Melting Pot model as a means of operation to push the limits of assimilating hybridized architecture. Anglo Conformity is when an individual gives up his/her attribute to fit into the larger context. It can be represented as A + B + C = A, assuming A is the majority. Cultural Pluralism also known as the "salad bowl," is when different individuals keep their own qualities while sharing common interests. A + B + C = A + B + C. Melting Pot is when different individuals merge together by absorbing and contributing each individual's quality; out of this interaction comes a new entity.
Towards verifiable adaptive control for safety critical applications
To be implementable in safety critical applications, adaptive controllers must be shown to behave strictly according to predetermined specifications. This thesis presents two tools for verifying specifications relevant to practical direct-adaptive control systems. The first tool is derived from an asymptotic analysis of the error dynamics of a direct adaptive controller and uncertain linear plant. The analysis yields a so-called Reduced Linear Asymptotic System, which can be used for designing adaptive systems to meet transient specifications. The tool is demonstrated in two design examples from flight mechanics and verified in numerical simulation. The second tool is an algorithm for direct-adaptive control of plants with magnitude saturation constraints on multiple inputs. The algorithm is a non-trivial extension of an existing technique for single-input systems with saturation. Boundedness of all signals is proved for initial conditions in a compact region. In addition, the notion of a class of multi-dimensional saturation functions is introduced. The saturation compensation technique is demonstrated in numerical simulation. Finally, these tools are applied to design a direct-adaptive controller for a realistic multi-input aircraft model to accomplish control reconfiguration in the case of unforeseen failure, damage, or disturbances. A novel control design for incorporating control allocation and reconfiguration is introduced. The adaptive system is shown in numerical simulation to have favorable transient qualities and to give a stable response under input saturation constraints.
Using behavioral analytics and machine learning to improve churn management
New trends are shaping the telecommunications, media, and technology (TMT) industries. Consumers demand to be connected anytime to hundreds of thousands of applications that are one click away. In addition, loyalty levels are decreasing, and customers do not hesitate to switch providers if they do not receive value for their money. Because of this, churn management is a key driver of profits. However, few companies excel at churn management, and most underestimate its impact. This thesis focuses on describing a technological solution targeted at improving churn management capabilities within companies in the TMT sector, and explores the opportunities and hurdles of selling this kind of solution in a B2B context. The hypothesis is that a world-class churn management solution can effectively deploy statistical models to score customers by their likelihood to churn and execute targeted treatments for each segment through the operator's service channels. The study focuses on how behavioral analytics and machine learning can increase customers' lifetime value and boost margins in TMT companies. Throughout the research, I describe best practices within the industry for establishing a state-of-the-art churn management solution.
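As a toy illustration of the scoring step, a logistic-regression churn model might look like the following; the features, threshold, and data are hypothetical placeholders, not a vendor implementation:

```python
# Sketch: score customers by churn likelihood and segment the high-risk group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))     # e.g., tenure, usage trend, complaints, spend
y = (X[:, 1] + rng.normal(size=5000) < -0.5).astype(int)   # synthetic churn flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]   # churn likelihood per customer
high_risk = scores > 0.7                   # segment for targeted retention offers
```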
Multi-channel blind system identification using the Laguerre expansion for characterization of circulatory hemodynamics
A new tool for real-time characterization of both systemic and local circulatory hemodynamics has been developed. Given two peripheral circulatory waveform measurements, this new signal-processing algorithm generates two low-order models that represent the distinct branch dynamic behavior associated with the measured circulatory signals. The framework for this methodology is based on a multi-channel blind system identification technique that has been reformulated to use a Laguerre basis function series expansion. The truncated Laguerre series expansion allows a highly compact representation of the cardiovascular dynamics. This new algorithm has been applied to experimental arterial blood pressure measurements derived from a swine model and shown to consistently provide accurate identification of the vascular hemodynamics. The parameters of the circulatory dynamics that are quantified in real time via this newly developed algorithm, Laguerre Model Blind System Identification (LaMBSI), can be used to identify or quantify systemic and local cardiovascular features of interest. The LaMBSI algorithm identifies a set of six parameters per channel when applied to measured circulatory signals: five distinct model coefficients plus one common Laguerre basis pole shared by both channels. The two sets of identified parameters can be treated as feature vectors, and standard statistical techniques can be used to extract information from this compact time series of data. In this thesis, a multi-parameter linear regression is used to predict cardiac output based on the LaMBSI feature vectors identified from two pulsatile arterial pressure signals. The promising results from this linear regression model serve as a proof of principle that the
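A sketch of a discrete-time Laguerre basis of the kind used in such expansions: the first function is the impulse response of a low-pass stage, and each subsequent function passes through one more all-pass section sharing the same pole. The pole value and lengths below are illustrative:

```python
# Sketch: impulse responses of discrete Laguerre filters with shared pole a.
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_funcs, n_samples, a):
    """Discrete Laguerre basis functions for pole a (|a| < 1)."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    # First stage: L0(z) = sqrt(1 - a^2) / (1 - a z^-1)
    y = lfilter([np.sqrt(1 - a**2)], [1.0, -a], impulse)
    basis = [y]
    for _ in range(1, n_funcs):
        # Each later stage multiplies by the all-pass factor (z^-1 - a)/(1 - a z^-1)
        y = lfilter([-a, 1.0], [1.0, -a], y)
        basis.append(y)
    return np.stack(basis)   # shape (n_funcs, n_samples)

B = laguerre_basis(5, 256, a=0.8)   # five coefficients plus one shared pole, as above
```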
Advanced filters and components for power applications
The objective of this thesis is to improve the high frequency performance of components and filters by better compensating the parasitic effects of practical components. The main application for this improvement is the design of low pass filters for power electronics, although some other applications will be presented. In switching power supplies the input and output filters must attenuate frequencies related to the fundamental switching frequency of the converter. The filters represent a major contribution to the weight, volume, and price of the power supply. Therefore, aspects of the design of the switching power converter, especially those related to the switching frequency, are limited by the high frequency performance of the filters. The usual methods of improving the high frequency performance of the filter include using larger, better components: filter performance can be improved by using higher quality inductors and capacitors or by adding high frequency capacitors in parallel with the filter capacitor. Also, an additional filter stage can be added. All of these methods add significant cost to the design of the power supply. If the effect of high-frequency parasitic elements in the components can be reduced at low cost, the performance of the filter can be enhanced. This allows the development of filters with much better high frequency attenuation, or a reduction of filter size and cost at a constant performance level. In filtering and other applications, the ability to reduce the effect of parasitic elements will enable many high-frequency designs. Specifically, this thesis will present two techniques that can be used to reduce the effects of parasitic inductance and capacitance. One technique,
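The parasitics being compensated are captured by the standard first-order model of a practical capacitor, whose impedance turns inductive above its self-resonant frequency and thereby caps the filter's high-frequency attenuation:

$$ Z_C(\omega) = R_{\mathrm{ESR}} + j\left(\omega L_{\mathrm{ESL}} - \frac{1}{\omega C}\right), \qquad \omega_0 = \frac{1}{\sqrt{L_{\mathrm{ESL}}\,C}}. $$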
Feasibility of semi-continuous solar disinfection system for developing countries at a household level
A study to assess the feasibility of a novel solar water disinfection system developed by the author, Semi-Continuous Solar Disinfection (SC-SODIS), was conducted. Three aspects of SC-SODIS feasibility were considered: technical, social, and economic. This study focused on developing countries and specifically Nepal. To address technical feasibility, field data included measurements of the performance of the prototype system under the climatological conditions found in Lumbini, Nepal during January 2003. The social and economic feasibilities were determined from preliminary feedback from local people and from calculation of construction costs using locally available materials, respectively. Results suggest SC-SODIS is a feasible technology for developing countries and specifically Lumbini, Nepal. SC-SODIS can be considered a sustainable technology: it is technically simple, effective at microbial inactivation as measured by the E. coli indicator organism, can be made from locally available materials, and is economical. Preliminary feedback from locals shows SC-SODIS is socio-culturally acceptable. Limited time did not allow study of the operation and maintenance problems that the system might present over the long term.
A study investigating copper smelting remains from San Bartolo, Chile
Introduction: Research on the metallurgy of archaeological artifacts has focused primarily on the examination of objects to reveal their design, their composition, the properties of the material people selected to achieve the design, and the fabrication processes used in managing the metal to produce the end product. Recently that focus has begun to broaden, and archaeologists are taking a step back to investigate the earliest stages of prehistoric metal processing that precede object manufacture, namely ore mining and extractive metallurgy. However, little archaeological work on mining and extraction has been accomplished to date, in part because so few metal processing sites have been identified. These sites are very difficult to find because of the lack of standing architecture, particularly smelting installations. Prehistoric smelting furnaces tend to be small and are either excavated beneath the ground surface or are above ground but made of impermanent materials.
Activity-based outdoor mobile multiplayer game
Traditional outdoor recreation is physically and emotionally rewarding, offering goal-directed social activities and encouraging a connection with the real world, but it can be logistically difficult. Online gaming allows people to play together despite physical distances and differences in time zones, and players enjoy new experiences in awe-inspiring interactive worlds, but they remain effectively inactive. This project is a physically active outdoor social game that embeds a layer of fantasy and challenge in the real world, employing location-based technologies available on mobile phones. Requiring that the game be multiplayer in real time and played in a physical space presents certain limitations in the design of input and output mechanics. This project demonstrates how those constraints were managed to create a compelling experience. An evaluation with sixteen people validated the concept, while observations and feedback suggest future improvements.
Product availability improvement for an analytical consumables supply chain : distribution and transportation
This thesis work focuses on supply chain operation optimization for column consumables at Waters Corporation, with the goal of improving product availability to customers to 95%. The project was conducted by Han, Hua, and Lee as a team, through stakeholder interviews, historical data analysis, and model simulation. Hua's thesis focuses on safety stock allocation across the stages of the column supply chain, and Lee's thesis focuses on replenishment policy for products with consistently high demand and on production policy for build-to-order products. This thesis focuses on improving the availability of products with relatively low demand and high variability. Due to the high uncertainty of geographical demand distribution and unbalanced on-hand inventory at major distribution centers around the world, pooling inventories into a single global distribution center is proposed to increase the company's capability to fulfill customer requests. A single global distribution model is designed for Waters Corporation's column products to analyze the change in product availability. Further optimization of inventory using a lot size-reorder point model with a manufacturing lot size constraint reduces the total cost of inventory. As a result of constructing a single global distribution center, product availability can be improved to 100% for products with current availability of 70% or higher; for products with current availability below 70%, an improvement to 80-90% can be achieved. The total on-hand inventory of all products within the scope of this thesis can be reduced by 14%. A discussion of the potential influence of a single distribution center on transportation, and its impact on customers, is also included.
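A minimal sketch of the lot size-reorder point calculation, with a lot-size floor standing in for the manufacturing constraint (all parameter values are illustrative, not Waters data):

```python
# Sketch of a (Q, r) policy: EOQ lot size with a minimum-lot constraint,
# plus a service-level-based reorder point.
import math
from scipy.stats import norm

def q_r_policy(annual_demand, order_cost, holding_cost, mean_lead_time_demand,
               demand_std_lead_time, service_level, min_lot):
    q = math.sqrt(2 * annual_demand * order_cost / holding_cost)   # EOQ lot size
    q = max(q, min_lot)                                            # manufacturing lot floor
    z = norm.ppf(service_level)                                    # safety factor
    r = mean_lead_time_demand + z * demand_std_lead_time           # reorder point
    return q, r

q, r = q_r_policy(annual_demand=1200, order_cost=80, holding_cost=4,
                  mean_lead_time_demand=100, demand_std_lead_time=30,
                  service_level=0.95, min_lot=150)
```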
High-precision optical and microwave signal synthesis and distribution
In this thesis, techniques for high-precision synthesis of optical and microwave signals and their distribution to remote locations are presented. The first topic is ultrafast optical pulse synthesis by coherent superposition of mode-locked lasers. Timing and phase synchronization of ultrabroadband Ti:sapphire and Cr:forsterite mode-locked lasers is studied. Subfemtosecond (<0.4 fs) timing synchronization over 12 h is demonstrated. In addition to the timing lock, phase synchronization to a local oscillator with subfemtosecond accuracy (<0.5 fs) over 1000 s is achieved. Drift-free subfemtosecond timing and phase synchronization enables a phase-coherent spectrum over 1.5 octaves that has the potential to generate single-cycle optical pulses at 1 μm. The second topic is long-term stable microwave signal synthesis from mode-locked lasers. Although mode-locked lasers can produce ultralow-noise microwave signals in the form of optical pulse trains, transferring stability from the optical to the electronic domain is a highly non-trivial task. To overcome the limitations of conventional photodetection, an optoelectronic phase-locked loop based on electro-optic sampling with a differentially-biased Sagnac loop is proposed. Long-term (>1 h) 3-mrad-level phase stability of a 10.225 GHz microwave signal extracted from a mode-locked laser is demonstrated. The third topic is timing-stabilized fiber links for large-scale timing distribution. Precise optical timing distribution to remote locations can enable synchronization over long distances. In doing so, acoustic noise and thermal drifts introduced to the fiber links must be canceled by a length-correction feedback loop. A single type-II phase-matched PPKTP crystal is used to construct a compact and self-aligned balanced optical cross-correlator for precise timing detection.
Anisotropic ductile fracture of metal sheets : experimental investigation and constitutive modeling
Anisotropic mechanical properties are common in plastically deformed or thermomechanically processed metallic materials, e.g., in rolled or extruded sheet. Among them, the anisotropy of large-strain plastic deformation and ductile fracture under multi-axial loading is highly relevant to various industrial applications such as metal forming and impact failure of structures. In this thesis, a comprehensive study of the plasticity and ductile fracture of anisotropic metal sheets is presented, covering experimental characterization, constitutive modeling, and numerical implementation. On the basis of an extensive multiaxial experimental program, the anisotropic plasticity of the present aluminum alloy is modeled using a macroscopic phenomenological model and a polycrystalline plasticity model, respectively. The proposed phenomenological model makes use of a linear-transformation-based orthotropic yield function with pressure dependence, as well as a combined isotropic/kinematic hardening law, and is able to capture most features of the anisotropic plastic behavior under various multi-axial stress states with good accuracy and computational efficiency. At the same time, a physically-motivated self-consistent polycrystalline plasticity model is utilized to describe the texture-induced anisotropy and through-thickness heterogeneity of the present sheet material. A Reduced Texture Methodology (RTM) is developed to provide the computational efficiency needed for industrial applications. In addition to an accurate prediction of all macroscopic material behaviors, the polycrystalline model reveals that the development of the crystallographic texture is the underlying mechanism of plastic anisotropy and heterogeneity. The anisotropic ductile fracture of the present aluminum alloy extrusion is investigated using a hybrid experimental-numerical approach. The experimental results show a strong dependency of the strain to fracture on the material orientation with respect to the loading direction. A new non-associated anisotropic fracture model is proposed which makes use of a stress-state-dependent fracture locus and an anisotropic plastic strain measure obtained through a linear transformation of the plastic strain tensor. It is shown that the use of the Modified Mohr-Coulomb (MMC) stress-state weighting function in this anisotropic fracture modeling framework provides accurate predictions of the onset of fracture for all fourteen distinct fracture experiments. The proposed plasticity and fracture modeling framework is successfully validated on an industrial stretch-bending operation.
Exploratory analysis on Toronto's Strong Neighbourhood Strategy 2020's impact on crime in Kingsview Village-The Westway
The Strong Neighbourhood Strategy 2020 (TSNS 2020) is an initiative started by the City of Toronto in 2014. The strategy's intention was to engage neighbourhood residents, city services, and local non-profits in community and economic development to generate tailored solutions that would result in heightened area prosperity, vibrancy, and safety in 31 of the city's most vulnerable neighbourhoods. My exploratory analysis focused on how the strategy attempted to address violent crime prevention and mitigation in one of these neighbourhoods, Kingsview Village-The Westway. Employing a combination of comparative violent crime data analysis, stakeholder interviews, and reviews of the academic literature, together with my background in law enforcement, I could not find any evidence that TSNS 2020 had led to reductions in violent crime in Kingsview Village-The Westway. Furthermore, I elaborated on a series of fundamental strategic and implementation flaws in TSNS 2020 that have halted its ability to achieve positive results, and I suggested methods TSNS 2020 could use to enhance its success rate as it looks to learn from past mistakes and build for the strategy's future. It is important to note that my exploratory analysis was conducted with a very small sample size and dataset, and should therefore be taken as a launching point for more robust future evaluations of TSNS 2020's successes and shortcomings in the field of crime prevention and mitigation.
Improving energy efficiency in a pharmaceutical manufacturing environment -- production facility
The manufacturing plant of a pharmaceutical company in Singapore had low energy efficiency in both its office buildings and production facilities. The Heating, Ventilation and Air-Conditioning (HVAC) system was identified as the major energy consumer in the plant. An HVAC-specific energy management tool was developed to monitor energy efficiency and calculate heat gains and cooling loads. In the office building, the HVAC operation schedule was revised, and motion detection lighting control was installed and configured to save electricity. In the production facilities, the house vacuum, process vacuum, and dust collector systems were shut down during non-production time in Pharmaceutical Facility 2 (PF2). Statistical analysis using measured data was performed to verify the projected energy savings. The dehumidifier was disabled in Pharmaceutical Facility 1 (PF1) to relax the relative humidity from around 22% to 50%, while still maintaining it within the upper specification of 55%. Theoretical AHU-Dehumidifier models were built to find the optimum system settings with minimum energy consumption. With the implemented strategies, the annual energy consumption would be reduced by 6.68%, 6.58% and 2.32% in the office building, PF1 and PF2 respectively. The AHU-Dehumidifier models suggested a pre-cooling off-coil temperature of 15.5 °C and a post-cooling off-coil temperature of 21 °C, given the current humidity requirement, to achieve minimum energy consumption.
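The heat-gain bookkeeping such an energy management tool performs can be illustrated with the standard sensible-load relation Q = m·cp·ΔT. The sketch below uses assumed air properties and illustrative airflow and temperature values, not measurements from the plant.

```python
# Minimal sketch of sensible cooling-load accounting for an AHU coil.
RHO_AIR = 1.2    # kg/m^3, approximate air density
CP_AIR = 1.006   # kJ/(kg*K), specific heat of dry air

def sensible_cooling_load_kw(airflow_m3_s: float, t_in_c: float,
                             t_off_coil_c: float) -> float:
    """Sensible load removed by a cooling coil: Q = m_dot * cp * dT."""
    m_dot = RHO_AIR * airflow_m3_s
    return m_dot * CP_AIR * (t_in_c - t_off_coil_c)

# Example: compare two off-coil temperature settings for one AHU.
for t_off in (15.5, 12.0):
    q = sensible_cooling_load_kw(airflow_m3_s=10.0, t_in_c=26.0,
                                 t_off_coil_c=t_off)
    print(f"off-coil {t_off:4.1f} C -> sensible load {q:6.1f} kW")
```

A warmer off-coil setpoint directly reduces the load the chiller must serve, which is the logic behind relaxing the humidity specification in PF1.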
Dalit rights, land reform, and the learning of democratic citizenship
This dissertation addresses the questions: When, and how, are durable inequalities disrupted and democratic citizenship deepened in societies that are politically committed to liberal democracy but have substantial social inequalities? How do law and social movements influence and shape this process? I develop answers by examining a successful case of land reform in Surendranagar (Gujarat, India), which was the result of socio-legal mobilization spearheaded by a local human rights organization called Navsarjan Trust. My main argument is that by working with Dalits in Surendranagar, Navsarjan caseworkers helped articulate and popularize what philosopher Martha Nussbaum has called the "public myth of equality." I develop this argument through responses to four questions.
Multi-signal gesture recognition using body and hand poses
We present a vision-based multi-signal gesture recognition system that integrates information from body and hand poses. Unlike previous approaches to gesture recognition, which concentrated mainly on a single signal, our system allows a richer gesture vocabulary and more natural human-computer interaction. The system consists of three parts: 3D body pose estimation, hand pose classification, and gesture recognition. 3D body pose estimation is performed following a generative model-based approach, using a particle filtering estimation framework. Hand pose classification is performed by extracting Histogram of Oriented Gradients features and using a multi-class Support Vector Machine classifier. Finally, gesture recognition is performed using a novel statistical inference framework that we developed for multi-signal pattern recognition, extending previous work on a discriminative hidden-state graphical model (HCRF) to consider multi-signal input data, which we refer to as Multi Information-Channel Hidden Conditional Random Fields (MIC-HCRFs). One advantage of MIC-HCRF is that it allows us to capture complex dependencies of multiple information channels more precisely than conventional approaches to the task. Our system was evaluated on the scenario of an aircraft carrier flight deck environment, where humans interact with unmanned vehicles using an existing body and hand gesture vocabulary. When tested on 10 gestures recorded from 20 participants, the average recognition accuracy of our system was 88.41%.
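The hand-pose stage maps directly onto widely available tooling. A minimal sketch with scikit-image and scikit-learn follows; the `hand_images` and `labels` arrays are placeholders standing in for a real dataset of cropped hand images.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

hand_images = np.random.rand(200, 64, 64)   # placeholder grayscale crops
labels = np.random.randint(0, 4, size=200)  # placeholder pose classes

# Extract Histogram of Oriented Gradients descriptors per image.
X = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in hand_images
])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)
clf = SVC(kernel="rbf", C=10.0)  # multi-class handled one-vs-one internally
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```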
Modifying the Massachusetts Institute of Technology Sensorimotor Control Lab model of human balance and gait control for the addition of running
This research continues the work begun by Sungho Jo and Steve G. Massaquoi on modeling human walking and upright balance. The model of human neurological control of balance and gait generation was put forward by Jo and Massaquoi in "A model of cerebrocerebello-spinomuscular interaction in the sagittal control of human walking" and executed in MATLAB Simulink/SimMechanics. This model has been used to determine the feed-forward command sequences for the generation of walking and running gaits. Furthermore, two feedback circuits, controlling the center of mass relative to the swing leg and the composite leg angle of the simulated model, were added. These provide a basis for wider control of disturbances in order to implement running. This work helps forward the long-term goals of the MIT Sensorimotor Control Group--creating a control model of the neurological circuitry responsible for governing human balance and locomotion and testing that model by using it to control a bipedal robot. The results of this research help to prove the validity of the cerebrocerebello-spinomuscular control model developed by Jo and Massaquoi and point positively towards running the control model on a physical robot.
Astronaut EVA : safety, injury and countermeasures
Extravehicular Activity (EVA) spacesuits are a key enabling technology that allows astronauts to survive and work in the harsh environment of space. Of the entire spacesuit, the gloves may perhaps be considered the most difficult engineering design issue. A significant number of astronauts sustain hand and shoulder injuries during extravehicular activity (EVA) training and operations. In extreme cases these injuries lead to fingernail delamination (onycholysis) or rotator cuff tears and require medical or surgical intervention. In an effort to better understand the causal mechanisms of injury, a study consisting of modeling, statistical and experimental analyses was performed in Section I of this thesis. A cursory musculoskeletal modeling tool was developed for use in comparing various spacesuit hard upper torso designs. The modeling effort focuses on optimizing comfort and range of motion of the shoulder joint within the suit. The statistical analysis investigated correlations between the anthropometrics of the hand and susceptibility to injury. A database of 192 male crewmembers' injury records and anthropometrics was sourced from NASA's Johnson Space Center. Hand circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. Experimental testing was conducted to characterize skin blood flow and contact pressure inside the glove. This was done as part of NASA's effort to evaluate a hypothesis that fingernail delamination is caused by decreasing blood flow in the fingertips due to compression of the skin inside the extravehicular mobility unit (EMU) glove. The initial investigation consisted of a series of skin blood flow and contact pressure tests of the bare finger, and showed that blood flow decreased to approximately 60% of baseline value with increasing force; however, this occurred more rapidly for finger pads (4 N) than for fingertips (10 N). A gripping test of a pressure bulb using the bare hand was also performed at a moderate pressure of 13.33 kPa (100 mmHg) and at a high pressure of 26.66 kPa (200 mmHg), and showed that blood flow decreased 50% and 45%, respectively. Excessive hyperperfusion was observed for all tests following contact force or pressure, which may also contribute to the onset of delamination. Preliminary data from gripping tests inside the EMU glove in a hypobaric chamber at NASA's Johnson Space Center show that skin blood flow decreased by 45% and 40% when gripping at moderate and high pressures, respectively. These tests show that finger skin blood flow is significantly altered by contact force/pressure, and that occlusion is more sensitive when applied to the finger pad than the fingertip. Our results indicate that the pressure on the finger pads required to articulate stiff gloves is more likely to impact blood flow than the pressure on the fingertips associated with tight or ill-fitting gloves. Improving the flexibility of the gloves will therefore not only benefit operational performance, but may also be an effective approach to reducing the incidence of finger injury. Space Policy Abstract: EVA injury is only one of many dangers astronauts face in the extreme environment of space. Orbital debris presents a significant threat to astronaut safety and is a growing cause of concern.
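The Kruskal-Wallis association test reported here is straightforward to reproduce. Below is a minimal sketch using SciPy on fabricated hand-circumference groups; the numbers are illustrative, not the NASA crewmember data.

```python
from scipy.stats import kruskal

# Hand circumference (cm) grouped by injury outcome (hypothetical values).
no_injury    = [21.0, 21.5, 22.1, 20.8, 21.9, 22.4]
minor_injury = [22.6, 23.0, 22.8, 23.4, 22.2]
onycholysis  = [23.8, 24.1, 23.5, 24.6, 23.9]

h_stat, p_value = kruskal(no_injury, minor_injury, onycholysis)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group's distribution differs,
# mirroring the reported association between hand size and injury.
```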
Since the dawn of satellites in the late 1950s, space debris from intentionally exploded spacecraft, dead satellites, and on-orbit collisions has significantly increased and currently outnumbers operational space hardware. Adding to this phenomenon, the advent of commercial spaceflight and the recent space activities in China and India to establish themselves as spacefaring nations are bound to accelerate the rate of space debris accumulating in low Earth orbit, thus exacerbating the problem. The policies regulating orbital debris were drafted in the 1960s and 1970s and fail to effectively address the dynamic nature of the debris problem. These policies are not legally enforced under international law and implementation is entirely voluntary. Space debris is a relevant issue in international space cooperation. Unless regulated, some projections indicate space debris will reach a point of critical density, after which the debris will grow exponentially, as more fragments are generated by collisions than are removed by atmospheric drag. Space debris proliferation negatively impacts human spaceflight safety, presents a hazard to orbiting space assets, and may lead to portions of near-Earth orbit becoming inaccessible, thus limiting mission operations. The aim of this research effort was to review current international space policy, legislation and mitigation strategies in light of two recent orbital collision episodes. The first is the February 2009 collision between a defunct Russian Cosmos spacecraft and a commercial Iridium satellite. The second is China's display of technological prowess during the January 2007 intentional demolition of its inactive Fengyun-1C weather satellite using a SC-19 antisatellite (ASAT) missile. In each case the stakeholders, politics, policies, and consequences of the collision are analyzed. The results of this analysis as well as recommendations for alternative mitigation and regulatory strategies are presented.
Human induced pluripotent stem cell models of Rett Syndrome reveal deficits in early cortical development
Rett Syndrome (RTT) is a pervasive, X-linked neurodevelopmental disorder that predominantly affects girls. The clinical features of RTT are most commonly reported to emerge between the ages of 6-18 months, and as such, RTT has largely been considered a postnatal disorder. The vast majority of cases are caused by sporadic mutations in the gene encoding methyl CpG-binding protein 2 (MeCP2), which is expressed in the brain during prenatal neurogenesis and continuously throughout adulthood. MeCP2 is a pleiotropic gene that functions as a complex, high-level transcriptional modulator. It both regulates and is regulated by coding genes and non-coding RNAs including microRNAs (miRNAs). The effects of MeCP2 are mediated by diverse signaling, transcriptional, and epigenetic mechanisms. Whereas the postnatal effects of MeCP2 have been widely studied, pre-symptomatic stages of RTT have yet to be thoroughly investigated. Recent evidence from our lab, among others, suggests a role for MeCP2 during prenatal neurogenesis that may contribute to the neuropathology observed later in life. We sought to characterize the course of neurogenesis in MeCP2-deficient human neurons with the use of induced pluripotent stem cells (iPSCs) derived from RTT patient skin samples. We generated a variety of monolayer and 3D neuronal models and found that RTT phenotypes are present at the earliest stages of brain development, including neuroepithelial expansion, neural progenitor migration and differentiation, and later stages of membrane and synaptic physiological development. We established a link between MeCP2 and key microRNAs that are misregulated in RTT and lie upstream of signaling pathways that contribute to aberrant neuronal maturation in the absence of MeCP2. We have uncovered novel roles of MeCP2 in human neurogenesis. Whereas the processes that comprise early neural development were previously considered irrelevant to RTT pathology, the deficits we observed in neuronal differentiation, migration, and maturation are a crucial component of the larger picture of RTT pathogenesis and provide additional insight into the emergence of RTT patient symptoms.
Drug rehabilitation facility for mothers and their children
Contemporary society is faced with an increasing problem of substance abuse and addiction. The public consequences of this private problem are experienced in the increasing costs in the healthcare, penal and welfare systems, as well as the less tangible effects drugs and addiction have on the quality of our urban centers. Current efforts to solve this problem include education, substance abuse treatment and curtailing the availability of drugs. Although these efforts are effective, they are not sufficient, because traditional treatment programs exclude a critical group within the substance abuse population--women with dependent children. This group, whose most pressing concerns are education, domestic violence and substance abuse, has been growing at an alarming rate, and if their needs are not addressed then their problems will be handed down to their children in an increasing cycle of dependence. A new model for treatment is needed; one which can accommodate women and their children and can recognize the advantages of maintaining and nurturing families rather than isolating patients and placing their children in foster care. Such a facility could capitalize on the mutual support offered within the family structure and could address the growing problem of substance abuse with the most vulnerable population--the children of drug addicted parents. The architectural proposal presented here expresses the complex levels of dependence found between the individual and society, the individual and the clinic, and between the parent and child. The progress from dependence to independence is articulated through a series of typological transformations which map the transition from institutional to domestic living and symbolize in the urban fabric a process of healing and growth, revitalizing both the city and its population.
Design, manufacturing, and verification of a steel tube spaceframe chassis for Formula SAE
The Formula SAE chassis provides a number of functions: it protects the driver during high speed operation, links critical components such as the engine, drivetrain, and suspension together through a rigid structure, and distributes forces through the frame to allow for predictable handling and kinematics. This document examines and analyzes the critical factors in designing and building a Formula SAE chassis from 4130 chromoly steel tubing. The paper focuses on several main design issues and criteria, provides a detailed description of the manufacturing and jigging process, and also documents verification testing of the real chassis against the CAD and FEA models. The thesis will serve three functions: first as a summary of lessons I have learned about product development from personally overseeing the fabrication of the MIT Motorsports chassis for 3 years (MY2006 - MY2008), second as a guide for future generations of chassis engineers in frame design and construction, and third as a specific study and verification of the theoretical methods behind the current vehicle design.
Using design flexibility and real options to reduce risk in Private Finance Initiatives : the case of Japan
Private Finance Initiative (PFI) is a delivery system for public works projects that designs, constructs, manages and maintains public facilities by using private capital, management skills, and technical abilities. It was introduced in Japan about 10 years ago to stimulate the stagnant Japanese economy and provide public services with higher quality and lower cost to the country and the local authorities. It has been applied to many public works projects, but not to large-scale infrastructure projects, such as toll road and airport projects. One of the main reasons for this is that specific methodologies for handling the risks and uncertainties involved in long-term projects have not been introduced and demonstrated to either the public or the private sector. This thesis aims to help those involved in large-scale infrastructure development projects apply PFI to those projects by proposing a flexible methodology that will allow them to handle risks. Specifically, this thesis 1) proposes a quantitative methodology so that project managers can handle uncertainty in large-scale engineering projects, and 2) demonstrates how project managers can apply the proposed methodology to real-world projects, including how to model and evaluate them, and shows how it is useful for reducing risks and enhancing the value of projects. As a quantitative methodology, this thesis proposes real options analysis as a tool for considering uncertainty and incorporating flexibility into design, based on the premise that what is crucial is not how accurately project managers forecast uncertainty but how they can handle it. This thesis also explains barriers to the implementation of the proposed concepts and methodology, and recommends how to alleviate them. The thesis uses two real-world case studies: the "Tokyo International Airport New Runway Extension Project" and the "Tokyo Bay Aqua-Line Project".
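Real options analysis of the kind proposed here is commonly implemented with a binomial lattice. The sketch below values a hypothetical American-style deferral option on a project; all inputs (project value, investment cost, volatility) are illustrative assumptions rather than figures from the case studies.

```python
import math

def deferral_option_value(V0, K, r, sigma, T, steps):
    """Value of the right (not obligation) to invest cost K in a project
    worth V0 today, exercisable any time up to T years (American-style)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1 / u                                 # down factor
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Option payoffs at the final layer of the lattice.
    values = [max(V0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]

    # Backward induction, allowing early exercise at every node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = V0 * u**j * d**(i - j) - K
            values[j] = max(cont, exercise)
    return values[0]

# Example: flexibility has positive value even when static NPV (100 - 110) < 0.
print(f"option value: {deferral_option_value(100, 110, 0.04, 0.3, 5, 100):.2f}")
```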
Development of a questionnaire to test the impact of scarce materials on design in Developing Countries
The objective of this thesis is to create a questionnaire that tests how designers in developing countries design with scarce resources. The questionnaire will be given to mechanical engineering students in Mexico and will ask them to design and sketch ideas for several products that would help physically disabled shopkeepers. However, each student must use only materials provided on a specific list to manufacture their products. The list of materials has very basic items like plywood, aluminum bars, and springs. Along with these materials, found objects were also added to the list: things that can be found rather easily in a developing country, like an iron or a tire. Making the students design using only these sparse raw materials and found objects should simulate designing in a developing country with limited resources. The questionnaire and materials list underwent several revisions before being sent to Mexico: in each round, American engineering students took the questionnaire and gave feedback that was used to make changes. After three rounds of revision, the questionnaire and materials list were finalized and sent to Mexico, where they were taken by engineering students at local universities.
Adaptation of granular solid hydrodynamics for modeling sand behavior
The development of constitutive models that can realistically represent the effective stress-strain-strength properties of soils is essential for making accurate predictions using finite element analysis. Currently, most existing constitutive models are based on the framework of incrementally-linearized elasto-plasticity. However, most of these models do not typically consider energy conservation and are also phenomenological. This means that they can only be used to predict the behavior and loading conditions for which they have been developed, and that they often employ artificial mathematical formulations. This research proposes an improved constitutive model for sands based on the framework of Granular Solid Hydrodynamics (GSH). The GSH framework considers energy and momentum conservation simultaneously and, by combining them with thermodynamic considerations, develops constitutive relations for a given energy expression.
Improving the efficiency of the later stages of the drug development process : survey results from the industry, academia, and the FDA
Drug development in the United States is a lengthy and expensive endeavor. It is estimated that average development times range from eleven to fifteen years, with costs exceeding one billion dollars. The development pathway includes basic scientific discovery, pre-clinical testing in animals, clinical development in humans, and an application process. The Food and Drug Administration is responsible for the oversight and approval of drugs going through this process. Numerous financial and economic studies have been conducted that show the benefits of accelerating the drug development process. In 1992, the United States Congress enacted the Prescription Drug User Fee Act I, which mandated faster response times from the FDA in return for user fee payments to the FDA by the drug developing companies. Data on approval times for new drugs indicate that this process was indeed shortened. In contrast, the average drug development process prior to the filing of an application has been increasing in cost and time. The first purpose of this research is to quantify the benefits of accelerated new drug application review time under the Prescription Drug User Fee Acts I and II. The second purpose of the research is to investigate what industry and the FDA can do together to reduce the development process time between the IND and NDA without compromising patient safety and welfare, specifically the Phase II, Phase III, and NDA components. The research indicates that PDUFA has improved approval times in a statistically significant way. Furthermore, the financial and social benefits as measured using net present value have far exceeded the PDUFA costs. Quantitative and qualitative surveys of fifty individuals in large pharmaceutical and biotech companies
Gestural overlap of stop-consonant sequences
This study used an analysis-by-synthesis approach to discover possible principles governing the coordination of oral and laryngeal articulators in the production of English stop-consonant sequences. Recorded utterances containing stop-consonant sequences were analyzed acoustically, with focus on formant movements, closure durations, release bursts, and spectrum shape at low frequencies. The results of the acoustic analysis were translated into general gestural timing estimates. From these estimates, a set of possible principles was derived. Both the general gestural estimates and the derived principles were verified and refined through quasi-articulatory synthesis using HLsyn. Perception tests composed of synthetic sequences with varying degrees of overlap were administered. From acoustic analysis, synthesis verification, and perception testing, two principles emerged. First, V1C1#C2V2 stop-consonant sequences with front-to-back order of place of articulation have more overlap of articulators than those with back-to-front order; this agrees with past research findings (Chitoran, Goldstein, and Byrd, 2002). The extent of the overlapping usually does not go beyond the obliteration of the C1 release burst. Second, gestural overlap involving laryngeal articulators exists but varies from individual to individual. The voicing of C1 usually affects the voicing of C2 in V1C1#C2V2 sequences.
Development of web-based image annotation tool and application of machine learning methods
Large-scale in situ hybridization screens are providing an abundance of spatio-temporal gene expression data that is valuable for understanding the mechanisms of gene regulation. Drosophila gene expression pattern images have been generated by the Berkeley Drosophila Genome Project (BDGP) for over 7,000 genes in over 90,000 digital images. These images are currently hand curated by field experts with developmental and anatomical terms based on the stained regions. These annotations enable the integration of spatial expression patterns with other genomic data sets that link regulators with their downstream targets. However, manual curation has become a bottleneck in the process of analyzing the rapidly generated data; it is therefore necessary to explore computational methods for the curation of gene expression pattern images. This thesis addresses improving the manual annotation process with a web-based image annotation tool and also enabling automation of the process using machine learning methods. First, a tool called LabelLife was developed to provide a systematic and flexible way of annotating images, groups of images, and shapes within images using terms from a controlled vocabulary. Second, machine learning methods for automatically predicting vocabulary terms for a given image based on image feature data were explored and implemented. The results of the applied machine learning methods are promising in terms of predictive ability, which has the potential to simplify and expedite the curation process, increasing the rate at which biologically significant data can be evaluated and new insights can be gained.
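The automated curation task is naturally framed as multi-label classification: each image may carry several vocabulary terms. A minimal sketch follows using scikit-learn; the feature matrix and term vocabulary are placeholders for the BDGP data.

```python
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

X = np.random.rand(300, 128)  # placeholder image feature vectors
annotations = [np.random.choice(
    ["embryonic_brain", "ventral_nerve_cord", "midgut", "no_staining"],
    size=np.random.randint(1, 3), replace=False).tolist()
    for _ in range(300)]       # each image carries one or more terms

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(annotations)  # images x terms indicator matrix

# One binary classifier per vocabulary term (multi-label formulation).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X[:250], Y[:250])
predicted = mlb.inverse_transform(clf.predict(X[250:]))
print(predicted[:5])  # suggested terms for five held-out images
```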
Analyzing the feasibility of lithium-ion batteries to reduce carbon dioxide emissions in maritime shipping
The International Maritime Organization aims to reduce CO2 emissions in the shipping industry by 50% by 2050. One of the methods for meeting this goal is to electrify ships with lithium-ion batteries. A 14-ship sample was analyzed to determine the feasibility of installing lithium-ion batteries onto modern-day vessels. The two feasibility constraints that guided this discussion were the mass and volume of the necessary battery system. Results show that the mass of the battery pack was well within the current mass of engine rooms, but the volume required was often too high. To compensate, increasing the assumed energy density improved the number of trips made possible by lithium-ion batteries. When coupled with increases in depth of discharge and the volume available for the system in the engine room, 11 out of 14 vessels could complete at least one trip with one charge of the battery. This corresponded to about 48% of the total miles travelled by all 14 ships. Hybrid vessels could be deployed to test out the technology, but eventually moving to lithium-ion battery technology could come close to reducing emissions by 50% under the right parameters.
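The mass and volume feasibility screen can be expressed as a short calculation. The sketch below uses assumed pack properties and engine-room budgets, not the values from the 14-ship sample.

```python
def battery_feasible(trip_energy_mwh, depth_of_discharge,
                     grav_density_wh_kg=250, vol_density_wh_l=500,
                     mass_budget_t=2000, volume_budget_m3=3000):
    """Check whether one charge covering the trip fits the engine room.
    Usable energy is limited by depth of discharge, so the installed
    pack must be oversized by 1/DoD."""
    required_wh = trip_energy_mwh * 1e6 / depth_of_discharge
    mass_t = required_wh / grav_density_wh_kg / 1000     # tonnes
    volume_m3 = required_wh / vol_density_wh_l / 1000    # m^3
    ok = mass_t <= mass_budget_t and volume_m3 <= volume_budget_m3
    return ok, mass_t, volume_m3

ok, m, v = battery_feasible(trip_energy_mwh=400, depth_of_discharge=0.8)
print(f"feasible={ok}, pack mass={m:.0f} t, pack volume={v:.0f} m^3")
```

Raising the assumed energy density or depth of discharge shrinks both the mass and volume terms, which is exactly the sensitivity the abstract describes.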
A framework for evaluating advanced search concepts for multiple autonomous underwater vehicles (AUV) mine countermeasures (MCM)
Waterborne mines pose an asymmetric threat to naval forces. Their presence, whether actual or perceived, creates a low-cost yet very powerful deterrent that is notoriously dangerous and time consuming to counter. In recent years, autonomous underwater vehicles (AUV) have emerged as a viable technology for conducting underwater search, survey, and clearance operations in support of the mine countermeasures (MCM) mission. With continued advances in core technologies such as sensing, navigation, and communication, future AUV MCM operations are likely to involve many vehicles working together to enhance overall capability. Given the almost endless number of design and configuration possibilities for multiple-AUV MCM systems, it is important to understand the cost-benefit trade-offs associated with these systems. This thesis develops an analytical framework for evaluating advanced AUV MCM system concepts. The methodology is based on an existing approach for naval ship design. For the MCM application, distinct performance and effectiveness metrics are used to describe a series of AUV systems in terms of physical/performance characteristics and then to translate those characteristics into numeric values reflecting the mission-effectiveness of each system. The mission effectiveness parameters are organized into a hierarchy and weighted, using Analytical Hierarchy Process (AHP) techniques, according to the warfighter's preferences for a given operational scenario. Utility functions and modeling provide means of relating the effectiveness metrics to the system-level performance parameters. Implementation of this approach involves two computer-based models: a system model and an effectiveness model, which collectively perform the tasks just described. The evaluation framework is demonstrated using two simple case studies involving notional AUV MCM systems. The thesis conclusion discusses applications and future development potential for the evaluation model.
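The AHP weighting step can be illustrated concretely: weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio. The matrix below is a hypothetical example for three criteria, not the elicited warfighter judgments.

```python
import numpy as np

# A[i, j] = how strongly criterion i is preferred over criterion j (1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index for n=3 is ~0.58
print(f"weights: {np.round(weights, 3)}, consistency ratio: {cr:.3f}")
```

A consistency ratio below about 0.1 is the conventional threshold for accepting the elicited judgments.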
M13 virus-enabled assembly of 3D nanostructured composites : synthesis and applications in solar energy conversion and electrochemical energy storage devices
We live in an age where our society faces the great challenge of generating, storing and transporting energy in responsible ways that minimize impact to the environment. Significant effort has been spent to develop new technologies capable of (1) efficiently converting renewable energy into usable electricity, (2) storing energy in high performance energy storage devices, and (3) fabricating advanced energy conversion and storage devices using environmentally benign technologies. One of the approaches is to genetically engineer M13 bacteriophage (M13 virus) to display proteins with specific functionalities, which allow the synthesis and organization of hybrid materials in an environmentally friendly manner. The primary goal of my Ph.D. thesis has been to develop M13 virus-enabled processes for building the electrodes of advanced energy conversion and energy storage devices, including dye-sensitized solar cells (DSCs), electrochemical capacitors, and perovskite hybrid solar cells (PSCs). In order to fabricate nanostructures for the DSC photoanodes, the M13 viruses were crosslinked into a virus hydrogel that served as a multifunctional 3D scaffold capable of binding gold nanoparticles (AuNPs) to the virus proteins. The AuNP-virus hydrogel was encapsulated in titanium dioxide (TiO₂) to produce a plasmon-enhanced nanowire (NW)-based DSC photoanode that enabled a power conversion efficiency (PCE) of 8.46%. A theoretical model was developed that predicted the experimentally observed trends of plasmon enhancement. Furthermore, to optimize the surface-to-volume ratio of the photoanodes to maximize PCE, a tunable fabrication process used individual free-floating M13 virus as the template for TiO₂ NWs, and the as-synthesized NWs were blended with sacrificial polymer to control the film porosity. The optimized semiconducting mesoporous networks were used as photoanodes in both DSCs and PSCs, and the effects of surface morphology on the photovoltaic properties were experimentally investigated. In order to construct the electrodes of electrochemical capacitors, M13 viruses were genetically programmed to bind single-walled carbon nanotubes (SWNTs) in a controlled fashion by aligning SWNTs along the length of the phage without aggregation. The SWNT-virus complexes were used as the basis for the formation of crosslinked virus hydrogel scaffolds for the fabrication of porous 3D polyaniline (PANI) nanostructures. The PANI-coated SWNT nanocomposites further improved the electrical conductivity and electrochemical activity of thin films. In addition, by using a fog generator to deliver the crosslinker solution, larger-area virus-based hydrogels were fabricated for versatile material coatings, including PANI, MnOx, Ni, and Ni-MnOx. Lastly, an environmentally responsible process to fabricate efficient PSCs was developed that recycled lead content from discarded car batteries. Perovskite films assembled using materials sourced from either recycled battery materials or high-purity commercial reagents showed identical material characteristics and photovoltaic performance, indicating the practical feasibility of recycling car batteries for lead-based PSCs.
Business process improvement using axiomatic design and object-process methodology
This thesis introduces AD-OPM BPI, a new method of conducting business process improvement using both Axiomatic Design and Object-Process Methodology. The premise underlying the method is that modern process improvement techniques boast large efficiency gains but fail to address the broader process system. By first using Axiomatic Design to map and optimize the process system, broader inefficiencies are addressed before they constrain individual processes. Object-Process Methodology is then applied for process-specific optimization, utilizing modern system architecture layering principles to identify non-value-adding entities and improve them through deletion or simplification. A case study at a large aerospace manufacturing company demonstrates the method in practical application. Results suggest that application is better suited to new or small-scale systems due to the challenge of applying Axiomatic Design to pre-existing large-scale systems. Despite this limitation, Object-Process Methodology remains a viable option for business process improvement, whether or not it is coupled with Axiomatic Design in AD-OPM BPI.
Design of an improved electronics platform for the EyeRing wearable device
This thesis presents a new prototype for EyeRing, a finger-worn device equipped with a camera and other peripherals. EyeRing is used in assistive technology applications, helping visually impaired people interact with uninstrumented environments. Applications for sighted people are also available to aid in learning, navigation, and other tasks. EyeRing is a wearable electronics device with an emphasis on natural gestural input and minimal interference. Previous prototypes used assemblies of commercial-off-the-shelf (COTS) control and sensing solutions. A more custom platform, consisting of two printed circuit boards (PCBs), peripherals, and firmware, was designed to make the device more usable and functional. Firmware was developed to improve the communication capabilities, microcontroller functionality, and system power use. The improved features allow the pursuit of previously unreachable application spaces. In addition, the smaller form factor increases usability and device acceptance. The new prototype improves power consumption by X, volume by Y, and throughput by Z. Video input is now available, etc.
Evolution of United States commercial domestic aircraft operations from 1991 to 2010
The main objective of this thesis is to explore the evolution of U.S. commercial domestic aircraft operations from 1991 to 2010 and describe the implications for future U.S. commercial domestic fleets. Using data collected from the U.S. Bureau of Transportation Statistics, we analyze 110 different aircraft types from 145 airlines operating U.S. commercial domestic service between 1991 and 2010. We classify the aircraft analyzed into four categories: turboprop, regional jet, narrow-body, and wide-body. Our study consists of three parts. First, we compare the four aircraft classes and explore trends in available seat miles, revenue passenger miles, load factor, aircraft departures, average stage length, aircraft utilization, seat capacity, daily departures per aircraft, aircraft ground time, and fuel burn. Second, we examine each of the aircraft classes in detail and provide insights on specific aircraft types. Finally, we compare product offerings from competing aircraft manufacturers in both the regional jet and narrow-body aircraft classes. The results indicate that more than 150 wide-body aircraft have been shifted from the U.S. commercial domestic market to international service, while narrow-body stage lengths have increased 50% over the 20-year period analyzed. In addition, the introduction of more than 1,390 regional jets in the late 1990s and 2000s allowed airlines to expand hub operations and increase frequency on routes between major cities. A 10% decline in the turboprop fleet, coupled with the lack of turboprop replacement aircraft in the 30- to 50-seat category, suggests a potential for future reductions in air service to some smaller cities. Lastly, increasing fuel prices threaten the growth of the U.S. commercial domestic fleet in the upcoming decade and could potentially cause a significant number of aircraft to be no longer economically viable.
Predicting human behavior using visual media
The ability to predict human behavior has applications in many domains ranging from advertising to education to medicine. In this thesis, I focus on the use of visual media such as images and videos to predict human behavior. Can we predict what images people remember or forget? Can we predict the type of images people will like? Can we use a photograph of someone to determine their state of mind? These are some of the questions I tackle in this thesis. Through my work, I demonstrate: (1) it is possible to predict, with near human-level correlation, the probability with which people will remember images; (2) it is possible to predictably modify the extent to which a face photograph is remembered; (3) it is possible to predict, with high correlation, the number of views an image will receive even before it is uploaded; (4) it is possible to accurately identify the gaze of people in images, both from the perspective of a device and from a third-person viewpoint. Further, I develop techniques to visualize and understand machine learning algorithms that could help humans better understand themselves through the analysis of algorithms capable of predicting behavior. Overall, I demonstrate that visual media is a rich resource for the prediction of human behavior.
Innovative regeneration strategies in Germany
The Internationale Bauausstellung, or International Building Exhibition (IBA), is a planning methodology implemented over the course of the 20th century and into the 21st century in Germany. The IBA is unique, characterized by a mix of seemingly contradictory conditions: in composition, IBAs are site- and time-specific, long-term and temporary, driven by experimentation, and independent in their urban development role. Conceptually, the IBA is driven by theoretical and practical experimentation and a goal to produce "models for the city of the future" that address paradigmatic shifts in urban development. After urban renewal, physical planning lost efficacy and the confidence that imaginative visions could be concretely brought to life. The IBA sits as an outlier in this commonly held conception of physical planning and urban design history. The IBA remains capable of large-scale transformations alongside careful experimentation that pushes existing thinking about the city forward. It is both conceptually ambitious and sensitively grounded in local regeneration. This study is focused on the meaning-making of the IBA: how it constructs new understandings of building, physical transformation, and image-making for the city. Three contemporary IBAs were selected as cases to analyze the IBA methodology in its current implementation: the 2010 IBA Saxony-Anhalt, the 2013 IBA Hamburg, and the nascent 2020 IBA Berlin. In order to understand the dynamics of the IBA, this thesis is organized around three theoretical frames: city imaging, cultural regeneration, and mega-events. Each of these frames deals with the complexity of building as an ideological act that shapes not only physical form but also the shape of the city in our minds. Based on analysis of the IBA, this thesis offers strategies for an approach towards the project of the city that can be as variegated as the urban context requires while maintaining the ambitions of urban design towards new models for the city of the future.
A multiplex platform based on cellular barcoding for measuring single cell drug susceptibility
Predicting individual patient response to cancer drugs has been challenging. As many anticancer drugs aim to induce cell death or inhibit growth, a useful assay for drug susceptibility requires direct assessment of phenotypic changes to cells upon drug treatment, such as cell viability or growth rate. Previously, serial microfluidic mass sensor arrays have been used to measure single-cell mass accumulation rates over ~20 minute intervals to assess drug susceptibility. Here, we present a multiplexing platform that allows evaluation of multiple drug response conditions in a single experiment by utilizing fluorescent barcodes based on cell surface labeling. Fluorescence microscopy was integrated with the serial microfluidic mass sensor arrays to match a given barcode (which corresponds to a drug condition) with its mass accumulation rate as each cell flows through the microfluidic channel. To validate our approach, we show that the dynamics of drug response can be obtained from a single experiment by multiplexing drug treatment durations. Our validation highlights the capability of our platform to both eliminate measurement bias due to time differences in drug exposure and reduce the operation time when compared to standard time-point assays.
Analyzing the proliferation resistance of advanced nuclear fuel cycles : in search of an assessment methodology for use in fuel cycle simulations
A methodology to assess the proliferation resistance of advanced nuclear energy systems is investigated. The framework, based on Multi-Attribute Utility Theory (MAUT), is envisioned for use within early-stage fuel cycle simulations. Method assumptions and structure are explained, and reference technology cases are presented to test the model. Eleven metrics are presented to evaluate the proliferation resistance of once-through, COmbined Non-Fertile and Uranium (CONFU), Mixed-Oxide (MOX), and Advanced Burner Reactor (ABR) fuel cycles. The metrics are roughly categorized in three groups: material characteristics, material handling characteristics, and "inherent" facility characteristics. Each metric is associated with its own utility function, and is weighted according to the proliferation threat of interest. Results suggest that transportation steps are less proliferation-resistant than stationary facilities, and that the ABR fuel cycle employing reactors with low conversion ratios is particularly resistant. Nearly all steps of the fuel cycles analyzed are more proliferation-resistant to a terrorist threat than to a host nation threat (which has more resources to devote toward proliferation activities). The open light water reactor (LWR) and MOX cycles appear to be the most vulnerable of all cycles analyzed. CONFU proliferation resistance is similar to that of the ABR with conversion ratios 0.5 and 1.0; these all fall approximately between the values ascribed to LWR/MOX (at the low end) and the ABR with conversion ratio zero (with the highest proliferation resistance). Preliminary studies were conducted to determine the sensitivity of the results to weighting function structure and values. Several different weighting functions were applied to the utility values calculated for the once-through and CONFU fuel cycles. The tests showed very little change in the ultimate trends and conclusions drawn from each fuel cycle calculation. These conclusions, however, are far from definitive. Limitations of the model are discussed and demonstrated. Recommendations for improving the model are made, including a call for in-depth evaluation of weighting function structures and values, and an examination of quantitative links between assumptions and utilities. The ultimate conclusions include that the numerical values produced by the analysis are not by themselves fully instructive, and that analysts should recognize that the greatest value of the assessment may come from the process of performing the investigation itself.
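The MAUT aggregation described here reduces to weighting single-metric utilities by threat-specific weights. The sketch below is illustrative only; the metric names, utility shapes, and weights are invented placeholders, not the eleven calibrated metrics of the thesis.

```python
import math

def u_attractiveness(x):   # lower material attractiveness -> higher utility
    return 1 - x

def u_detectability(x):    # easier detection -> higher utility
    return math.sqrt(x)

def u_barriers(x):         # stronger facility barriers -> higher utility
    return x ** 2

utilities = {"attractiveness": u_attractiveness,
             "detectability": u_detectability,
             "barriers": u_barriers}

# Weights differ by threat: a host nation vs. a terrorist group (assumed).
weights = {"host_nation": {"attractiveness": 0.5, "detectability": 0.3, "barriers": 0.2},
           "terrorist":   {"attractiveness": 0.3, "detectability": 0.2, "barriers": 0.5}}

facility_scores = {"attractiveness": 0.4, "detectability": 0.7, "barriers": 0.8}

for threat, w in weights.items():
    total = sum(w[m] * utilities[m](facility_scores[m]) for m in utilities)
    print(f"{threat}: proliferation resistance = {total:.3f}")
```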
Passamaquoddy language
Over the decades, Passamaquoddy has been taught in many ways and many forms. Some have tried using object identification and word learning, while others have tried teaching writing and reading via a phonetic form of English pronunciation. While all teaching methods and learning in any form are valid and valuable, we must first understand that the Passamaquoddy orthography is only a cut-down version of the English orthography (using 17 characters plus an '). This cut-down version of English characters with a Passamaquoddy grammar overlay is "still" English and can cause confusion for adult learners of our language. And phonetic pronunciation and spelling is only as good as how we pronounce a set of letters in English: the spelling of words will vary by how our hearing processes the sounds. The methods I am presenting are not new to teaching, but are new to teaching adult learners of Passamaquoddy here in our territory. I will outline the use of TPR (Total Physical Response), the picture method of discovering verb forms, and practical sentences.
Extending the utility of enzymes for site-specific targeting of fluorescent probes
Genetically encodable fluorescence reporters such as the green fluorescent protein (GFP) are useful for studying protein expression, localization, and dynamics in a variety of biological systems. GFP and its related variants, however, suffer from several drawbacks. Compared to chemical fluorophores, they are large, dim, and limited in other reporting capabilities. Super-bright chemical fluorophores such as the Alexa Fluor dyes and quantum dots, on the other hand, are not genetically encodable and so their cellular targeting is challenging. To address this challenge, the Ting Lab engineered E. coli lipoic acid ligase (LplA) to site-specifically attach reporters onto a 13-amino acid ligase recognition peptide, conferring targeting specificity comparable to genetic encoding. This thesis is an extension of this work, expanding the repertoire of chemical fluorophores that can be targeted to cellular proteins by this technology. We describe the computational redesign of LplA into a red fluorophore ligase, and the validation of this design by X-ray protein crystallography. We used this new technology for live-cell fluorescence imaging and super-resolution imaging. For the attachment of other fluorophores that cannot be directly bound by the enzyme, we engineered LplA to incorporate functional handles that can be chemoselectively derivatized with fluorophores in a second step. In one example, LplA targeted a strained alkene to cellular proteins, which can subsequently react with dienophiles with exceptional kinetics. In another example, we show that LplA-targeted haloalkanes can efficiently recruit a modified haloalkane dehalogenase. These methods were used to label cells with diverse fluorophores, including quantum dots, and allowed tracking of single membrane proteins to study their lateral diffusion.
Rational humility and other epistemic killjoys
I consider three ways in which our epistemic situation might be more impoverished than we ordinarily take it to be. I argue that we can save our robust epistemic lives from the skeptic. But only if we accept that they aren't quite as robust as we thought. In Chapter One, I ask whether the discovery that your belief has been influenced by your background should worry you. I provide a principled way of distinguishing between the kind of influence that is evidence of our own error, and the kind that is not. I argue, contra the dogmatist, that appropriate humility requires us to reduce confidence in response to the former. I conclude by explaining the nature and import of such humility: what it is, what accommodating it rationally amounts to, and why it need not entail skepticism. In Chapter Two, I ask whether awareness of disagreement calls for a similar sort of humility. Many of those who think it does make a plausible exception for propositions in which we are rationally highly confident. I show that, on the contrary, rational high confidence can make disagreement especially significant. This is because the significance of disagreement is largely shaped by our antecedent expectations, and we should not expect disagreement about propositions in which high confidence is appropriate. In Chapter Three, I consider whether a deflated theory of knowledge can help negotiate the path between skepticism and dogmatism more generally. I argue that knowing some proposition does not automatically entitle you to reason with it. The good news is that, on this view, we know a lot. The bad news is that most of what we know is junk: we cannot reason with it to gain more knowledge. It thus cannot play many of the roles that we typically want knowledge to play.
Evaluating the technical innovation landscape for wind energy's competitive future : a value creation -- value capture analysis
This thesis utilizes a systems approach to develop a framework to analyze the value creation and value capture potential of technical innovations in the wind energy sector of the electric power industry. Six technical innovations are considered for the analysis, including Grid-Scale Storage, On-Site Manufacturing Systems, Transmission Power Flow Control, Near-Term Forecasting, Long-Term Forecasting and Predictive Maintenance. Several comparative techniques are employed, including Pugh selection, weighted stakeholder occurrence based on stakeholder value networks, and a multi-attribute utility method. The technologies are compared across multiple possible future scenarios and scored based on their value contribution to stakeholders of both the wind power plant as well as the entire electric power system. Of the technical innovations analyzed in this framework, Grid-Scale Storage, On-Site Manufacturing Systems and Predictive Maintenance promise to contribute the greatest value to industry stakeholders and thus are the most likely to improve the competitiveness of the wind industry. A combined application of the multi-attribute utility method with the weighted stakeholder occurrence method based on stakeholder value networks was the most effective in distinguishing value contribution from the technologies. A value creation -- value capture matrix provides a useful method for visualizing value contribution to industry stakeholders and is used to inform commercialization strategy of the selected technologies. In addition, trade plots are utilized for selecting which technologies contribute the highest value across multiple possible future scenarios.
Cyber security risk analysis framework : network traffic anomaly detection
Cybersecurity is a growing research area with direct commercial impact to organizations and companies in every industry. With all other technological advancements in the Internet of Things (IoT), mobile devices, cloud computing, 5G networks, and artificial intelligence, the need for cybersecurity is more critical than ever before. These technologies drive the need for tighter cybersecurity implementations, while at the same time acting as enablers to provide more advanced security solutions. This paper will discuss a framework that can predict cybersecurity risk by identifying normal network behavior and detecting network traffic anomalies. Our research focuses on the analysis of historical network traffic data to identify network usage trends and security vulnerabilities. Specifically, this thesis will focus on multiple components of the data analytics platform. It explores the big data platform architecture and the data ingestion, analysis, and engineering processes. The experiments were conducted utilizing various time series algorithms (Seasonal ETS, Seasonal ARIMA, TBATS, Double-Seasonal Holt-Winters, and Ensemble methods) and the Long Short-Term Memory Recurrent Neural Network algorithm. Upon creating the baselines and forecasting network traffic trends, the anomaly detection algorithm was implemented using specific thresholds to detect network traffic trends that show significant variation from the baseline. Lastly, the network traffic data was analyzed and forecasted in various dimensions: total volume, source vs. destination volume, protocol, port, machine, geography, and network structure and pattern. The experiments were conducted with multiple approaches to get more insights into the network patterns and traffic trends to detect anomalies.
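The baseline-and-threshold detection logic can be sketched with one of the listed methods, Holt-Winters exponential smoothing, via statsmodels. The traffic series below is synthetic, and the three-sigma threshold is an assumed choice.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hourly traffic volume with a daily cycle plus noise (synthetic).
rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=24 * 28, freq="h")
traffic = 100 + 40 * np.sin(2 * np.pi * np.arange(len(hours)) / 24) \
          + rng.normal(0, 5, len(hours))
series = pd.Series(traffic, index=hours)

train, test = series[:-48], series[-48:].copy()
test.iloc[10] += 80  # inject an anomaly to detect

model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=24).fit()
forecast = model.forecast(48)

# Flag points where the residual exceeds k standard deviations of the
# in-sample residuals -- the "significant variation from baseline" rule.
resid_std = (train - model.fittedvalues).std()
anomalies = test[(test - forecast).abs() > 3 * resid_std]
print(anomalies)
```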
Design of anti-biofouling Lubricant-Impregnated Surfaces (LIS) robust to cell-growth-induced instability
Unwanted deposition of cells on wetted solids, so-called biofouling, is a serious operational and environmental threat in many underwater and biomedical applications. Over the last decade, Lubricant-Impregnated Surfaces (LIS) have been a popular remedy, owing to their unique oil layer that separates the solid from the cellular medium, giving cells no chance to foul. However, a critical bottleneck to this solution has been that retention of the oil could never be permanent, which shortened its anti-fouling efficacy. While understanding the root cause of this oil loss would significantly help prevent such failure, the loss mechanism has not received much attention to date. In this study, we show that secretion of biomolecules from aquatic cells and the subsequent change in interfacial tension of the surrounding media can delaminate the oil film, resulting in gradual deterioration of the anti-biofouling capability of LIS. We establish a correlation between the decrease in interfacial tension and the observed wetting transitions of LIS over the fouling test period. We also visualize the cell medium - oil interface to confirm the final wetting states of LIS in situ. We further measure the mobility of various algae droplets on such surfaces and scale forces to confirm the presence of a line force specific to each wetting state. Finally, we propose a LIS regime map that helps determine the design of LIS that can resist oil loss in aquatic cellular environments, increasing long-term anti-biofouling efficacy.
Safety of light water reactor fuel with silicon carbide cladding
Structural aspects of the performance of a light water reactor (LWR) fuel rod with triplex silicon carbide (SiC) cladding, an emerging option to replace the zirconium alloy cladding, are assessed. Its behavior under accident conditions is examined with an integrated approach of experiments, modeling, and simulation. High temperature (1100°C-1500°C) steam oxidation experiments demonstrated that the oxidation of monolithic SiC is about three orders of magnitude slower than that of zirconium alloys, and with a weaker impact on mechanical strength. This, along with the presence of the environmental barrier coating around the load-carrying intermediate layer of SiC fiber composite, diminishes the importance of oxidation for cladding failure mechanisms. Thermal shock experiments showed strength retention for both α-SiC and β-SiC, as well as Al₂O₃ samples quenched from temperatures up to 1260°C in saturated water. The initial heat transfer upon solid-fluid contact in the quenching transient is found to be a controlling factor in the potential for brittle fracture. This implies that SiC would not fail by thermal shock induced fracture during the reflood phase of a loss of coolant accident, which includes fuel-cladding quenching by emergency coolant at saturation conditions. A thermo-mechanical model for stress distribution and Weibull statistical fracture of laminated SiC cladding during normal and accident conditions is developed. It is coupled to the fuel rod performance code FRAPCON-3.4 (modified here for SiC) and RELAP-5 (to determine coolant conditions). It is concluded that a PWR fuel rod with SiC cladding can extend the fuel residence time in the core, while keeping the internal pressure level within the safety assurance limit during steady-state operation and loss of coolant accidents. A peak burnup of 93 MWD/kgU (10% central void in fuel pellets) at 74 months of in-core residence time is found achievable with a conventional PWR fuel rod design, but with an extended plenum length (70 cm). An easier to manufacture, 30% larger SiC cladding thickness requires an improved thermal conductivity of the composite layer to reduce thermal stress levels under steady-state operation and avoid failure at the same burnup. A larger Weibull modulus of the SiC cladding improves the chances of avoiding brittle failure.
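The Weibull statistical fracture treatment rests on the weakest-link relation P_f = 1 - exp(-Σ (V_i/V₀)(σ_i/σ₀)^m). A minimal sketch follows; the stress field and Weibull parameters are illustrative assumptions, not the calibrated cladding values.

```python
import numpy as np

def weibull_failure_probability(stresses_mpa, volumes_mm3,
                                sigma0=500.0, m=8.0, v0=1.0):
    """Two-parameter weakest-link Weibull model: the survival probabilities
    of all stressed volume elements multiply together (tension only)."""
    s = np.clip(np.asarray(stresses_mpa), 0.0, None)  # ignore compression
    risk = np.sum((np.asarray(volumes_mm3) / v0) * (s / sigma0) ** m)
    return 1.0 - np.exp(-risk)

# Hoop-stress field over cladding volume elements (hypothetical numbers).
stresses = [120.0, 180.0, 220.0, 150.0]   # MPa
volumes = [50.0, 50.0, 50.0, 50.0]        # mm^3

pf = weibull_failure_probability(stresses, volumes)
print(f"failure probability: {pf:.2e}")
# A larger Weibull modulus m narrows the strength scatter, which is why the
# abstract notes it improves the chances of avoiding brittle failure.
```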
Mechanical properties of collagen-based scaffolds for tissue regeneration
Collagen-glycosaminoglycan (CG) scaffolds for the regeneration of skin and nerve have previously been fabricated by freeze-drying a slurry containing a co-precipitate of collagen and glycosaminoglycan. Recently, mineralized collagen-glycosaminoglycan (MCG) scaffolds for bone regeneration have been developed by freeze-drying a slurry containing a co-precipitate of calcium phosphate, collagen, and glycosaminoglycan. Bi-layer scaffolds with CG and MCG layers have been developed for cartilage-bone joint regeneration. The mechanical properties (Young's modulus and strength) of scaffolds are critical for handling during surgery as well as for cell differentiation. The mechanical properties of the MCG scaffolds are low in the dry state (e.g. they can be crushed under hard thumb pressure) as well as in the hydrated state (e.g. they do not have the optimal modulus for mesenchymal stem cells (MSC) to differentiate into bone cells). In addition, there is interest in extending the application of CG scaffolds to tendon and ligament, which carry significant mechanical loads. This thesis aims to improve the mechanical properties of both the CG and MCG scaffolds and to characterize their microstructure and mechanical properties. Models for cellular solids suggest that the overall mechanical properties of a scaffold can be increased either by increasing the mechanical properties of the solid from which the scaffold is made or by increasing the relative density of the scaffold. In an attempt to increase the solid properties, MCG scaffolds with increasing mineral content were fabricated.
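For context on the cellular-solids argument above, the classical Gibson-Ashby relations for open-cell foams capture both routes the abstract names (a stiffer solid, or a higher relative density); a commonly quoted form, offered here only as an illustration, is:

```latex
% Gibson-Ashby scaling for open-cell foams: E_s is the solid modulus,
% \rho^*/\rho_s the relative density, and C, C' geometry constants of order 1:
\frac{E^*}{E_s} \approx C \left(\frac{\rho^*}{\rho_s}\right)^{2}, \qquad
\frac{\sigma^*}{\sigma_s} \approx C' \left(\frac{\rho^*}{\rho_s}\right)^{3/2}
```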
Dynamics of multi-body space interferometers including reaction wheel gyroscopic stiffening effects : structurally connected and electromagnetic formation flying architectures
Space telescopes have the potential to revolutionize astronomy and our search for life-supporting planets beyond our Solar System. Free of atmospheric distortions, they are able to provide a much "clearer" view of the universe than ground-based telescopes. A developing technology that appears promising is space-based interferometry, which uses multiple apertures separated at great distances to act as a single large virtual aperture. In this way, interferometers will achieve angular resolutions far greater than those achievable by monolithic telescopes. In this thesis, we investigate the dynamics and control of two proposed architectures for spaceborne interferometers: structurally connected interferometers and electromagnetic formation flying interferometers. For structurally connected interferometers, we develop a coupled disturbance analysis method that accurately predicts a space telescope's optical performance in the presence of reaction wheel vibrational disturbances. This method "couples" a reaction wheel to a structure using estimates of the accelerances (or mobilities) of both bodies. The coupled analysis method is validated on the Micro-Precision Interferometer testbed at NASA's Jet Propulsion Laboratory. The predictions show great improvement over a simplified "decoupled" analysis method when compared to experimental data. For formation flying interferometers, we consider the use of electromagnets as relative position actuators. A high-fidelity, nonlinear dynamic model of a deep-space electromagnetic formation flight (EMFF) array is derived from first principles. The nonlinear dynamics are linearized for a two-vehicle array about a nominal trajectory, and the linearized model is shown to be unstable.
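For readers unfamiliar with accelerance-based coupling, a standard two-body substructuring relation of the kind the abstract describes is sketched below; the exact formulation used in the thesis may differ:

```latex
% Interface forces when a wheel (w) is coupled to a structure (s), written
% with accelerance matrices A = acceleration/force and blocked forces F_b
% measured on a rigid fixture (standard FRF substructuring; our sketch):
F_c(\omega) = \left[A_w(\omega) + A_s(\omega)\right]^{-1} A_w(\omega)\, F_b(\omega)
```

A simplified "decoupled" analysis of the kind mentioned above corresponds to applying the blocked forces F_b directly, i.e., omitting the filtering term.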
case study of the formation of an Eastern Pacific hurricane
A case study is performed to investigate the nature of tropical cyclogenesis in the eastern Pacific Ocean. Focus is given to the formation and development of the initial circulation which eventually intensified into Hurricane Fefa. Using satellite imagery, the author studies the development of convective activity in the genesis region. Gridded reanalysis data are used to document the synoptic-scale flow, with emphasis on tracing the easterly wave associated with the formation of Fefa. The data show that the easterly wave propagated across the Caribbean Sea and the Central American mountains, and that the initial circulation developed after the wave had moved into the eastern Pacific. The wave is found to have moved through an unstable basic state while it was in the Caribbean, which is favorable for its growth and maintenance. Two phenomena are observed prior to the formation of the low-level circulation: an easterly jet in the eastern Pacific that may have been associated with the blocking effect of the Central American mountains, and a southerly wind surge into the monsoon trough region. In addition, aircraft observations collected during the Tropical Experiment in Mexico are used to study the evolution of the mesoscale system. Initially, a circulation in the middle troposphere, with a cold core in the boundary layer and a shear line located to the west, was found. One day later, a low-level warm-core vortex had developed, displaced from the mid-level vortex. It is suggested that the low-level vortex formed from the spin-up of the monsoon trough, independent of the mid-level vortex.
Sources of arsenic and lead in drinking water of Eastport, Perry, and Pleasant Point, Maine
Lead and arsenic in drinking water are a health risk to communities throughout the world; lead can be a problem in houses with old piping systems containing either lead piping or 50/50 lead solder, and groundwater in Maine contains high arsenic concentrations. This study sought to determine the prevalence and sources of arsenic and lead in the drinking water of Eastport, Perry, and Pleasant Point, Maine. Citizens of these towns submitted water samples from their homes, and arsenic and lead were measured in these samples. Each citizen submitted two samples: one where water stood in the pipes for a minimum of six hours, and another where the tap was flushed for at least two minutes before sample collection. The primary water sources in the region were municipal water from the Passamaquoddy Water District (PWD) and water from private wells. Water samples were also collected from the source waters of the municipal water system and immediately following water treatment to determine sources of lead in the municipal system. Lead concentrations were found to be below the Environmental Protection Agency (EPA) action level of 15 ppb throughout the municipal system, and less than 1% of PWD samples exceeded the action level for lead in the standing samples. Overall, including houses with wells, 2% of houses exceeded the EPA action level in standing samples, and these houses are inferred to contain high lead levels in their piping. Arsenic levels in well water samples were found to exceed the EPA guideline of 10 µg/L in 15% of samples, and did not depend on bedrock type, pH, or well depth, suggesting that bedrock heterogeneity and fracture geometry play a large role in arsenic concentrations in this region.
Spatiotemporal processing and time-reversal for underwater acoustic communications
High-rate underwater acoustic communication can be achieved using transmitter/receiver arrays. Underwater acoustic channels can be characterized as rapidly time-varying systems that suffer severe intersymbol interference (ISI) caused by multipath propagation. Multi-channel combining and equalization, as well as time-reversal techniques, have been used over these channels to reduce the effect of ISI. As an alternative, a spatiotemporal focusing technique has been proposed. This technique is similar to time-reversal, but it explicitly takes into account the elimination of ISI. To do so, the system relies on knowledge of the channel responses. In practice, however, only channel estimates are available. To assess the system performance for imperfectly estimated time-varying channels, a simulation analysis was conducted. Underwater acoustic channels were modeled using geometrical representations of a 3-path propagation model. Multipath fading was incorporated using autoregressive models. Simulations were conducted with various estimator delay scenarios for both spatiotemporal focusing and simple time-reversal. Results demonstrate that performance depends on the non-dimensional product of estimation delay and Doppler spread.
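As a sketch of the simulation machinery described above (our construction, not the thesis code), a single complex multipath tap can be evolved with a first-order autoregressive model and probed with a stale channel estimate:

```python
import numpy as np

# AR(1) model of one complex Rayleigh-fading tap; all parameter values are
# assumptions for illustration, not taken from the thesis.
rng = np.random.default_rng(0)
fs = 1000.0                                  # channel sampling rate (Hz)
doppler_bw = 5.0                             # Doppler spread (Hz)
a = np.exp(-2 * np.pi * doppler_bw / fs)     # AR(1) coefficient
n = 5000

h = np.zeros(n, dtype=complex)
drive = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt((1 - a**2) / 2)
for k in range(1, n):
    h[k] = a * h[k - 1] + drive[k]           # fading-tap evolution

# Estimator-delay experiment: treat h[k - d] as the available channel estimate.
d = 50                                       # estimation delay in samples
mismatch = np.mean(np.abs(h[d:] - h[:-d])**2) / np.mean(np.abs(h)**2)
print(f"normalized estimate error at delay {d / fs:.3f} s: {mismatch:.3f}")
```

The normalized error grows with the product of delay and Doppler spread, which is the dependence the abstract reports.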
Design, synthesis, and characterization of conjugated polymers and functional paramagnetic materials for dynamic nuclear polarization
The design, synthesis, and characterization of a series of radicals and biradicals for use as dynamic nuclear polarization (DNP) agents are described. DNP is a method to enhance the signal-to-noise ratio in solid-state nuclear magnetic resonance (SS-NMR) by transferring the polarization of electrons, which are more easily polarized due to their larger magnetic moment, to nuclei. Two strategies to improve the performance of DNP agents have been explored. The first involves combining a carbon-centered radical (the 1,3-bisdiphenylene-2-phenylallyl (BDPA) radical), which has a narrow line width at high field, with a nitroxide radical (the 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) radical), which has a broader line width at high field. The synthesis and characterization of a BDPA-TEMPO biradical are described as a first step in testing whether polarization agents of this type will outperform the currently used biradicals. Additionally, the synthesis of a water-soluble derivative of BDPA is described. The second strategy involves designing dinitroxide biradicals that rigidly hold the radicals in an orthogonal geometry. The synthesis of dinitroxide radicals of this type is described, along with efforts to optimize aqueous solubility. The design, synthesis, and characterization of thioether-containing poly(paraphenylene-ethynylene) (PPE) copolymers are also reported. The polymers show a fluorescence turn-on response when exposed to oxidants in solution, and the oxidized polymers show desirable thin-film properties, such as high quantum yields and increased photostability. Work towards the synthesis of electroactive conjugated polymers based on the BDPA free radical is also reported.
Design of an omnidirectional soft tactile sensor with applications in leak detection
Soft robots have many unique geometries requiring different tactile feedback mechanisms. To respond to their environment, soft robots would benefit from multi-axis sensors that can determine how a surface is being contacted. As a particular application, previous soft sensor designs have been used to detect leaks in active water pipes, but have difficulty differentiating leaks from pipe joints and obstacles. This thesis presents the design, fabrication, and experimental testing of soft, multi-axis deformation sensors. In the first approach, various geometries of a piezoresistive rubber sensor were tested, and a soft-bodied drone for mapping the interior of pipes was demonstrated in a field test conducted in Matio, Brazil. This demonstration yielded design realizations that led to changes in the sensing technology in order to provide more detail about interior pipe features. Thus, highly flexible conductive-fabric and silicone capacitors were investigated as the capacitive sensing element, which exhibits better linearity, faster response time, and less hysteresis. Multiple copies of this sensor were arranged so as to decouple the four deformation modes of the material: uniaxial tension, bending, compressive pressure, and torsion. Furthermore, this sensor is well suited to the detection of leaks, obstacles, and pipe joints in active water pipes.
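To make the mode-decoupling idea concrete, a minimal sketch (with a purely hypothetical calibration matrix, not data from the thesis) is a least-squares inversion from capacitance changes to deformation modes:

```python
import numpy as np

# Hypothetical 4x4 sensitivity matrix S mapping the four deformation modes
# [tension, bending, pressure, torsion] to the four capacitor readings.
S = np.array([
    [1.0,  0.2, 0.5,  0.0],
    [1.0, -0.2, 0.5,  0.0],
    [0.3,  0.0, 1.0,  0.4],
    [0.3,  0.0, 1.0, -0.4],
])

dC = np.array([0.9, 0.5, 0.8, 0.1])   # measured capacitance changes (assumed)
modes, *_ = np.linalg.lstsq(S, dC, rcond=None)
print(dict(zip(["tension", "bending", "pressure", "torsion"], modes.round(3))))
```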
strategic role of the transportation sector in the post-war economic development of Liberia : how can the strategic development of Liberia's transportation sector promote the nation's attainment of its post-war economic development goals?
This thesis examines proposals for building and improving the transportation sector in Liberia, primarily the roads, while providing immediate social opportunities and employment for many of Liberia's poor. As Liberia emerges from a protracted civil conflict and makes strides on a number of socio-economic fronts, prioritizing the transport sector is a critical part of the nation's rebuilding efforts. A large portion of the country lacks basic infrastructure. This has put an enormous strain on economic and social services, leading to increased poverty, marginal health care, and a lack of education. Improving the transport sector will help stimulate economic viability, expand public services, and provide access to and from urban centers. Connecting the rural areas with urban centers and markets means improved infrastructure at an affordable cost, taking into account the environmental challenges and decreasing their damaging effects. This will also help Liberia become a role model in the ever-challenging global forum of nations and industries going green. Achieving this is not an easy task: although Liberia has an enormous amount of goodwill from donor countries, road projects have remained a daunting undertaking. The stakeholders must come to terms with developing a comprehensive approach to rebuilding the country's transportation network. Studies must be conducted to understand the costs and benefits of rebuilding the road network throughout the country. Once these studies are completed, a diligent effort to execute a plan must be initiated. For each policy to serve its purpose, the various modes of transportation in the country must be harmonized and directed under a governing body, such as the Ministry of Transportation. Within this governing body, there must be a system of checks and balances ensuring that the interests of the citizens are at the forefront. Several recommendations are examined: the logistics and talent makeup of the transportation team, authority within the team, tax and toll policies, unification of sectors, and contributions by private investment firms. As Liberians prepare for the next presidential election, the next five years should be used as a timeline to implement and measure success. Finally, a contingency plan outlines basic yet productive approaches to improve roads immediately while providing jobs for many of the unemployed.
Curtain wall components for conserving dwelling heat by passive-solar means
A prototype for a dwelling heat loss compensator is introduced in this thesis, along with its measured thermal performance and suggestions for its future development. As a heat loss compensator, the Sol-Clad-Siding collects, stores, and releases solar heat at room temperature, thereby maintaining a thermally neutral skin for structures, which conserves energy, rather than attempting to supply heat to the interior as most solar systems do. Inhabitants' conventional objections to passive-solar systems used in housing are presented as a contrasting background. The potential of the outer component, a Trans-Lucent-Insulation, as a sunlight diffuser and transmitter (65 to 52% of heating-season insolation) and as a good insulator (0.62 W/(m²·K) [0.11 Btu/(hr·ft²·°F)]) is described. The performance of the inner component, a container of phase-change materials, as an efficient vertical thermal store is discussed, and areas for future research are addressed. A very brief application of this passive-solar curtain wall system for dwellings is also given.
Automating website profiling for a deep web search engine
The deep web consists of information on the internet that resides in databases or is dynamically generated. It is believed that the deep web represents a large percentage of the total contents on the web, but is currently not indexed by traditional search engines. The Morpheus project is designed to solve this problem by making information in the deep web searchable. This requires a large repository of content sources to be built up, where each source is represented in Morpheus by a profile or wrapper. This research proposes an approach to automating the creation of wrappers by relying on the average internet user to identify relevant sites. A wrapper generation system was created based on this approach. It comprises two components: the clickstream recorder saves characteristic data for websites identified by users, and the wrapper constructor converts these data into wrappers for the Morpheus system. After completing the implementation of this system, user tests were conducted, which verified that the system is effective and has good usability.
Fully kinetic numerical modeling of a plasma thruster
A Hall effect plasma thruster with conductive acceleration channel walls was numerically modeled using 2D3V Particle-in-Cell (PIC) and Monte-Carlo Collision (MCC) methodologies. Electron, ion, and neutral dynamics were treated kinetically on the electron time scale to study transport, instabilities, and the electron energy distribution function. Axisymmetric R-Z coordinates were used with a non-orthogonal variable mesh to account for important small-scale plasma structures and a complex physical geometry. Electric field and sheath structures were treated self-consistently. Conductive channel walls were allowed to float electrically. The simulation included, via MCC, elastic and inelastic electron-neutral collisions, ion-neutral scattering and charge exchange collisions, and Coulomb collisions. The latter were also treated through a Langevin (stochastic) differential equation for the particle trajectories in velocity space. Ion-electron recombination was modeled at the boundaries, and neutrals were recycled into the flow. The cathode was modeled indirectly by injecting electrons at a rate which preserved quasineutrality. Anomalous diffusion was included through an equivalent scattering frequency. The free-space permittivity was increased to allow a coarser grid and a longer time step. A method for changing the ion-to-electron mass ratio and retrieving physical results was developed and used throughout. Results were compared with theory and experiments. Gradients and anisotropy in electron temperature were observed, as were non-Maxwellian electron energy distribution functions. The thruster was numerically redesigned, and substantial performance benefits were predicted.
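The permittivity trick mentioned above is commonly justified by how the Debye length and plasma frequency scale with the permittivity; as a reminder (standard plasma relations, not thesis-specific):

```latex
% Debye length and electron plasma frequency:
\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^2}}, \qquad
\omega_{pe} = \sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}
% Scaling \varepsilon_0 \to f^2 \varepsilon_0 multiplies \lambda_D by f and
% divides \omega_{pe} by f, relaxing the usual PIC resolution constraints
% \Delta x \lesssim \lambda_D and \Delta t \lesssim \omega_{pe}^{-1} by f.
```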
Structural and organizational changes of the housebuilding industry in the United States and Japan
This study has three parts. The first chapter investigates the construction sectors in the United States and Japan using the analytical framework of interindustry analysis. Six U.S. and five Japanese input-output tables are analyzed. The second chapter presents the housebuilding policy of Japan. Postwar industrialization of housing production and the recent efforts of the private sector to develop new markets are described. The third chapter explores housing production in the United States. Finally, the relationships between market characteristics and organizational structure of the housebuilding industry in both countries are considered.
Constructing the space of visual attention
This thesis explores the nature of human experience in space through a primary inquiry into vision. This inquiry begins by questioning the existing methods and instruments employed to capture and represent a human experience of space. While existing qualitative and quantitative methods and instruments -- from "subjective" interviews to "objective" photographic documentation -- may lead to insight in the study of a human experience in space, we argue that they are inherently limited with respect to physiological realities. As one moves about the world, one believes one sees the world as continuous and fully resolved. However, this is not how human vision is currently understood to function on a physiological level. If we want to understand how humans visually construct a space, then we must examine patterns of visual attention on a physiological level. In order to inquire into patterns of visual attention in three-dimensional space, we need to develop new instruments and new methods of representation. The instruments we require directly address the physiological realities of vision, and the methods of representation seek to situate the human subject within a space of their own construction. To achieve this goal we have developed PUPIL, a custom set of hardware and software instruments that captures the subject's eye movements. Using PUPIL, we have conducted a series of trials, from proof of concept -- demonstrating the capabilities of our instruments -- to critical inquiry into the relationship between a human subject and a space. We have developed software to visualize this unique spatial experience, and have posed open questions based on the initial findings of our trials. This thesis aims to contribute to spatial design disciplines by providing a new way to capture and represent a human experience of space.
Decision tools for electricity transmission service and pricing : a dynamic programming approach
For a deregulated electricity industry, we consider a general electricity market structure with both long-term bilateral agreements and a short-term spot market, such that system users can hedge the volatility of the real-time market. From a Transmission Service Provider's point of view, optimal allocation of transmission resources between these two markets poses a very interesting decision-making problem for defined performance criteria under uncertainty. In this thesis, the decision-making is posed as a stochastic dynamic programming problem, and the strength of this method is demonstrated through simulations. This resource allocation problem is first posed as a centrally coordinated dynamic programming problem, computed by one entity at a system-wide level. This problem is shown to be, under certain assumptions, solvable in a deterministic setup. However, implementation for a large transmission system requires the algorithm to handle stochastic inputs and stochastic cost functions. It is observed that the curse of dimensionality makes this centralized optimization infeasible. The thesis offers certain remedies for the computational issues, but motivates a partially distributed setup and related optimization functions for better decision making in large networks where intelligent system users drive the use of network resources. Formulations are introduced to reflect mathematical and policy constraints that are crucial to distributed network operations in power systems.
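A minimal sketch of the centrally coordinated recursion described above follows; the market sizes and prices are hypothetical, and using expected spot prices makes the recursion deterministic, echoing the deterministic special case the abstract mentions:

```python
import numpy as np

# Each hour, irreversibly commit some remaining transfer capability to
# long-term bilateral contracts; the rest is offered on the spot market.
CAPACITY, HORIZON = 10, 24                        # MW, hourly stages (assumed)
FORWARD = 30.0                                    # $/MWh bilateral price (assumed)
rng = np.random.default_rng(1)
exp_spot = rng.uniform(20.0, 45.0, HORIZON)       # E[spot price] per hour (assumed)

V = np.zeros(CAPACITY + 1)                        # value-to-go vs. uncommitted MW
policy = np.zeros((HORIZON, CAPACITY + 1), dtype=int)
for t in reversed(range(HORIZON)):
    V_new = np.empty_like(V)
    for c in range(CAPACITY + 1):                 # c = MW not yet under contract
        rewards = [FORWARD * (CAPACITY - c + a) + exp_spot[t] * (c - a) + V[c - a]
                   for a in range(c + 1)]         # a = new MW committed now
        policy[t, c] = int(np.argmax(rewards))
        V_new[c] = max(rewards)
    V = V_new
print("hour-0 optimal new commitments per uncommitted level:", policy[0])
```

The curse of dimensionality the abstract describes appears as soon as the scalar state c becomes a vector of line flows across a real network.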
Search for binary KBOs using images from the deep ecliptic survey and Magellan telescopes
A method is developed for examining image frames containing Kuiper Belt objects and classifying each KBO as a binary candidate or not. This method uses an elliptical point-spread function fitting technique, relying on the fact that a binary KBO system will appear in a telescope image as an elongated source, with more or less elongation depending on the separation distance and the relative intensity of the two components. Mosaic images of forty-five Kuiper Belt objects from the Deep Ecliptic Survey were tested. Of these, four were binary candidates, twelve others were possible candidates, twenty-one were circular within detection limits, and seven lacked the consistent set of field stars necessary for a determination. Observations were planned for the nights of April 8-11, 2002 with the Magellan I telescope. However, due to unfavorable observing conditions during the three nights of the run, no good images of the ten candidate binaries and possible candidates visible at the time could be obtained. This methodology of image analysis should continue to be applied to new KBO images to identify additional binary candidates. Imaging of the candidates selected through this thesis work will be attempted during future Magellan observing runs, particularly in June 2002.
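A minimal version of the elongation test described above (synthetic data and an arbitrary threshold, purely for illustration) fits an elliptical Gaussian PSF and inspects the axis ratio:

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta, bg):
    x, y = coords
    a = np.cos(theta)**2 / (2*sx**2) + np.sin(theta)**2 / (2*sy**2)
    b = -np.sin(2*theta) / (4*sx**2) + np.sin(2*theta) / (4*sy**2)
    c = np.sin(theta)**2 / (2*sx**2) + np.cos(theta)**2 / (2*sy**2)
    return (amp * np.exp(-(a*(x-x0)**2 + 2*b*(x-x0)*(y-y0) + c*(y-y0)**2)) + bg).ravel()

y, x = np.mgrid[0:25, 0:25]
rng = np.random.default_rng(2)
truth = elliptical_gaussian((x, y), 1.0, 12, 12, 2.6, 1.8, 0.4, 0.0)  # elongated blend
data = truth + rng.normal(0.0, 0.02, truth.size)

popt, _ = curve_fit(elliptical_gaussian, (x, y), data,
                    p0=[1.0, 12, 12, 2.0, 2.0, 0.0, 0.0])
sx, sy = abs(popt[3]), abs(popt[4])
ratio = max(sx, sy) / min(sx, sy)
print(f"axis ratio = {ratio:.2f} ->", "binary candidate" if ratio > 1.2 else "circular")
```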
AE in hydraulic fracturing of Barre granite
The purpose of this work is to observe acoustic emissions (AE) generated by laboratory-scale hydraulic fracturing of Barre granite specimens with single or double flaw geometry. The scope of this work covers the experimental setup and subsequent analyses of these acoustic emissions, which in essence are elastic waves generated by displacements occurring within the rock specimen. Acoustic emissions can be analysed in a number of ways, either individually or grouped into events when several emissions arrive together. Individual emissions can be analysed for their amplitude, energy, or frequency content, while the source location and source mechanism can be inferred from events. The AE data are analysed in conjunction with water pressure, high-resolution images, and high-speed video taken during the experiment. Section 3 of this work outlines the selection and subsequent modification of the AE acquisition system, with a specific focus on capturing AE at the end of each experiment in order to compare results to high-speed video. This section describes the initial equipment selection, as well as initial experiments in which it was noted that crucial data were missing around the time of the fracture event. This issue was largely resolved by modifying the system parameters and upgrading the PC supporting the AE acquisition cards. Section 4 of this work describes the analysis of one experiment, providing an in-depth, start-to-finish account of the nature of acoustic emissions at different phases of the experiment. This section also considers all of the hydraulic fracture experiments performed at different vertical loads and specimen flaw geometries, and draws some tentative conclusions regarding hydraulic fracturing in granite.
Development and evaluation of AGNI : a distributed netgraph data collection and analysis tool
AGNI is a tool developed to facilitate the analysis of communication networks between people in large organizations. Determining patterns of communication within organizations is critical to the analysis of the effectiveness of their structure. Until recently, large organizations presented a special problem: the copious amounts of data that had to be analyzed made the process slow and tedious. AGNI makes the job easier by automating many of the tasks involved in this process. It provides a user-friendly graphical environment in which data can be collected and analyzed. It also provides an easy way to display the data in a graphical format so that it is easy to visualize.
An investigation of nugget formation and simulation in resistance spot welding
Resistance spot welding is an important part of the automotive manufacturing industry. Today's automobiles typically contain five thousand or more welds. Spot welding is attractive to the industry for its speed and relative simplicity; however, it is not without its disadvantages. Current spot welding technology relies on volumes of empirical data to set the welding parameters. Often these data are not sufficient to ensure that a nugget of sufficient size is formed without a splash occurring. Complicating the matter further is the industry's increased use of coated steel: the chemical reaction of the coatings with the electrodes causes greater variations in the nugget size. This study seeks to characterize the nugget formation patterns of spot welding for a variety of welding materials and welding conditions, specifically for coated steels welded over long periods with the same electrodes. The study also seeks to relate a small set of parameters monitored during welding to the accurate prediction of nugget size and splash occurrence. Welding current and voltage are identified as the key parameters of interest and are used as input to a numerical simulation that predicts nugget diameter. A comparison of the simulated nugget diameters to actual diameters obtained experimentally shows good agreement between the two values. The simulation, however, uses a finite difference method to obtain the nugget diameter. This method requires extensive calculations that cannot be completed in the normal welding time. Therefore, a new method of splash prediction has been investigated using the mean temperature of the workpiece. The mean temperature is obtained from a heat balance model of the workpiece. The heat balance model is advantageous compared to the finite difference simulation because its calculation time is short enough to be carried out during the welding process. A comparison of the maximum mean temperature and the experimental nugget diameters shows that the mean temperature is capable of predicting nugget diameter. This correlation indicates that the mean temperature value can serve as a splash prediction parameter.
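A rough sketch of the heat-balance approach described above (generic parameter values, not those of the thesis) shows why it is fast enough to run during the weld:

```python
# Lumped heat balance: mean workpiece temperature from Joule heating minus
# losses. All parameter values are illustrative assumptions.
dt = 1e-4                  # time step (s)
t_weld = 0.2               # weld time (s), roughly 10-12 mains cycles
mass, cp = 2.0e-3, 470.0   # heated mass (kg) and specific heat of steel (J/kg.K)
h_loss = 4.0               # lumped loss coefficient to surroundings (W/K)
T_amb = 25.0

I = 8000.0                 # monitored RMS welding current (A), held constant here
R = 1.0e-4                 # dynamic contact resistance (ohm), from monitored V and I

T, T_max = T_amb, T_amb
for _ in range(int(t_weld / dt)):
    q_in = I**2 * R                       # Joule heating from monitored signals
    q_out = h_loss * (T - T_amb)          # conduction/convection losses, lumped
    T += (q_in - q_out) * dt / (mass * cp)
    T_max = max(T_max, T)
print(f"maximum mean temperature: {T_max:.0f} C")  # compared against a splash threshold
```

The loop is a few thousand scalar updates, versus a full finite-difference field solve, which is the speed advantage the abstract emphasizes.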
The paleoceanography of the Bering Sea during the last glacial cycle
In this thesis, I present high-resolution stable-isotope and planktonic-fauna records from Bering Sea sediment cores, spanning the time period from 50,000 years ago to the present. During Marine Isotope Stage 3 (MIS3), at 30-20 ky BP (kiloyears before present), in a core from 1467 m water depth near Umnak Plateau, there were episodic occurrences of diagenetic carbonate minerals with very low δ¹³C (−22.4‰), high δ¹⁸O (6.5‰), and high [Mg]/[Ca], which seem associated with sulfate reduction of organic matter and possibly anaerobic oxidation of methane. The episodes lasted less than 1000 years and were spaced about 1000 years apart. During MIS3, at 55-20 ky BP, in a core from 2209 m water depth on Bowers Ridge, N. pachyderma (s.) and Uvigerina δ¹⁸O and δ¹³C show no coherent variability on millennial time scales. Bering Sea sediments are dysoxic or laminated during the deglaciation. A high-sedimentation-rate core (200 cm/ky) from 1132 m on the Bering Slope is laminated during the Bølling warm phase, the Allerød warm phase, and the early Holocene, and the ages of its lithological transitions agree with the ages of those climate events in Greenland (GISP2) to well within the uncertainty of the age models. The subsurface distribution of radiocarbon was estimated from a compilation of published and unpublished North Pacific benthic-planktonic ¹⁴C measurements (475-2700 m water depth). There was no consistent change in ¹⁴C profiles between the present and the Last Glacial Maximum, the Bølling-Allerød, or the Younger Dryas cold phase. N. pachyderma (s.) δ¹⁸O in the Bering Slope core decreases rapidly (in less than 220 y) by 0.7-0.8‰ at the onset of the Bølling and at the end of the Younger Dryas. These isotopic shifts are accompanied by transient decreases in the relative abundance of N. pachyderma (s.), suggesting that the isotopic events are transient warmings and sustained freshenings.
Determining articulator configuration in voiced stop consonants by matching time-domain patterns in pitch periods
In this thesis I am concerned with linking the observed speech signal to the configuration of articulators. Due to the potentially rapid motion of the articulators, the speech signal can be highly non-stationary. The typical linear analysis techniques that assume quasi-stationarity may not have sufficient time-frequency resolution to determine the place of articulation. I argue that the traditional low- and high-level primitives of speech processing, frequency and phonemes, are inadequate and should be replaced by a representation with three layers: 1. short pitch-period resonances and other spatio-temporal patterns; 2. articulator configuration trajectories; 3. syllables. The patterns indicate articulator configuration trajectories (how the tongue, jaws, etc. are moving), which are interpreted as syllables and words. My patterns are an alternative to frequency. I use short time-domain features of the sound waveform, which can be extracted from each vowel pitch-period pattern, to identify the positions of the articulators with high reliability. These features are important because, by capitalizing on detailed measurements within a single pitch period, the rapid articulator movements can be tracked. No linear signal processing approach can achieve the combination of sensitivity to short-term changes and measurement accuracy resulting from these nonlinear techniques. The measurements I use are neurophysiologically plausible: the auditory system could be using similar methods. I have demonstrated this approach by constructing a robust technique for categorizing the English voiced stops as the consonants B, D, or G based on the vocalic portions of their releases. The classification recognizes 93.5%, 81.8%, and 86.1% of the b, d, and g to /ae/ transitions, with false positive rates of 2.9%, 8.7%, and 2.6%, respectively.
Viral delivery of recombinant GH to rescue effects of chronic stress on HIP learning
Chronic stress has been linked to variation in gene regulation in the hippocampus (HIP), among other areas. These changes lead to cytoskeletal and volumetric rearrangements in various nuclei of the central nervous system and are thought to contribute to several stress-sensitive disorders. One gene that has been shown to be downregulated in the HIP in response to stress is somatotropin, colloquially known as growth hormone (GH). These experiments were conducted to develop a novel assay for the examination of working memory in rats, to explore the nature of stress-induced impairment of hippocampal function, and to determine whether infusion of a modified herpes simplex virus (HSV) carrying recombinant rodent GH would be sufficient to restore normal hippocampal function. After 21 days of chronic immobilization stress (CIS), animals received bilateral infusions into the dorsal HIP of 2 µl of HSV carrying either GH with green fluorescent protein (GFP) or GFP only. On the second day following the infusion, the animals received trace conditioning, a HIP-dependent task, with five tone-shock pairings of a 16-second tone followed by a 30-second trace interval terminating in a 1-second, 0.85-milliamp footshock. An inter-trial interval of 3 minutes separated the tone-shock pairings. The following day the animals were tested for fear of the context and for fear of the tone in a novel context, measured by the amount of time the animal spent freezing. Using this criterion, stressed animals that received the control vector were less likely to freeze when presented with the tone, indicating an impairment of hippocampal function. Viral-mediated overexpression of GH in the dorsal HIP was able to reverse the CIS-related impairment in hippocampal function. ELISA was used to verify the expression of GH from the infused vector. These experiments may yield future directions of investigation for stress-based disorders.
Exploring the wireless sensor node tradespace within Structural Health Monitoring
Historically, Structural Health Monitoring (SHM) involved visually or acoustically observing a structure; if damage was detected, remedial action was undertaken to repair or replace it. For example, as early as 6,500 BC, potters were known to listen for audible sounds during the cooling of their ceramics, signifying structural failure. In 1864 the UK parliament legislated for dam monitoring after a dam failure led to the deaths of 254 people. The Golden Gate and Bay Bridges in San Francisco were monitored by Dean S. Carder in 1937 to determine "the probabilities of damage due to resonance" during an earthquake. Given the technological limitations of the last century, the predominant focus of SHM has been on identifying and understanding the global modal properties of a structure. However, the promise of SHM is the detection of any damage to infrastructure at the earliest possible moment using an array of sensors and actuators. To achieve this goal, not only global but also local facets of the structure must be monitored. If this promise is realized, it will be possible to design bridges closer to their tolerances, to extend their operational lives, and to switch servicing to more cost-effective condition-based maintenance. Such changes will reduce construction and maintenance costs while still providing the same level of service. This thesis explores the wireless sensor node tradespace with the specific intent of delving into the areas limiting large-scale, high-density, localized coverage in the structural health monitoring of bridges.
How can a Chinese LCC airline become successful and profitable
This thesis discusses improvements Chinese low-cost carriers (LCCs) could make in order to become as profitable and successful as their counterparts in Europe and the United States. China is Asia's latest LCC market and has accelerated its pace in developing LCCs since Chinese authorities published the "Guidance on Promoting the Development of Low-Cost Airlines" at the end of 2013. There are currently seven LCCs in China, including Spring Airlines, an LCC established in 2004, along with six LCCs newly established in response to the published Guidance. The newly established six have followed many practices adopted by Spring Airlines, which is seen as a role model for the Chinese LCC market. Spring Airlines applies sound management practices to control its costs, producing good profitability. As the Guidance is implemented by Chinese authorities over the next few years, many costs that were previously uncontrollable, such as aircraft ownership, crew, and airport fees, could be cut further. While awaiting positive news from civil authorities, Spring Airlines and other Chinese LCCs could begin work on improvements. From the perspective of cost control, Spring Airlines could reduce its labor costs by decreasing its employee-to-aircraft ratio. In terms of increasing revenue, Spring Airlines could increase charges on excess baggage and seat selection. It could also expand into various other ancillary services, such as in-flight Wi-Fi, to increase revenues.
In pursuit of sound
Architectural tools are built around visualizing our environment; however, it is sound that paints the most accurate picture of our experiences. A glass wall feels more constricting than an opaque sheet, because when sound can reach our ears, our worlds are opened up. It is time that we leverage the technology that gives us so much insight into the science of sound, and start designing architectural experiences that communicate visually what we understand sonically. Historically we have relied on a known quantity of sound in order to generate space. Pythagoras unified specific rules of harmony and proportion from sound in order to determine guidelines for pleasant spaces. Years later, Xenakis composed a musical score that informed the constructed surface of the Philips Pavilion. Both were pioneers of sonic architecture, and both pushed the technology of sound design. This thesis advances the theory of sound architecture by focusing on the smallest component of sound, the frequency, and translating that into the smallest component of form, the gradient. Just as frequencies layer on one another to create an entire sonic composition, so must the gradients blend together to bring architecture into being. The invitation to explore sonic movements as architectural experiences comes from the success of these gradients in conveying imagined spaces within a flat image. It is through the production and implementation of this image that the architect can seek new control over visual forms that capture the ears as well as the eyes.
A transit-timing variation study of the extrasolar planet TrES-3
Portable Occultation, Eclipse, and Transit System (POETS) detectors [7] mounted on the Wallace Astrophysical Observatory (WAO) 0.8 m and NASA Infrared Telescope Facility (IRTF) 3 m telescopes are used to observe five stellar transit events of the extrasolar planet TrES-3 [5]. Model light curves are fit to the five data sets, and transit midtimes are determined. Midtimes obtained in this study, along with midtimes reported by Sozzetti et al. (2008) [8], are compared to the ephemeris of the planet, and the transit timing variation (TTV) of each midtime is calculated. Based on these data, the presence of a third body in the TrES-3 system cannot be determined. The mass and period of a hypothetical perturbing body are calculated for several illustrative cases.
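For reference, the TTV of each midtime is simply the observed-minus-calculated residual against a linear ephemeris:

```latex
% O-C residual at epoch E, given reference midtime T_0 and period P:
\mathrm{TTV}(E) = t_{\mathrm{obs}}(E) - \left(T_0 + E\,P\right)
```

A periodic pattern in these residuals, rather than scatter, is what would indicate a perturbing third body.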
Estimation of potential aircraft fuel burn reduction in cruise via speed and altitude optimization strategies
Environmental performance has become a dominant theme in all transportation sectors. As scientific evidence for global climate change mounts, social and political pressure to reduce fuel burn and CO₂ emissions has increased accordingly, especially in the rapidly growing aviation industry. Operational improvements offer the ability to increase the performance of any aircraft immediately, by simply changing how the aircraft is flown. The cruise phase represents the largest portion of flight, and correspondingly the largest opportunity for fuel burn reduction. This research focuses on the potential efficiency benefits that can be achieved by improving the cruise speed and altitude profiles operated by flights today. Speed and altitude are closely linked with aircraft performance, so optimizing these profiles offers significant fuel burn savings. Unlike lateral route optimization, which simply attempts to minimize the distance flown, speed and altitude changes promise to increase the efficiency of aircraft throughout the entire flight. Flight data were collected for 257 flights during one day of domestic US operations. A process was developed to calculate the cruise fuel burn of each selected flight, based on aircraft performance data obtained from Piano-X and atmospheric data from NOAA. Improved speed and altitude profiles were then generated for each flight, representing various levels of optimization. Optimal cruise climbs and step climbs of 1,000 and 2,000 ft were analyzed, along with optimal and LRC speed profiles. Results showed that a maximum fuel burn reduction of 3.5% is possible in cruise given complete altitude and speed optimization; this represents a 2.6% fuel reduction system-wide, corresponding to roughly 300 million gallons of jet fuel and 3.2 million tons of CO₂ saved annually. Flights showed a larger potential to improve speed performance, with nearly 2.4% savings possible from speed optimization compared to 1.5% for altitude optimization. Few barriers exist to some of the strategies, such as step climbs and lower speeds, making them attractive in the near term. As barriers are minimized, speed and altitude trajectory enhancements promise to improve the environmental performance of the aviation industry with relative ease.
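As a consistency check on the reported totals (our arithmetic, using the commonly cited emission factor of roughly 9.57 kg of CO₂ per gallon of jet fuel):

```latex
3\times10^{8}\ \text{gal} \times 9.57\ \tfrac{\text{kg CO}_2}{\text{gal}}
  \approx 2.9\times10^{9}\ \text{kg} \approx 3.2\ \text{million tons of CO}_2
```

This is why the fuel figure is given above as 300 million gallons: a figure of 300 billion gallons would exceed total US jet fuel consumption by more than an order of magnitude and would not match the stated CO₂ savings.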
Experimental and theoretical characterization of a Hall thruster plume
Despite the considerable flight heritage of the Hall thruster, the interaction of its plume with the spacecraft remains an important integration issue. Because in-flight data fully characterizing the plume in the space environment are currently unavailable, laboratory measurements are often used to understand plasma expansion and thereby minimize adverse plume-spacecraft interactions. However, experimental measurements obtained in ground facilities do not properly capture the wide-angle plume effects most important for plume-spacecraft interactions because of the high background pressure of the laboratory environment. This research describes a method to determine the in-orbit plume divergence of a Hall thruster from laboratory measurements and characterizes the plasma properties of the in-orbit plume. Plume measurements were taken with a Faraday probe and a Retarding Potential Analyzer at various background pressures to correlate changes in current density and ion energy distribution with changes in pressure. Results showed that current density increases linearly with background pressure at any given angle. This linear relationship was used to extrapolate laboratory measurements to zero background pressure, the in-orbit condition. Measurements from the Faraday probe and the Retarding Potential Analyzer were compared to ensure consistency. The effect of discharge voltage on plume divergence was also investigated; measurements from both probes revealed that plume divergence decreases with an increase in discharge voltage. Hall thruster plume expansion was also characterized using a numerical plume simulation. Comparison of plume simulation results for in-orbit conditions to the extrapolated current density at zero pressure demonstrated good agreement.
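The zero-pressure extrapolation described above amounts to a per-angle linear fit; a minimal sketch with made-up numbers:

```python
import numpy as np

# At one probe angle, fit current density vs. facility backpressure with a
# line and keep the intercept as the in-orbit (zero-pressure) estimate.
pressures = np.array([5e-6, 1e-5, 2e-5, 3e-5])        # torr (assumed)
j_60deg = np.array([0.021, 0.034, 0.060, 0.085])      # mA/cm^2 (assumed)

slope, intercept = np.polyfit(pressures, j_60deg, deg=1)
print(f"extrapolated in-orbit current density at 60 deg: {intercept:.4f} mA/cm^2")
```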
Boom
This thesis is about my relationship to technology through the medium of my body. By implication it is about how our culture and society view and interact with technology's various manifestations. I use my voice as the medium of this exploration. Boom is a sound and video insertion embodying and re-presenting my vocal arguments and mergings with the machines of a cement pour at the Big Dig in Boston in the spring of the year 2000. Boom offers noise, physical auditive immersion, and hopefully a provocative and meaningful perspective on relating with machines. It creates temptations and in draughts of air around the metaphysical ideas it conjures with the humor and poetry of anarchy. By merging and falling out, struggling and capturing, losing and regaining, the machines and I are negotiating our relationship, our take on each other, our roles, our positions relative to each other. Each machine becomes an extension of my body, as I am resonating within its cavities and it is resonating within me. There is a constant arbitration of who is driving whom, my voice driving the machine's motor and/or the machine's vibrations moving my body, feelings, and perceptions of self within space. As I follow a machine's vibratory lead, try to keep up, to match, to catch, through matching vocalizations, I access previously unacknowledged places within myself. Something like the mantras of other cultures - magical brutal mysterious consonance expressed in broad daylight. Communication occurs through the correspondence of internal and external vibrations. Emanating and absorbing. The tones have an acupunctural precision, able to vibrate certain organs, interstitial tissues, cells, thereby accessing the body's warehouses. The performances of myself with the construction machines in the city throw new perspectives on how we conceive of not only the gigantic machines in our environments, but of other elements of technology as well, such as the intimate integration with small electronic devices being cultivated everywhere within our reach.
Behavior of a silkworm silk fiber web structure under wind load
Optimized by nature over millions of years, silk is one of the strongest biomaterials, with outstanding mechanical properties: it is both extensible and tough in order to ensure specific functions. In particular, the stiffness of protein-based Bombyx mori silkworm silk originates from the crystalline regions of the semi-crystalline fibroin, and its extensibility from the length hidden within the amorphous regions. The silk fiber is coated with sericin, which acts as a glue connecting fibers together and as a matrix in the three-dimensional nonwoven multi-layer composite structure of the cocoon. These properties can be engineered and enhanced with forced reeling: fast-spun silks are stiffer and less extensible than slowly reeled silk. For this study, two-dimensional single-cocoon-layer webs were created by silkworms and tested under an increasing wind load until failure, and the deflections were recorded. To complement the experimental results, the web structure is generated in two different models: a straight-fiber web model and a wavy-fiber web model. Both models are studied under constant wind load for four types of fibers with different reeling speeds and thus different mechanical properties. These tests indicate that the deflection increases with wind load in both the experiments and the simulations, but also that webs composed of stiffer, less extensible fibers are not necessarily stiffer and less extensible themselves, because of the high redundancy and randomness of the web structure. The divergence between the experimental and simulation results suggests the need to improve the models to bring them into closer accordance with the real webs.
Detection using envelope following responses and impacts on central auditory coding
Nearly all information about the acoustic environment is conveyed to the brain by auditory nerve (AN) fibers. While essential for hearing, these fibers may also be the most vulnerable link in the auditory pathway: moderate noise exposure can cause loss of AN fibers without causing hair cell damage or permanent threshold shift. This neuropathy is undetectable by standard clinical examination, but post-mortem evidence suggests that it is widespread in humans. Its impact on suprathreshold hearing ability is likely profound, but is not well understood. An essential tool for evaluating the impact of neuropathy is a non-invasive test useable in humans. Since noise-induced neuropathy is selective for high-threshold AN fibers, where phase locking to envelopes is particularly strong, we hypothesized that the envelope following response (EFR) might be a more sensitive measure of neuropathy than the more traditional auditory brainstem response (ABR). We compared ABRs and EFRs in mice following a neuropathic noise exposure. Changes to EFRs were more robust: the variance was smaller, thus inter-group differences were clearer. Neuropathy may be the root cause of a number of deficits that can occur in listeners with normal audiograms, such as speech discrimination in noise and ability to use envelope cues. We searched for neural correlates of these deficits in the mouse auditory midbrain following exposure. Consistent with reductions in EFRs, synchronization to envelopes was impaired. Neural detectability of tones in background noise was impaired, but only for cases when noise level changed every 600 milliseconds. When noise level changed every minute, responses were equal to those of unexposed mice, implicating changes to adaptation. In quiet, tone-evoked rate-level functions were steeper, indicating that neuropathy may initiate a compensatory response in the central auditory system leading to the genesis of hyperacusis. In sum, we found compensatory effects on coding in the midbrain beyond the simple direct effects expected by peripheral neuropathy.
Statistical nano-chemo-mechanical assessment of shale by wave dispersive spectroscopy and nanoindentation
Shale is a common type of sedimentary rock formed by clay particles and silt inclusions, and, in some cases, organic matter. Typically, shale formations serve as geological caps for hydrocarbon reservoirs. More recently, various shale formations have been identified as prolific sources of oil and natural gas and as host lithologies for the disposal of CO₂ and nuclear waste. Despite its abundance, the characterization of shale rocks remains a challenging task due to their complex chemistry, heterogeneous microstructure, and multiscale mechanical behaviors. This thesis aims at establishing the link between the composition and mechanics of shale materials at grain scales. A comprehensive experimental program forms the basis for the characterization of the chemical composition and mechanical properties of shale at micrometer and sub-micrometer length scales. The chemical assessment was conducted through a novel experimental design involving grids of wave dispersive spectroscopy (WDS) spot analyses and statistical clustering of the chemical data generated by the experiments. This so-called statistical grid WDS technique was coupled with grid nanoindentation experiments as a means to assess the nano-chemo-mechanics of shale rocks. The similar microvolumes probed by both methods ensure a direct relation between the local chemistry and the mechanical response of shale materials. The results of this investigation showed that the grid WDS technique provides quantitative means to determine the chemistries of silt-size inclusions (mainly quartz and feldspars) and the clay matrix. The mineralogy assessments obtained by grid WDS analysis were validated through comparisons with results from X-ray image analysis and X-ray diffraction (XRD) experiments. The direct coupling of the grid WDS and indentation techniques revealed that the porous clay phase, previously inferred from the mechanistic interpretation of indentation experiments, corresponds to the response of clay minerals. The coupling technique also showed that clay minerals located near silt inclusions exhibit enhanced mechanical properties due to a composite action sensed by nanoindentation. The new understanding developed in this thesis provides valuable insight into the chemomechanics of shale at nano- and microscales. This coupled assessment represents valuable information for the development of predictive models for shale materials that consider the intricate links among composition, microstructure, and mechanical performance.
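The clustering step described above can be pictured with a toy version (synthetic oxide fractions standing in for real spectrometer grids; the thesis's clustering algorithm may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster per-spot oxide weight fractions to separate the clay matrix from
# silt-size inclusions such as quartz. Compositions below are invented.
rng = np.random.default_rng(3)
clay = rng.normal([55.0, 22.0, 8.0], 3.0, size=(300, 3))   # SiO2, Al2O3, K2O wt%
quartz = rng.normal([97.0, 1.0, 0.2], 1.0, size=(100, 3))
spots = np.vstack([clay, quartz])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spots)
for k in range(2):
    members = spots[labels == k]
    print(f"phase {k}: n={len(members)}, mean (SiO2, Al2O3, K2O) wt% =",
          members.mean(axis=0).round(1))
```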
The form of clean energy neighborhoods : how it is guided and how it could be
The subject of "clean energy city" has attained increased attention in recent year. However, almost all studies to date about "clean energy" are either at the building scale or the regional scale and little touches the real estate development scale, or in other words the neighborhood scale. The research project "Making the 'Clean Energy City' in China" - funded by Energy Foundation, China - is the first attempt to dress the relationship between neighborhood form and energy consumption. As part of the research, my thesis proposes the framework to address the energy-form relationship in in-home operational energy use, to be further developed in the future stages of the research project. The thesis poses two questions, how does neighborhood form affect in-home operational energy consumption and how do we guide designers and developers on the design of neighborhood form in order to reduce in-home operational energy consumption? The thesis approaches these questions through a review of existing energy-related simulation tools including building energy analysis tools, microclimate analysis tools and tools that address energy concerns at the neighborhood scale. The thesis proposes to use a simulation approach based on prototypes and their variations at the cluster scale - a form descriptive system developed by the research project - as the direction to establish this form-energy relationship as well as to convey this relationship to designers graphically. Finally as a demo, the thesis examines the relationship between operational energy use and neighborhood form under Prototype "Small Perimeter Block" with DeST, a building simulation tool that can also be applied to a cluster of buildings.
Comparison of receiver function deconvolution techniques
Receiver function (RF) techniques are commonly used by geophysicists to image discontinuities and estimate layer thicknesses within the crust and upper mantle. A receiver function is a time-series record of the P-to-S (Ps) teleseismic wave conversions within the earth and can be viewed as the Earth's impulse response. An RF is extracted from seismic data by deconvolving an estimate of the source wavelet from the observed trace. Due to the presence of noise in the data, the deconvolution is unstable and must be regularized. Six deconvolution techniques are evaluated and compared based on their performance on synthetic data sets. These methods approach the deconvolution problem in either the frequency or the time domain; some are based on iterative least-squares inversions, while others invert the problem directly. The methods also vary in their underlying assumptions concerning the noise distribution of the data set, their level of automation, and the degree of objectivity used in deriving or choosing the regularization parameter. The results from this study provide insight into the situations for which each deconvolution method is most reliable and appropriate.
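Of the frequency-domain approaches compared in studies like this, water-level deconvolution is a classic direct inverse; a self-contained sketch on synthetic traces (our construction, not code from the thesis):

```python
import numpy as np

def waterlevel_rf(radial, vertical, dt, level=0.01, gauss=2.5):
    """Receiver function via water-level spectral division plus a Gaussian low-pass."""
    n = 2 * len(radial)
    R, Z = np.fft.rfft(radial, n), np.fft.rfft(vertical, n)
    denom = (Z * np.conj(Z)).real
    denom = np.maximum(denom, level * denom.max())        # fill spectral holes
    f = np.fft.rfftfreq(n, dt)
    G = np.exp(-(2 * np.pi * f) ** 2 / (4 * gauss ** 2))  # stabilizing low-pass
    return np.fft.irfft(R * np.conj(Z) / denom * G, n)[: len(radial)]

dt = 0.05
t = np.arange(0, 60, dt)
src = np.exp(-((t - 5) / 0.5) ** 2)                   # parent P pulse on the vertical
rad = 0.1 * src + 0.4 * np.roll(src, int(4.0 / dt))   # direct P plus a Ps arrival 4 s later
rf = waterlevel_rf(rad, src, dt)
print(f"largest RF peak at {np.argmax(rf) * dt:.2f} s lag (input Ps delay was 4.00 s)")
```

The water-level parameter plays the role of the regularization parameter whose choice, as noted above, ranges from objective to ad hoc across methods.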
Planning Boston for the future worker
As mobile technology continues to decentralize the workplace and employees become liberated from their desks, the role of the office building within its urban context has entered a state of flux. Accelerated by the burgeoning sharing economy and increased telecommuting, cities must start to incorporate, and even prioritize, productive workplace geographies over large swaths of land. This thesis sets out to take on that challenge, and re-imagines the role of the office building within the city by adapting the emerging model of co-working as an urban device. On a regional scale this proposal looks at under-utilized areas within Boston that could be developed with distributed work in mind. On a neighborhood scale, the project speculates on how productive overlaps between a neighborhood co-working space and public amenities, such as transportation systems and parks, could create a new urban typology that enhances the lives of citizens.
Single-photon frequency upconversion for long-distance quantum teleportation and communication
Entanglement generation, single-photon detection, and frequency translation that preserves the polarization quantum state of the photons are essential technologies for long-distance quantum communication protocols. This thesis investigates the application of polarization entanglement to quantum communication, including frequency upconversion, photon-counting detection, and photon-pair and entanglement generation. We demonstrate a near-unity-efficiency frequency conversion scheme that allows fast and efficient photon counting at wavelengths in the low-loss fiber-optic and atmospheric transmission band near 1.55 µm. This upconverter, which is polarization-selective, is useful for classical as well as quantum optical communication. We investigate several schemes that allow frequency translation of polarization-entangled photons generated via spontaneous parametric downconversion in second-order nonlinear crystals. We demonstrate upconversion from 1.56 µm to 0.633 µm that preserves the polarization state of an arbitrarily polarized input. The polarization-insensitive upconverter uses bidirectional sum-frequency generation in bulk periodically poled lithium niobate and a Michelson interferometer to stabilize the phase. Using this bidirectional upconversion technique, entangled photons produced in a periodically poled parametric downconverter can be translated to a different wavelength with preservation of their polarization state. We discuss the implications of these results for quantum information processing.
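For orientation, the wavelengths quoted above are tied together by energy conservation in sum-frequency generation; the implied pump wavelength below is our inference, not a value stated in the abstract:

```latex
% Energy conservation in sum-frequency generation:
\frac{1}{\lambda_{\mathrm{SFG}}} = \frac{1}{\lambda_{\mathrm{signal}}} + \frac{1}{\lambda_{\mathrm{pump}}}
% For 1.56\,\mu\mathrm{m} \to 0.633\,\mu\mathrm{m}, this implies a pump near
% \left(1/0.633 - 1/1.56\right)^{-1} \approx 1.06\,\mu\mathrm{m}.
```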