Analyzing South Korea's counterinvasion capability against North Korea
Assuming there is another North Korean invasion, could the South Koreans counterinvade North Korea and prevail even without the United States' assistance? This paper studies the possibility of a South Korean counterinvasion of North Korea by examining the qualitative combat dynamics and performing a formal campaign analysis based on the Korean peninsula's conventional military balance. The study first analyzes the South Korean defense against a North Korean invasion, then examines South Korea's likely counterinvasion scenarios and assesses their chances of success. These scenarios vary with North Korea's likely courses of action once its offensive fails, depending on whether the North Koreans retreat to the military demarcation line or hold their positions within South Korean territory. According to this paper's analysis, South Korea is capable of counterinvading North Korea in all the scenarios considered. South Korea possesses a qualitatively superior force with better readiness and logistics powered by a stronger economy, while the North Koreans lack the force effectiveness necessary to carry out their theory of victory. First, the South Korean forces are capable of fending off a North Korean invasion while inflicting severe damage on the North Koreans; second, the South Korean forces would inflict considerable casualties on the North Koreans during their retreat; finally, the South Korean offensive would be capable of breaking through the weakened North Korean defense. This study makes several contributions. First, it examines the puzzle of a South Korean counterinvasion, which has been under-discussed despite its political and strategic significance. In doing so, the study presents an opportunity to explain North Korea's recent behavior and the United States' redefinition of its role on the peninsula, thereby increasing our understanding of East Asian security dynamics. Second, by providing an updated survey of the peninsula's conventional balance, this study enhances our knowledge of the two Koreas' strategic capabilities, which have undergone considerable change. Third, this study advances the use of campaign analysis by applying the models in phases with changing parameters. This approach enables us to analyze multi-phase campaigns composed of different dynamics with better accuracy.
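The abstract does not reproduce the campaign models themselves; as a loose illustration of the phased approach it describes, here is a minimal Lanchester-style attrition sketch in which the attrition coefficients change between campaign phases. All force sizes, coefficients, and phase durations are hypothetical, not values from the paper.

```python
# Illustrative sketch only: phased Lanchester-square attrition with parameters
# that change between phases (defense, retreat under pursuit, counteroffensive).
# All coefficients and force sizes below are hypothetical.

def lanchester_phase(blue, red, blue_eff, red_eff, days, dt=0.1):
    """Run one campaign phase of Lanchester-square attrition."""
    for _ in range(int(days / dt)):
        blue_loss = red_eff * red * dt   # losses scale with enemy strength and effectiveness
        red_loss = blue_eff * blue * dt
        blue, red = max(blue - blue_loss, 0.0), max(red - red_loss, 0.0)
    return blue, red

# Hypothetical phases: (blue_eff, red_eff, duration_days)
phases = [
    (0.06, 0.03, 10),  # phase 1: ROK defensive, favorable exchange ratio
    (0.08, 0.02, 5),   # phase 2: DPRK retreat under pursuit
    (0.05, 0.04, 20),  # phase 3: ROK counterinvasion against weakened defense
]

blue, red = 500_000, 900_000  # notional starting strengths
for blue_eff, red_eff, days in phases:
    blue, red = lanchester_phase(blue, red, blue_eff, red_eff, days)
    print(f"after phase: blue={blue:,.0f} red={red:,.0f}")
```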
Celebration of change : an exploration of meaning, form and material on the Chelsea waterfront
Change as a physical and psychological process is the connecting thread of the three themes explored in this thesis. First, the AA and Al-Anon programs are used as a framework for an exploration of spirituality or meaning. The programs are about finding a route to positive change in one's life through a sharing of common life experiences among group members. Second, concrete is used as a primary structural and textural material and as the point of departure for an exploration of all other materials. The integration of material and form throughout the design process was a basic premise. Finally, the strong character of the site, on the waterfront, demanded investigation. The ever-changing influences on the site generated the built form. This document shows the project as a building process and a design process. The introduction clarifies the thematic origins and intentions. A pictorial essay describes the design process. Design production phases narrate the essay and act as a point of reference for a chronological journal.
The effect of intermediate concepts in hierarchical relational learning
The DARPA Bootstrapped Learning project uses relational learners to ladder concepts together to teach a final concept, essentially narrowing the search space at each step. However, there are many ways to structure background knowledge to teach a concept, and it is uncertain how different ways of structuring knowledge affect the accuracy and performance of learning. In this paper, we examine the effect of having intermediate concepts when learning high-level concepts. We used Quinlan's First Order Inductive Learner to learn target selection for a real-time strategy game and performed cross-validation tests with varying levels of intermediate concept support. The results show that including intermediate concepts does not always improve performance and accuracy.
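The paper's FOIL setup and game domain are not reproduced in the abstract; the toy sketch below only illustrates the idea of intermediate concepts factoring a target definition, so that a learner combines already-learned predicates instead of searching for the full conjunction at once. All predicates and thresholds are hypothetical.

```python
# Toy illustration (not the paper's FOIL setup): an intermediate concept
# factors the target definition, shrinking the space of clauses a relational
# learner must search. All predicates and thresholds are hypothetical.

def low_health(unit):          # intermediate concept 1
    return unit["hp"] < 30

def in_range(unit, attacker):  # intermediate concept 2
    return abs(unit["x"] - attacker["x"]) <= attacker["range"]

def good_target_flat(unit, attacker):
    # "Flat" definition: the learner must discover the whole conjunction at once.
    return unit["hp"] < 30 and abs(unit["x"] - attacker["x"]) <= attacker["range"]

def good_target_hierarchical(unit, attacker):
    # Hierarchical definition: the learner only combines already-learned concepts.
    return low_health(unit) and in_range(unit, attacker)

attacker = {"x": 0, "range": 5}
unit = {"hp": 20, "x": 3}
assert good_target_flat(unit, attacker) == good_target_hierarchical(unit, attacker)
```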
The Gamma Intensity Monitor at the Crystal-Barrel-Experiment
This thesis details the motivation, design, construction, and testing of the Gamma Intensity Monitor (GIM) for the Crystal-Barrel-Experiment at the Universität Bonn. The CB-ELSA collaboration studies the baryon excitation spectrum; resonances are produced by exciting nucleons in a polarized target with a linearly or circularly polarized, GeV-order photon beam. The photoproduced decay states are measured by a variety of detectors covering almost 4π of the solid angle about the target. To measure the total cross section of these reactions, the total flux of photons through the target must be known to high accuracy. As the total cross section for nuclear photoproduction is low, counting the photons left unscattered by the target is a sufficiently accurate measurement of this quantity; this is the purpose of the Gamma Intensity Monitor. It is the final detector along the beam path and counts all photons that do not react with the target. The major design parameter is that the detector must consistently count GeV-order photons at 10 MHz. This is accomplished by allowing the gammas to undergo electron-positron pair production within Čerenkov-radiating PbF2 crystals. The Čerenkov light from these highly relativistic lepton pairs is measured with industrial photomultiplier tubes to provide an effective efficiency close to unity. Special bases were built for the photomultipliers to ensure stable signal amplification even at high count rates. Detailed descriptions of the GIM are provided to ensure that its inner workings are completely transparent and to enable efficient operation and maintenance of the detector.
Key challenges to model-based design : distinguishing model confidence from model validation
Model-based design is becoming more prevalent in industry as technology grows more complex while schedules shorten and budgets tighten. Model-based design is a means to substantiate good design under these circumstances. Despite this, organizations often lack confidence in the use of models to make critical decisions. As a consequence, they often invest heavily in expensive test activities that may not yield substantially new or better information. On the other hand, models are often used beyond the bounds within which they had previously been calibrated and validated; their predictions in the new regime may be substantially in error, which can add substantial risk to a program. This thesis seeks to identify factors that cause either of these behaviors. Eight factors emerged as the key variables behind misaligned model confidence. These were found by studying three case studies to set up the problem space, followed by a review of the literature, with emphasis on model validation and assessment processes, to identify remaining gaps. These gaps include the lack of proper model validation processes, limited research from the perspective of the decision-maker, and a poor understanding of the impact of contextual variables surrounding a decision. The impact these eight factors have on model confidence and credibility was tested using a web-based experiment that included a simple model of a catapult and varying contextual details representing the factors. In total, 252 respondents interacted with the model and made a binary decision on a design problem to provide a measure of model confidence. Results from the testing showed that several factors caused an outright change in model confidence. One factor, a representation of model uncertainty, did not result in any difference in model confidence despite support from the literature suggesting otherwise. Findings such as these were used to gain additional insights and recommendations to address the problem of misaligned model confidence. Recommendations included system-level approaches, improved quality of communication, and use of decision analysis techniques. Applying focus in these areas can help to alleviate pressures from the contextual factors involved in the decision-making process. This will allow models to be used more effectively, thereby supporting model-based design efforts.
Performance limits of radio frequency power complementary metal-oxide-semiconductor
Wireless and mobile communication systems have become ubiquitous in our daily life. The need for higher bandwidth and thus higher speed and data rates in wireless communications has prompted the exploration of millimeter-wave frequencies. Some of the applications in this regime include high-speed wireless local area networks and high data rate personal area networks at 60 GHz, automotive collision avoidance radar at 77 GHz, and millimeter-wave imaging at 94 GHz. Most of these applications are cost sensitive and require high levels of integration to reduce system size. The tremendous improvement in the frequency response of state-of-the-art deeply scaled CMOS technologies has made them an ideal candidate for millimeter-wave applications. A few research groups have already demonstrated single-chip CMOS radios at 60 GHz. However, the design of power amplifiers in CMOS still remains a significant challenge because of the low breakdown voltage of deep submicron CMOS technologies. Power levels from 60 GHz power amplifiers have been limited to around 15 dBm with power-added efficiencies in the 10-20% range, despite the use of multiple gain stages and power combining techniques. In this work, we have studied the RF power potential of commercial 65 nm and 45 nm CMOS technologies. We have mapped the frequency, power, and efficiency limitations of these technologies and identified the physical mechanisms responsible for these limitations. We also present a simple analytical model that allows circuit designers to estimate the maximum power obtainable from their designs for a given efficiency. The model uses only the DC bias point and on-resistance of the device as inputs and contains no adjustable parameters. We have demonstrated a record output power density of 210 mW/mm and power-added efficiency in excess of 75% at VDS = 1.1 V and f = 2 GHz on 45 nm CMOS devices. This record power performance was made possible through careful device layout that minimized parasitic resistances and capacitances. Total output power approaching 70 mW was measured on 45 nm CMOS devices by increasing the device width to 640 μm. However, we find that the output power scales non-ideally with device width because of an increase in normalized on-resistance in the wide devices. PAE also decreases with increasing device width because of degradation in the frequency response of the wide devices. Additionally, PAE decreases as the measurement frequency increases, though the output power remains constant with increasing frequency. Small-signal equivalent circuit extractions on these devices suggest that the main reason for the degradation in normalized output power and PAE with increasing device width is the non-ideal scaling of parasitic gate and drain resistances in the wide devices.
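The abstract gives the analytical model's inputs (the DC bias point and the device on-resistance) but not its form; as an illustration only, a common Class-A loadline estimate built from exactly those inputs is sketched below. This is an assumption for illustration, not necessarily the thesis's model.

```latex
% Illustrative Class-A loadline estimate from the DC bias point (V_DC, I_DC)
% and on-resistance R_on; an assumption, not the thesis's stated model.
\begin{aligned}
I_{\max} &\approx 2 I_{DC}, \qquad
V_{\min} \approx I_{\max} R_{on}, \qquad
V_{\max} \approx 2 V_{DC} - V_{\min}, \\
P_{\mathrm{out},\max} &\approx \tfrac{1}{8}\,(V_{\max} - V_{\min})\,I_{\max}
 \;=\; \tfrac{1}{2}\, I_{DC}\,\bigl(V_{DC} - 2 I_{DC} R_{on}\bigr).
\end{aligned}
```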
City growth and community-owned land in Mexico City
Sixteen years after the promulgation of the reforms to Article 27, which regulates land tenure in Mexico, there is consensus among political authorities, public officials, private investors, and scholars that the outcomes have been completely different from what was predicted. In spite of the important changes produced in the legal status, internal organization, and governmental interactions of the agrarian communities, these changes have not translated into a massive privatization of ejido lands, and the incorporation of social land into urban development is far below what was expected. Furthermore, new forms of illegal social land sales emerged as a response to the privatization initiative. In addition to the economic and legal arguments typically used to explain this phenomenon, this research identifies three key factors that also have a strong influence on the ejidos' behavior towards land privatization: the hindering effect of community participation on privatization; the permanence of a clientelistic relationship between ejidos and government; and agrarian communities' cultural attachment to land. These factors reflect the economic, political, and cultural dimensions of the ejidos, something that the ideologues of the reform did not take into account when they defined the mechanisms for land liberalization. Key words: urban expansion, Mexico City, ejidos, Article 27, informal market, regularization, clientelism.
Novel electromagnetic scattering phenomena
Scattering of electromagnetic waves is fundamentally related to the inhomogeneity of a system. This thesis focuses on several theoretical and experimental findings on electromagnetic scattering in contemporary contexts. These results range from scattering off real structures to scattering off synthetic gauge fields. The source of scattering also varies from near-field to far-field excitations. First, we present a general framework for nanoscale electromagnetism with experimental verification based on far-field plasmonic scattering. We also theoretically propose two schemes, featuring thin metallic films and hybrid plasmonic-dielectric nanoresonators respectively, aimed at achieving high radiative efficiency in plasmonics. Second, treating free electrons as a near-field scattering excitation, we derive a universal upper limit to spontaneous free-electron radiation and energy loss, verified by measurements of Smith-Purcell radiation. This upper limit allows us to identify a new regime of radiation operation in which slow electrons are more efficient than fast ones. The limit also exhibits an emission probability divergence, which we show can be physically approached by coupling free electrons to photonic bound states in the continuum. Finally, we discuss the scattering of optical waves off synthetic magnetic fields. Specifically, we describe the synthesis of non-Abelian (non-commutative) gauge fields in real space, enabled by breaking time-reversal symmetry in distinct manners. These synthetic non-Abelian gauge fields enable us to observe the non-Abelian Aharonov-Bohm effect with classical waves and classical fluxes, relevant to classical and quantum topological phenomena.
Structural specificity in coiled coils : a and d position polar residues
Experimental studies were performed to determine the effects of single polar residues at the a and the d positions of a reference coiled coil, GCN4-pVL. The reference coiled coil is very stable in solution and exists as a mixture of dimers and trimers. The placement of single polar residues in the otherwise hydrophobic core of GCN4-pVL has dramatic effects on both stability and oligomeric specificity. The effects vary with regard to both the identity and the position (a vs. d) of the polar substitution. The d position is more sensitive to polar residues. Two residues, asparagine and glutamine, were found to be much more destabilizing when placed at d positions than any residues at the a positions. In addition to the known ability of a single asparagine at the a position to specify coiled-coil dimers, it was found that a single threonine at the d position can specify a coiled-coil trimer. The crystal structures of four coiled coils, with either a single serine or threonine at either the a or the d position, were determined. These structures show that threonine residues tend to form intra-helical hydrogen bonds when packed in the coiled-coil core. Serine residues tend to show more variability in their packing when placed in the core positions. Polar residues can affect the local coiled-coil geometry. The most dramatic effect resulted from serine residues at the d position, for which a local decrease in the supercoil radius of 0.5 Å, centered around the buried serine position, was observed.
Bitcoin : a new way to understand payment systems
Bitcoin has recently attracted substantial attention from a variety of players: media, academia, and regulators. While the price of Bitcoin has continuously and substantially declined since its peak in December 2013, other metrics indicate a more optimistic prospect. The adoption of Bitcoin is increasing rapidly, even in off-line channels, with companies such as Microsoft, Paypal, EBay, Dell, and Expedia now accepting Bitcoin payments. Venture capitalists are avid for Bitcoin-related investment opportunities, such as online wallets and remittance payment systems, among others. Bitcoin has a prominent present as a payment technology and the potential to grow into a relevant alternative to credit cards and bank transfers. However, some features of its configuration will hinder its future growth. In this paper, I explore what Bitcoin is today and how it could be improved.
Experimental visualization of the near-boundary hydrodynamics about fish-like swimming bodies
This thesis examines the near-boundary flow about fish-like swimming bodies. Experiments were performed up to a Reynolds number of 10⁶ using laser Doppler velocimetry and particle imaging techniques. The turbulence in the boundary layers of a waving mat and a swimming robotic fish was investigated. How the undulating motion of the boundary controls both the turbulence production and the boundary layer development is of great interest. Unsteady motions have been shown to be effective in controlling flow. Tokumaru and Dimotakis (1991) demonstrated the control of vortex shedding, and thus the drag on a bluff body, through rotary oscillation of the body at certain frequencies. Similar results of flow control have been seen in fish-like swimming motions. Taneda and Tomonari (1974) illustrated that, for phase speeds greater than the free stream velocity, traveling wave motion of a boundary tends to retard separation and reduce near-wall turbulence. In order to perform experiments on a two-dimensional waving plate, an apparatus was designed for use in the MIT Propeller tunnel, a recirculating water tunnel. It is an eight-link, piston-driven mechanism attached to a neoprene mat in order to create a traveling wave motion down the mat. A correlation between this problem and that of a swimming fish is addressed herein, using visualization results obtained from a study of the MIT RoboTuna. The study of the MIT RoboTuna and a two-dimensional representation of the backbone of the robotic swimming fish was performed to further assess the implications of such motion on drag reduction. PIV experiments with the MIT RoboTuna indicate a laminarisation of the near-boundary flow for swimming cases compared with non-swimming cases along the robot body. Laser Doppler velocimetry (LDV) and PIV experiments were performed.
Understanding program structure and behavior
A large software system usually has structure in it. Several functions work together to accomplish a certain task, and several tasks are grouped together to perform a bigger task. In order to understand this division, one has to consult the documentation or read through the source code. However, the documentation is usually incomplete and outdated, while code inspection is tedious and impractical. Algorithms have been proposed that automatically group functions with similar functionality. In this thesis I will present LogiView, an algorithm that presents an organizational view of the functions. This view will ease the process of understanding the structures in the program, identifying functions with related tasks, and separating the functions into logical groups. I will also present a methodology for analyzing the function names in the program. This method leverages the result of the LogiView algorithm and identifies the names that are most relevant to the functionality of the program. Given a set of programs that are known to have the same functionality, this method extracts the similarity in the function names and builds a dictionary of the names that are semantically related to the functionality.
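The LogiView algorithm itself is not described in the abstract; the sketch below is only a toy illustration of the name-analysis step, in which function names from programs known to share functionality are tokenized and recurring tokens are collected. All program and function names are hypothetical.

```python
# Illustrative sketch of the name-analysis idea (not the LogiView implementation):
# tokenize function names on underscores and camelCase boundaries, then keep
# tokens that recur across programs known to share functionality.
import re
from collections import Counter

def tokens(name):
    parts = re.split(r"_|(?<=[a-z])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]

programs = {  # hypothetical function-name lists from three similar programs
    "editor_a": ["openFile", "save_buffer", "render_text"],
    "editor_b": ["file_open", "buffer_save", "draw_text"],
    "editor_c": ["load_file", "saveDocument", "text_render"],
}

counts = Counter()
for names in programs.values():
    seen = {t for n in names for t in tokens(n)}
    counts.update(seen)  # count each token at most once per program

shared = [t for t, c in counts.items() if c == len(programs)]
print("tokens shared by all programs:", shared)  # e.g. ['file', 'save', 'text']
```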
Integrating physics of assembly in discrete element structures
Architecture is intrinsically the coordination of parts to form a whole, and the detail is the critical point where this coordination is resolved. Between technical and perceptual constraints, details are geometrical solutions and organizational devices that negotiate physics, construction, assembly, materials, fabrication, economy, and aesthetics, all at once. Over centuries, detail formulas have been created, tested, and revised by builders, architects, engineers, and fabricators; collected in catalogs and magazines, they have usually been documented in two-dimensional sections that silence all intervening forces. While masters with knowledge of construction and materials are able to iterate through different possibilities to create novel details, less experienced designers can usually only reproduce standard solutions. In the era of digital design and fabrication, where material and building information can be parametrically linked and massively computed, can we challenge what we can build with a new way of looking at details? This thesis introduces the concept of synced detailing, where conflicting constraints are resolved in the details. As a case study, stability and assemblability are studied on a structurally challenging discretized funicular funnel shell. The goal is to eliminate scaffolding during assembly using only joint details. Finite element (FE) analysis is performed at every step of the assembly sequence to show global and local instability. Local translation freedom (LTF) analysis shows the range of feasible assembly directions. Detailing knowledge is studied and encoded in shape rules to create a detail grammar. Real-time visual feedback on the constraints informs the designer in applying these rules to create joints that satisfy a range of priorities. This method is generalizable to other constraints, allowing architects to create novel solutions informed by quantifiable analysis and encoded knowledge. Keywords: details, discrete element structures, assembly, funnel shell, digital fabrication.
Forecasting resource requirements for drug development long range planning
This thesis investigates the use of a task-based Monte Carlo simulation model to forecast headcount and manufacturing capacity requirements for a drug development organization. A pharmaceutical drug development group is responsible for designing the manufacturing process for new potential drug products, testing the product quality, and supplying product for clinical trials. The drug development process is complex and uncertain. The speed to market is critical to a company's success. Therefore, it is important to have an adequate number of employees and available manufacturing capacity to support timely and efficient drug development. The employees and manufacturing capacity can be supplied either internally or externally, through contract manufacturing organizations. This thesis formulates and empirically evaluates a simulation model designed using the Novartis Biologics drug development process that is adaptable to other pharmaceutical organizations. The model matches historical data to within 7% and estimates within 13% of the currently accepted manufacturing capacity forecasting tool. Additionally, three case studies are included to demonstrate how the model can be used to evaluate strategic decisions. The case studies include: a drug development process improvement evaluation, an outsourcing evaluation, and an "at risk" development evaluation.
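The Novartis model's task structure is not given in the abstract; the following is a minimal, hypothetical sketch of a task-based Monte Carlo forecast of labor demand, illustrating the general approach rather than the thesis's model. Task names, durations, labor rates, and attrition probabilities are all made up for illustration.

```python
# Minimal sketch of a task-based Monte Carlo capacity forecast (illustrative
# only; task names, durations, and rates are hypothetical, not Novartis data).
import random

TASKS = {  # task -> (mean duration in weeks, FTE-weeks of labor per run)
    "process_design": (8, 24),
    "quality_testing": (4, 8),
    "clinical_supply": (6, 30),
}

def simulate_program(p_attrition=0.3):
    """One candidate drug: runs tasks in sequence unless it attrites early."""
    weeks, fte_weeks = 0.0, 0.0
    for mean_dur, labor in TASKS.values():
        if random.random() < p_attrition:
            break  # program stops early (development "at risk")
        weeks += random.expovariate(1.0 / mean_dur)
        fte_weeks += labor
    return weeks, fte_weeks

def forecast(n_programs=20, n_trials=5000):
    totals = []
    for _ in range(n_trials):
        totals.append(sum(simulate_program()[1] for _ in range(n_programs)))
    totals.sort()
    return totals[len(totals) // 2], totals[int(0.9 * len(totals))]  # median, P90

median, p90 = forecast()
print(f"portfolio labor demand: median={median:.0f} FTE-weeks, P90={p90:.0f}")
```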
Untangling the attitudes of women of African origin toward their hair
This thesis concerns the question of the relationship that women of African origin have with their hair. Based on an analysis of the perceptions and attitudes of these women towards their hair, the thesis attempts to answer the question: what standards do they hold for their hair, and what factors contribute to them? To respond to these questions, I analyze two media sources created by African women for a female, African audience -- the postcolonial-era magazine AWA: la revue de la femme noire, and the modern-day YouTube channel of a young Franco-Senegalese woman, Aïcha Danso. The analysis raises questions about identity and its construction, and the meanings hair holds for black women. It leads to the hypothesis I propose: that natural, kinky hair is fundamentally racialized, and that the ways in which black women choose to style it -- although imbued with meanings that are multidimensional, profound, and personal -- are shaped by structural factors such as the ideals of feminine beauty.
CNTs : a study on assembly methods
There is an urgent need to manufacture CNTs with desired properties and dimensions. Meeting this need depends on understanding the growth and assembly methods of CNTs, which are not yet clear. In this study, therefore, we concentrate on synthesis and assembly, pointing out the parameters, issues, and limitations of the available techniques. While there have been many interesting and successful attempts to grow and assemble CNTs by various methods, this study focuses on the conventional growing techniques and proposes unconventional assembly methods.
Endogenous and chemical modifications of model proteins
Protein modifications are ubiquitous in nature, introducing biological complexity and functional diversity. Of the known post-translational modifications, glycosylation is one of the most common and most complex, yet some of the biological implications of this modification remain poorly understood. The development of chemical tools to mimic these modifications is helping to elucidate their biological roles and improve the range of biopharmaceuticals. To probe the biochemistry of endogenous glycosylation and to test the efficacy of novel synthetic modifications, tractable protein scaffolds are needed. Previously, members of the pancreatic-type ribonuclease (ptRNase) superfamily have been utilized as model protein scaffolds. They are a class of highly conserved, secretory endoribonucleases that mediate diverse biological functions through the cleavage of RNA.
Deficiencies in online privacy policies : factors and policy recommendations
Online service providers (OSPs) such as Google, Yahoo!, and Amazon provide customized features that do not behave as conventional experience goods. Absent familiar metaphors, unraveling the full scope and implications of attendant privacy hazards requires technical knowledge, creating information asymmetries for casual users. While a number of information asymmetries are proximately rooted in the substantive content of OSP privacy policies, the lack of countervailing standards and guidelines can be traced to systemic failures on the part of privacy-regulating institutions. In particular, the EU Data Protection Directive (EU-DPD) and the US Safe Harbor Agreement (US-SHA) are based on comprehensive norms, but do not provide pragmatic guidelines for addressing emerging privacy hazards in a timely manner. The dearth of substantive privacy standards for behavioral advertising and emerging location-based services highlights these gaps. To explore this problem, the privacy policies of ten large OSPs were evaluated in terms of strategies for complying with the EU-DPD and US-SHA and in terms of their role as tools for enabling informed decision-making. Analysis of these policies shows that OSPs do little more than comply with the black letter of the EU-DPD and US-SHA. Tacit data collection is an illustrative instance. OSP privacy policies satisfice by acknowledging the nominal mechanisms behind tacit data collection supporting services that "enhance and customize the user experience," but these metaphors do not sufficiently elaborate the privacy implications necessary for the user to make informed choices. In contrast, privacy advocates prefer "privacy and surveillance" metaphors that draw users' attention away from the immediate gratification of customized services. Although OSPs do bear some responsibility, neither the EU-DPD nor the US-SHA provides the guidance or incentives necessary to develop more substantive privacy standards. In light of these deficiencies, this work identifies an alternative, collaborative approach to the design of privacy standards. OSPs often obscure emerging privacy hazards in favor of promoting innovative services. Privacy advocates err on the other side, giving primacy to "surveillance" metaphors and obscuring the utility of information-based services. Rather than forcing users to unravel the conflicting metaphors, collaborative approaches focus on surfacing shared concerns. The collaborative approach presented here attempts to create a forum in which OSPs, advertisers, regulators, and civil society organizations contribute to a strategic menu of technical and policy options that highlight mutually beneficial paths to second-best solutions. Particular solutions are developed through a process of issue (re)framing focused on identifying common metaphors that highlight shared concerns, reduce overall information asymmetries, and surface the requirements for governance and privacy tools that address emerging risks. To illustrate this reframing process, common deficiencies identified in the set of privacy policies are presented along with strategic options and examples of potential reframings.
Study of ultranarrow superconducting NbN nanowires and nanowires under strong magnetic field for photon detection
Photon detection is an integral part of experimental physics, high-speed communication, as well as many other high-tech disciplines. In the realm of communication, unmanned spacecraft are travelling extreme distances, and ground stations need more and more sensitive and selective detectors to maintain a reasonable data rate. In the realm of computing, some of the most promising new forms of quantum computing require consistent and efficient optical detection of single entangled photons. Due to projects like these, demands are increasing for ever more efficient detectors with higher count rates. The Superconducting Nanowire Single-Photon Detector (SNSPD) is one of the most promising new technologies in this field, being capable of counting photons at rates faster than 100 MHz and with efficiencies around 50%. Currently, the leading competition is the Geiger-mode avalanche photodiode, which is capable of ~20-70% efficiency at a ~5 MHz count rate, depending on photon energy. In spite of this, the SNSPD is still a brand-new technology with many potential avenues unexplored. Therefore, it is still possible that we can achieve even better efficiencies and count rates to keep up with the requirements of burgeoning technologies. This photon detector consists of a meandering superconducting nanowire biased close to its critical current. In this regime, a single incident photon can cause a section of the detector to switch to normal conduction, producing a voltage pulse due to its now-finite resistance. An electron micrograph is given in figure 1. The intrinsic limitations of the detector (disregarding the optical coupling mechanism and the support electronics) are dominated by two primary points. First is the efficiency with which the detector converts an absorbed photon into a voltage pulse. This is controlled by the behavior of the excited electrons at the point of incidence. I will discuss this in greater detail in the next section. The second is the electrothermal time constant of the detector. This limits the relaxation time of the detector and therefore the maximum rate at which the detector can count photons. As we will see, detection efficiency increases as the number of Cooper pairs that need to be excited into the normal state to switch conduction modes decreases. One way to do this is to decrease the cross-section of the wire. This has already been shown to increase detection efficiency, but it cannot be extended to arbitrarily narrow wires. Not only is there a limitation to fabrication, but there are also interesting quantum effects that occur at very narrow wire widths. Note that much of the research that has been done to understand these quantum effects has been undertaken on wires much wider than those we will be using. Simultaneously, most of the materials used previously have coherence lengths much longer than NbN. Therefore, even though our wires are narrower by a substantial factor, they are still wider than the coherence length of NbN. As such, the validity of the one-dimensional approximation to be presented in Section 2.2 is debatable for our wires. However, it should be apparent that regardless of behavior, thermal and quantum phase slips will be one of the limiting factors in producing ultra-narrow nanowire photon detectors. Until now, photon detectors have only used current biasing techniques. However, it is well known that both magnetic field and current have the effect of reducing the energy required to excite superconducting charge carriers. Therefore, it may be possible to detect photons using a magnetic field close to Hc instead of a current close to Ic. It is important to note, however, that the readout of the detector in its current configuration depends on some bias current to produce a voltage pulse. Therefore, with the current detector architecture, one still needs a significant bias current. For my thesis, I have first investigated the theory of supercurrents in ultranarrow wires and confirmed the behavior predicted by this theory for our materials and fabrication techniques, in order to establish a lower bound on the wire width at which photon detection is still possible. In addition, I have constructed and executed an initial experiment to test how photon detectors behave under magnetic field bias conditions. I have measured how these different bias conditions affect the efficiency of the detector as well as the dark count rate.
An overview of the volcano-tectonic hazards of Portland, Oregon, and an assessment of emergency preparedness
Portland, Oregon, lies within an active tectonic margin, which puts the city at risk from earthquake and volcanic eruption hazards. The young Juan de Fuca microplate is subducting under North America, introducing not only arc magmatism into the overlying plate, but also interplate and intraplate seismicity related to the subduction zone. Large crustal earthquakes are also probable in Portland because of the oblique strike-slip Portland Hills Fault zone. These hazards create risk to Portland residents and infrastructure because of pre-existing vulnerabilities. Much of Portland's downtown area, including the government and business districts, is at risk of infrastructure damage from earthquake ground shaking, liquefaction, and landslides. Additionally, the city is within 110 km of three active Cascadia stratovolcanoes, two of which pose hazards from tephra and lahars. Though the city is under the umbrella of four emergency response plans (city, county, state, and federal), there are critical gaps in mitigation strategies, emergency exercises, and community education and outreach. Portland cannot prevent earthquakes or volcanic eruptions, but the city can reduce its vulnerability to these hazards.
Understanding and designing carbon-based TE materials with atomic-scale simulations
Thermoelectric (TE) materials, which can convert unused waste heat into useful electricity or vice versa, could play an important role in solving the current global energy challenge of providing sustainable and clean energy. Nevertheless, thermoelectrics have long been too inefficient to be widely utilized, owing to the relatively low energy conversion efficiency of present materials. One way to obtain improved efficiency is to optimize the so-called TE figure of merit, ZT = S²σT/κ, which is determined by the transport properties of the active layer material. To this end, higher-efficiency thermoelectrics will be enabled by a deep understanding of the key TE properties, such as thermal and charge transport and the impact of structural and chemical changes on these properties, in turn providing new design strategies for improved performance. To discover new classes of thermoelectric materials, computational materials design is applied to the field of thermoelectrics. This thesis presents a theoretical investigation of the influence of chemical modifications on thermal and charge transport in carbon-based materials (e.g., graphene and crystalline C60), with the goal of providing insight into design rules for efficient carbon-based thermoelectric materials. We carried out a detailed atomistic study of thermal and charge transport in carbon-based materials using several theoretical and computational approaches: equilibrium molecular dynamics (EMD), lattice dynamics (LD), density functional theory (DFT), and semi-classical Boltzmann theory. We first investigated thermal transport in graphene with atomic-scale classical simulations, which showed that the use of two-dimensional (2D) periodic patterns on graphene substantially reduces the room-temperature thermal conductivity compared to that of the pristine monolayer. This reduction is shown to be due to a combination of boundary effects induced by the sharp interface between sp² and sp³ carbon as well as clamping effects induced by the additional mass and steric packing of the functional groups. Using lattice dynamics calculations, we elucidate the correlation between this large reduction in thermal conductivity and the dynamical properties of the main heat-carrying phonon modes. We have also explored the impact of chemical functionalization on charge transport in graphene. Using quantum mechanical calculations, we predict that suitable chemical functionalization of graphene can enhance the room-temperature power factor by a factor of two compared to pristine graphene. Based on the understanding gained from both transport studies, we propose the possibility of highly efficient graphene-based thermoelectric materials, reaching a maximum ZT ~ 3 at room temperature. We show that it is possible to independently control charge transport and thermal transport in graphene, achieving reduced thermal conductivity and enhanced power factor simultaneously. In addition, we discuss the broader potential of the key thermoelectric properties in 2D materials, which could provide new design strategies for highly efficient TE materials. Transport properties of crystalline C60 are investigated, and the results demonstrate that these properties can be broadly modified by metal atom intercalation in crystalline C60. In contrast to the case of graphene, where chemical modifications induce structural changes in the graphene lattice (from sp² to sp³ carbon), intercalating metal atoms only modifies the van der Waals interactions between C60 molecules, yet still has a huge impact on both thermal and charge transport. Taking both transport studies together, we suggest that metal atom intercalation in crystalline C60 could be a highly appealing approach to improving both kinds of transport in solid C60, and that with appropriate optimization of the TE figure of merit, a ZT value as large as 1 at room temperature can be achieved. This dissertation consists of five chapters. Chapter 1 contains a brief review of thermoelectric materials. Chapter 2 introduces the theoretical approaches for computing both thermal transport (with molecular dynamics and lattice dynamics) and charge transport (with density functional theory and the semi-classical Boltzmann approach) in materials. In Chapter 3, our study of thermal transport in functionalized graphene is presented. Chapter 4 describes our results on charge carrier transport in functionalized graphene. Combining these two works, we predict the full ZT values of functionalized graphene. Chapter 5 describes how to optimize the ZT value in metal-atom-intercalated crystalline C60.
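For reference, the dimensionless figure of merit and the power factor used throughout, written out in standard notation (S: Seebeck coefficient, σ: electrical conductivity, κ: total thermal conductivity, T: absolute temperature):

```latex
ZT \;=\; \frac{S^{2}\sigma T}{\kappa}
   \;=\; \frac{S^{2}\sigma T}{\kappa_{\mathrm{lattice}} + \kappa_{\mathrm{electronic}}},
\qquad \text{power factor} \;=\; S^{2}\sigma .
```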
Analysis of biomimetics in the application of robotic locomotion with a focus on structures, materials and dynamics
Biomimetics is the study and analysis of natural systems to inform engineering design and technology development. Through interdisciplinary research and analysis of natural phenomena, engineers are able to gain valuable insight to drive efficient and robust innovation. A critical understanding of nature's design constraints is necessary to effectively create an optimized bio-inspired design. A literature review of bio-inspired design is conducted with a focus on structures, dynamics, and materials in the context of robotic locomotion. The biomimetic process in the reviewed literature is analyzed for procedure and accomplishment. A generalized method of biomimetics is presented, based on the studied work. It is concluded that successful biomimetics requires four key elements: (1) a clear understanding of the natural system, gained through depth of biological study, (2) the development of a simplified model that encompasses the core elements of the natural system, (3) the design of a synthetic system that meets the model's specifications, and (4) engineering optimization to improve the final design.
Audience aware computational discourse generation for instruction and persuasion
If we are to take artificial intelligence to the next level, we must further our understanding of human storytelling, arguably the most salient aspect of human intelligence. The idea that the study and understanding of human narrative capability can advance multiple fields, including artificial intelligence, isn't a new one. The following, however, is: I claim that the right way to study and understand storytelling is not through the traditional lens of human creativity, aesthetics or even as a plain planning problem, but through formulating storytelling as a question of goal driven social interaction. In particular, I claim that any theory of storytelling must account for the goals of the storyteller and the storyteller's audience. To take a step toward such an account, I offer a framework, which I call Audience Aware Narrative Generation, drawing inspiration in particular from narratology, cognitive science, and of course, computer science. I propose questions that we need to work on answering, and suggest some rudimentary starter thoughts to serve as guidelines for continued research. I picked a small subset of the proposed questions on which to focus my computational efforts: storytelling for teaching and persuasive storytelling. More specifically, I developed exploratory implementations for addressing this subset on the Genesis story understanding platform. The results have been encouraging: On the pedagogical side, my implementation models and simulates a teacher using the story of Macbeth to instruct a student about concepts such as murder, greed, and predecessor relationships in monarchies. On the persuasion side, my implementation models and simulates various different tellings of the classic fairy tale "Hansel and Gretel" so as to make The Witch appear likable in one, and unlikable in another; to make The Woodcutter appear to be a good parent just going through difficult times in one, and a bad parent in another. Perhaps the most amusing example however, especially in these days of sensationalized and highly subjective journalism, is that given a story of the cyber warfare between Russia and Estonia, my implementation can generate one telling of the story which makes Russia appear to be the aggressor, and yet another telling which makes Estonia appear to be the aggressor. And isn't that the story of history, politics, and journalism in one neat package! Overall, I have made four key contributions: I proposed Audience Aware Narrative Generation as a new framework for developing theories of storytelling; I identified important questions that must be answered by storytelling research and proposed initial plans of attack for them; I introduced storytelling functionality into the Genesis story understanding platform; and I implemented narrative discourse generators which produce a wide range of narratives, adapting accordingly to different audiences and goals.
Magnetic behavior of 360° domain walls in patterned magnetic thin films
360° transverse domain walls (360DWs), which form readily from transverse 180° domain walls (180DWs) of opposite sense, demonstrate qualitatively distinct behaviors from their constituent 180DWs and are therefore of interest both from a physics perspective and for their applications in future domain wall devices. This thesis presents experimental and modeling-based investigation of the properties and behaviors of 360DWs including formation, magnetostatic behaviors, and response to field, AC, and DC driving forces. The formation of 360DWs is first examined by simulation in a model nanowire. An injection system capable of producing 360DWs from a wire and an injection pad is presented and its behavior is analyzed both by simulation and experimentally through magnetic force microscopy and scanning electron microscopy with polarization analysis. Next, a model multilayer system is used to demonstrate the magnetostatic behavior of 360DWs, demonstrating a much reduced stray field compared to 180DWs and a strong interlayer pinning behavior that allows the 360DW to act as a programmable pinning site. The richness of this magnetostatic behavior is analyzed experimentally in a rhombic ring system which readily generates 360DWs during reversal. The action of 360DWs is shown to dominate the reversal process, reducing switching fields and showing multiple reversal pathways with a strong dependence on field history. Simulations are used to explore the response of the 360DW to field and DC and AC currents. This highlights 360DW behaviors quite distinct from those of 180DWs, including the inability to be positioned by an applied field and the ability to be destroyed in place. 360DWs are shown to have an intrinsic resonant behavior in the GHz range, the exact frequency of which is broadly tunable by an applied field. Resonance can be excited by an applied AC current, and in conjunction with DC can be used to pin and gate 360DW propagation at a geometric pinning site, using globally applied currents and without impact on nonpinned domain walls.
Detection of lower hybrid waves at the plasma edge of a diverted tokamak
In this thesis, two experimental investigations are presented in an attempt to understand the loss of lower hybrid current drive (LHCD) efficiency in reactor-relevant, high-density plasmas on the diverted Alcator C-Mod tokamak. While previous work has identified that edge interactions, such as collisional absorption and excessive up-shift of the parallel refractive index due to full-wave effects, could potentially explain the observed loss of LHCD efficiency over a wide range of line-averaged densities, these simulations still over-predict the fast electron population generated by LHCD above line-averaged densities of 1 x 10²⁰ m⁻³. It is critical to identify the remaining mechanisms in order to demonstrate advanced tokamak operation at reactor-relevant densities. The first investigation performs microwave backscattering experiments to detect electrostatic lower hybrid (LH) waves in the scrape-off layer (SOL), where a significant amount of the injected LH power may be deposited due to a number of edge loss mechanisms. An existing ordinary-mode (O-mode) reflectometer system has been modified to measure the backscattered O-mode wave resulting from Bragg backscattering of the incident O-mode wave off the LH wave. The detection of LH waves in a region that is not magnetically connected to the launcher implies a weak single-pass absorption of LH waves in high-density plasmas. The observed spectral width of the backscattered signals indicates the presence of non-linear effects on the propagation of LH waves, but no experimental evidence is found to confirm whether the non-linear mechanism responsible for the observed spectral broadening is also responsible for the observed loss of LHCD efficiency. The second investigation examines the change in LH frequency spectra due to density-dependent non-linear effects, such as parametric decay instability (PDI), above the line-averaged density of 1 x 10²⁰ m⁻³, using the probes installed on the launcher, outer divertor, and inner wall. Ion cyclotron PDI is found to be excited above line-averaged densities of 1 x 10²⁰ m⁻³, suggesting that ion cyclotron PDI may be a remaining mechanism for understanding the loss of LHCD efficiency. Ion cyclotron PDI is observed to be excited not only at the low-field-side (LFS) edge but also at the high-field-side (HFS) edge of Alcator C-Mod, further corroborating that LH waves are weakly absorbed on their first pass. Evidence of pump depletion is found with the onset of ion cyclotron PDI at the HFS edge in lower-single-null plasmas. However, no apparent pump depletion is seen when the magnetic geometry is switched to an upper-single-null configuration. Moreover, ion cyclotron PDI is excited at the LFS edge in this case. Thus, the role of the observed ion cyclotron PDI in the loss of LHCD efficiency needs further experimental investigation to be conclusive, given the different ion cyclotron PDI strengths and excitation locations depending on magnetic configuration. The new findings of this thesis are as follows: first measurements of PDI below the classical threshold (ω₀/ω_LH(0) ≈ 2); the first observation of PDI on the HFS of a tokamak and its relationship to being in a multi-pass damping regime; and the advancement of PDI as a candidate mechanism for the LHCD density limit.
Politics of "activating" public space in the State of Kuwait
My thesis examines the socio-spatial dialectics that unfold throughout the development of public spaces in Kuwait. In my thesis, public space is understood as a space of urban dialogue between the state, the city, and the people. This dialogue can be understood by examining the spatial dynamics between three complex agents: the State, Kuwaiti citizens, and public space. This thesis examines the historical development of two site-specific typologies in Kuwait: first, the political actions taken in squares and streets; and second, the design interventions in large and small park networks within the city of Kuwait. In this thesis, I investigate the political dissent movements in Al-Safat square since 1938 and Al-Erada square since 2006, and the ways in which the government responds to each. Additionally, I examine the emergence of the park networks in Kuwait since the 1960s and more recent design movements found within the Secret Garden and the MantaqaMe movement from 2013 until today, in comparison to the larger-scale Al-Shaheed Park. This thesis argues that each space was appropriated by socio-political citizen movements as a symbolic space for political dispute over democracy or power. With each new socio-political movement, the government responds with 'new' legislation and spatial maneuvers aimed at disrupting these claims. Finally, I propose a more nuanced reading of public space in Kuwait, highlighting a more complex spatial relationship between the Kuwaiti citizens and the State. This thesis posits that public space is not only a container for politics but also a space in which to reinstate spatial and political agency in service of a broad desire for change. Studying the two contested typologies, I seek to dismantle the neutral view of public space as simply scenic or functional in favor of a far more political history that is also a spatial history.
Biochemical and biophysical investigations of N-linked glycosylation pathways in archaea
Asparagine-linked glycosylation is an abundant and complex protein modification conserved among all three domains of life. Much is known about N-glycan assembly in eukaryotes and selected bacteria, in which the oligosaccharyltransferase (OTase) carries out the en bloc transfer of glycans from polyprenyl-PP-linked donors onto asparagine side chains of acceptor proteins. The first aim of this thesis is to elucidate the biochemical details of archaeal N-linked glycosylation, specifically through in vitro analysis of the polyprenyl-P-dependent pathway of the methanogenic archaeon Methanococcus voltae. The archaeal OTase, known as AglB, utilizes an α-linked dolichyl-P-trisaccharide substrate as the glycosyl donor for transfer to the acceptor protein. This dolichyl-P-glycan is generated by an initial retaining glycosyltransferase (AglK) and elaborated by additional glycosyltransferases (AglC and AglA) to afford Dol-P-GlcNAc-Glc-2,3-diNAcA-ManNAc(6Thr)A. Despite the homology to other bacterial or eukaryotic OTases that exploit polyprenyl-PP-linked substrates, the M. voltae AglB efficiently transfers disaccharide to model peptides from the Dol-P-GlcNAc-Glc-2,3-diNAcA monophosphate. While this archaeal pathway affords the same asparagine-linked β-glycosyl amide products generated in bacteria and eukaryotes, these studies provide the first biochemical evidence revealing that, despite the apparent similarities of the overall pathways, there are actually two general strategies to achieve N-linked glycoproteins across the domains of life. A second focus of this thesis involves biophysical studies to probe structural features and conformational dynamics of AglB. An intramolecular LRET experimental system was developed to report on substrate binding and the resulting structural transformations in AglB. There is a strong need for detailed studies on the mechanistic and functional significance of archaeal adaptations of N-linked glycosylation, especially exploring differences between AglB and other OTases that allow AglB to utilize these unique polyprenyl-P-linked substrates. Lastly, a cell-free expression system was established for the efficient synthesis of Alg5, a yeast dolichyl-phosphate glucosyltransferase that shares high sequence similarity to AglK, the first glycosyltransferase in the M. voltae pathway. Dol-P-Glc was generated and examined to unambiguously characterize the stereochemistry of the product of Alg5.
Real-time context-based sound and color extraction from text
Narratarium is a system that uses English text or voice input, provided either in real time or off-line, to generate context-specific colors and sound effects. It accomplishes this by employing a variety of machine learning approaches, including commonsense reasoning and natural language processing. It can be highly customized to prioritize different performance metrics, most importantly accuracy and latency, and can be used with any tagged sound corpus. The final product allows users to tell a story in an immersive environment that augments the storytelling experience with thematic colors and background sounds. In this thesis, we present the back-end logic that generates best guesses for contextual colors and sound using text input. We evaluate the performance of these algorithms under different configurations, and demonstrate that performance is acceptable for realistic user scenarios. We also discuss Narratarium's overall design.
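Narratarium's actual back end combines commonsense reasoning and natural language processing; the sketch below is only a toy illustration of the general text-to-color/sound idea via keyword counts. The keyword table, hex colors, and file names are hypothetical.

```python
# Toy illustration of context-to-color/sound mapping (not Narratarium's actual
# back end). The keyword table, colors, and sound file names are hypothetical.
import re

CONTEXT = {
    "forest": {"color": "#2e7d32", "sound": "birdsong.wav"},
    "ocean":  {"color": "#0277bd", "sound": "waves.wav"},
    "storm":  {"color": "#37474f", "sound": "thunder.wav"},
}

def best_guess(text):
    """Return the color/sound of the most frequently mentioned known context."""
    words = re.findall(r"[a-z]+", text.lower())
    scores = {k: words.count(k) for k in CONTEXT}
    top = max(scores, key=scores.get)
    return CONTEXT[top] if scores[top] > 0 else None

print(best_guess("The storm rolled in over the ocean, the ocean dark and loud"))
# -> {'color': '#0277bd', 'sound': 'waves.wav'}
```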
Angles-only navigation technique for maneuver-free spacecraft proximity operations
The technique of angles-only navigation consists of a single surveyor making line-of-sight observations of a target to deduce a relative navigation state from a sequence of angle measurements. Historically, angles-only navigation has been impeded by a range ambiguity problem in its many applications, especially those involving linear dynamical models. A classical solution to the problem is for the surveyor to perform precise maneuvers to change the nominal angle profile between the surveyor and the target. In the space environment, the orbital dynamics are inherently nonlinear and natural orbit perturbations have the effect of continuous micro-maneuvers. These advantageous conditions present an opportunity to overcome the ambiguity problem and enable spacecraft to navigate passively with a lightweight, low-power camera without the associated fuel cost of maneuver-assisted angles-only navigation. This technology has military and civilian utility for a wide range of missions involving rendezvous and proximity operations, most notably with non-cooperative resident space objects (RSOs). A novel procedure is developed that constrains the admissible region of the target's natural motion to a set of unit-less parameters. These parameters and an arbitrary scale factor combine to describe a single orbit hypothesis that translates into a set of classical orbital elements (COEs). A cluster of uniformly sampled hypotheses are propagated and rendered into angle vs. angle-rate curves. Although these curves exhibit very similar trends for all admissible hypotheses, the angles are slightly different at common angle-rate waypoints during certain parts of the orbit. The set of angle and range hypotheses at these waypoints forms a linear map to transform the observed angle to a range approximation. Photometry can complement this procedure with a secondary mapping from the timing of virtual eclipse events if a sufficient time differential is manifested across the admissible hypotheses. A nonlinear least squares (NLS) filter is designed to refine the accuracy of the initial orbit solution using a novel application of the Kolmogorov-Arnold-Moser (KAM) theorem to model the Earth's geopotential to any degree and order in the filter dynamics. The KAM torus conveniently captures the full nonlinear effects that make angles-only navigation possible in space and is computationally superior to numerically integrated reference trajectories for exact temporal synchronization with angle observations. Numerical results are presented that demonstrate the first angles-only navigation technique for natural motion circumnavigation trajectories without prior knowledge of the target's state. An analytical proof is developed to complement and verify the results.
Low altitude threat evasive trajectory generation for autonomous aerial vehicles
In recent years, high altitude unmanned aerial vehicles have been used with great success in combat operations, providing both reconnaissance and weapon launch platforms for time critical targets. Interest is now growing in extending autonomous vehicle operation to the low altitude regime. Because perfect threat knowledge can never be assumed in a dynamic environment, an algorithm capable of generating evasive trajectories in response to pop-up threats is required. Predetermination of contingency plans is precluded by the enormous number of possible scenarios; therefore, an on-line vehicle trajectory planner is desired in order to maximize vehicle survivability. This thesis presents a genetic algorithm based threat evasive response trajectory planner capable of explicitly leveraging terrain masking in minimizing threat exposure. The ability of genetic algorithms to easily incorporate line-of-sight effects, the inherent ability to trade off solution quality for reduced solution time, and the lack of off-line computation make them well suited for this application. The algorithm presented generates trajectories in three-dimensional space by commanding changes in velocity magnitude and orientation. A crossover process is introduced that links two parent trajectories while preserving their inertial qualities. Throughout the trajectory generation process vehicle maneuverability limits are imposed so that the resultant solutions remain dynamically feasible.
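As a rough illustration of the kind of planner described above, the sketch below evolves a sequence of heading commands with a genetic algorithm to trade off threat exposure against reaching a goal. It is a 2-D toy with assumed cost weights and a single point threat; the thesis's planner works in three dimensions, enforces vehicle maneuverability limits, exploits terrain masking via line-of-sight, and uses an inertia-preserving crossover.

```python
# Toy genetic algorithm for threat-evasive path planning in 2-D.
# Illustrative only: threat/goal locations, weights, and GA settings are made up.
import random, math

THREAT, GOAL, STEP, N_LEGS = (5.0, 5.0), (10.0, 0.0), 1.0, 12

def rollout(headings):
    x, y, pts = 0.0, 0.0, []
    for h in headings:
        x += STEP * math.cos(h); y += STEP * math.sin(h)
        pts.append((x, y))
    return pts

def cost(headings):
    pts = rollout(headings)
    exposure = sum(1.0 / (0.1 + math.dist(p, THREAT)) for p in pts)
    miss = math.dist(pts[-1], GOAL)
    return exposure + 2.0 * miss          # trade off exposure vs. reaching the goal

def crossover(a, b):
    cut = random.randrange(1, N_LEGS)     # single-point crossover of heading sequences
    return a[:cut] + b[cut:]

def mutate(h):
    return [x + random.gauss(0, 0.2) if random.random() < 0.2 else x for x in h]

pop = [[random.uniform(-math.pi / 3, math.pi / 3) for _ in range(N_LEGS)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=cost)
    parents = pop[:20]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(40)]
print("best cost:", round(cost(pop[0]), 3))
```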
Simulating self-assembly of nanoparticles in tumor environments
Self-assembly is important in nanomedicine and increasingly plays a role in drug-delivery and imaging applications in tumors. Predicting the behavior and dynamics of nanoparticle systems is very difficult, especially when assembling and disassembling particles are involved. To address this challenge, the Bhatia lab has developed NanoDoc (http://nanodoc.org), an online game that allows users around the world to design and simulate nanoparticle treatments. During this project, we implemented mechanisms to effectively describe and simulate self-assembly in NanoDoc. As a benchmark for our simulator, we show that we are able to reproduce laboratory experiments from the literature. The simulator was then made available to the crowd, and a challenge was proposed that requires users to perform self-assembly in a scenario aimed at improving the accumulation of imaging agents in tumors.
Thermal evolution of a compositionally stratified earth, including plates
For subduction to occur, plates must bend and slide past overriding plates along fault zones. This deformation is associated with significant energy dissipation, which changes the energy balance of mantle convection and influences the thermal history of the Earth. To parameterize these effects, a subduction zone was included in a small region of a finite element model for the mantle, which also features an asthenosphere and a mid-oceanic ridge. Velocity boundary conditions were imposed in the vicinity of the subduction zone. We present theoretical arguments for, and numerical illustrations of, the fact that for most modes of deformation the simple power-law relationship of parameterized convection, Nu ~ Ra^β, is no longer valid, although it is still a good first-order approximation. In the case of viscous bending dissipation and non-depth-dependent brittle simple shear, however, Nu ~ Ra^β does hold. β is less than the value of 1/3 predicted by standard boundary layer theory. For viscous energy dissipation, two different regimes of mantle convection can be considered, depending on the effective viscosity of the lithosphere: the "mobile lid" regime and the "stagnant lid" regime. For brittle dissipation, the lithosphere strength is a function of the yield stress which, when nearing a certain critical value, introduces a third regime, that of "episodic overturning". Within the "mobile lid" regime, the plate velocities for models with a subduction zone governed by brittle behavior are far less dependent on the plate stress than those of models with viscous deformation. This suggests that the plate motion is resisted by viscous stresses in the mantle. The "mobile lid" regime would be representative of mantle convection associated with plate tectonics, as we observe on Earth. A "stagnant lid" would be the case for the Moon or Mars, while Venus could experience the "episodic overturning" regime, featuring cyclic and catastrophic brittle mobilization of a lithosphere with a high friction coefficient.
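Since the discussion above centers on whether the parameterized-convection scaling Nu ~ Ra^β holds and what exponent applies, a quick way to extract β from a set of model runs is a log-log least-squares fit, sketched below with synthetic placeholder values rather than results from this work.

```python
# Estimating the exponent beta in Nu ~ Ra^beta from a set of model runs
# by least squares in log-log space. The (Ra, Nu) values are placeholders.
import numpy as np

Ra = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
Nu = np.array([10.0, 13.2, 17.8, 23.5, 31.0])   # hypothetical Nusselt numbers

beta, log_c = np.polyfit(np.log(Ra), np.log(Nu), 1)
print(f"fitted beta = {beta:.3f}, prefactor = {np.exp(log_c):.3f}")
```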
Assessing United States hurricane damage under different environmental conditions
Hurricane activity between 1979 and 2011 was studied to determine damage statistics under different environmental conditions. Hurricanes cause billions of dollars of damage every year in the United States, but damage locations and magnitudes vary from year to year. Seasonal hurricane forecasts predicting the strength of the upcoming hurricane season have the potential to be used by many industries and sectors to reduce and mitigate the effects of hurricanes. However, damage itself is not predicted by these forecasts. This work analyzed trends in hurricane damage due to atmospheric and oceanic conditions, and the results could be applied to and included in seasonal hurricane forecasts, thus increasing forecast applicability and value. This work used synthetic hurricane tracks generated from background climate conditions, a U.S. property portfolio, and a damage function based on wind speed to determine 1979-2011 hurricane damage. Damage was split into La Niña/El Niño and pre-/post- 1995 year sets to determine spatial and temporal trends in U.S. hurricane damage. This work concluded that different regions of the country experienced more or less hurricane damage under different environmental conditions. Knowledge of these trends can be applied to seasonal hurricane forecasts and can influence property owner, regulator, and insurer behavior across the nation.
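The abstract does not reproduce the wind-speed damage function itself, so the sketch below uses a commonly assumed threshold/power-law form with made-up parameters, simply to show how synthetic-track wind speeds and a property portfolio combine into a damage estimate; the thesis's actual damage function, portfolio, and track set differ.

```python
# Sketch of a wind-speed-based damage estimate over a small property
# portfolio. The damage-function parameters and portfolio values are assumptions.
import numpy as np

def damage_fraction(v, v_half=75.0, exponent=3):
    """Fraction of property value lost at peak wind speed v (m/s)."""
    vn = np.maximum(v - 25.0, 0.0) ** exponent      # assume no damage below 25 m/s
    return vn / (vn + v_half ** exponent)

portfolio_value = np.array([2.0e9, 5.0e8, 1.2e9])   # insured value per region ($)
peak_wind = np.array([55.0, 70.0, 40.0])            # m/s from a synthetic track
loss = np.sum(portfolio_value * damage_fraction(peak_wind))
print("estimated loss ($):", float(loss))
```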
Massachusetts Institute of Technology Alumni and their Course 2 and 2-A educational experience
Data were gathered and analyzed through a survey of Mechanical Engineering Course 2 and Course 2-A alumni to assess the impact of their choice of major on their current career path and to investigate the career paths of mechanical engineering majors. Data were gathered on the jobs they have taken, their confidence level compared to their peers, their preparation in and the importance of various abilities, their experiences, and their reflections. Over 350 graduates completed the survey and several differences were found. Course 2-A students had more transfers from other majors, engaged in a wider variety of career options, and found their elective classes more useful. Course 2 students reported greater importance of technical skills and a higher confidence level with respect to their peers in their profession. There was little difference in most abilities or in what was missing from their MIT experience. Overall, Course 2 and 2-A graduates reported being better prepared for technical subjects and less prepared for communication-related subjects than their jobs required. Moreover, all respondents mentioned missing the same courses in their curriculum that were needed for their jobs. Finally, Course 2 and 2-A respondents held widely divergent impressions of the other's program. The empirical data suggest that each major possesses qualities that satisfy its students' distinct needs. This leads to the conclusion that the Mechanical Engineering Department is on the right path by supporting the Course 2-A major and by recognizing and catering to two separate populations, one with an interest in depth and one with an interest in breadth.
Current profile measurements using Motional Stark Effect on Alcator C-Mod
A Motional Stark Effect (MSE) diagnostic system has been installed on the Alcator C-Mod tokamak to measure the plasma internal magnetic pitch angle profile. The diagnostic utilizes polarization patterns from Doppler-shifted Balmer-alpha decay emission from an energetic neutral beam injected into a magnetically confined plasma. This dissertation consists of three parts: (1) the current status of the C-Mod MSE diagnostic, which includes major upgrades in the hardware and calibration techniques; (2) the elimination of the spurious drift in the polarization measurements due to thermal-stress-induced birefringence; and (3) the measurement of current density profiles in Lower Hybrid Current Drive (LHCD) experiments. The major hardware upgrades include replacement of photomultiplier tubes (PMTs) with avalanche photodiodes (APDs), which enhanced the quantum efficiency; installation of a wire-grid polarizer to verify small Faraday rotation in the diagnostic; installation of steep edge filters to minimize pollution by the thermal Balmer-alpha signals; and rotation of the Diagnostic Neutral Beam (DNB), which significantly reduced the anomalous effect from secondary beam neutrals during the beam-into-gas calibrations. The new calibration techniques include two plasma calibrations, plasma current sweeping and plasma size sweeping, whose feasibility was experimentally proven, and an absolute intensity calibration, which measured the real optical throughput of the system. A large database study indicates that a signal-to-background ratio larger than 100 is required to keep the measurement uncertainty under 0.1 degrees.
Experimental and numerical characterization of ion-cyclotron heated protons on the Alcator C-Mod tokamak
Energetic minority protons with ~100 keV effective temperature are routinely created in Alcator C-Mod plasmas with the application of ICRF. A new multi-channel Compact Neutral Particle Analyzer (CNPA) is used to make measurements of these distributions in Alcator C-Mod's unique and reactor-relevant operating space via an active charge-exchange (CX) technique. Using a detailed model that accounts for beam, halo, and impurity CX, core proton temperatures of 430-120 keV are directly measured for the first time in lower density (n_e0 ≈ 0.8-1.5 x 10^20 m^-3) Alcator C-Mod plasmas using only 0.5 MW of ICRF power. The model found that the minority proton temperatures are peaked spatially away from r/a = 0, even for an on-axis resonance. Additionally, noticeable phase-space anisotropy is seen, as expected for ICRF heating. The measured effective temperatures scale approximately with the Stix parameter. The CNPA measurements are also compared with several leading simulation packages. Preliminary comparisons with results from the AORSA/CQL3D Full-wave/Fokker-Planck (FW/FP) code using a new synthetic diagnostic show good agreement and demonstrate that these complex codes are required to simulate Alcator C-Mod's energetic minority populations with accuracy. These FW/FP analyses represent the first comparison between predictions of such detailed codes and extensive minority ion experimental measurements.
Tradition, continuity and change in the physical environment : the Arab-Muslim city
Issues within the context of the present cannot be isolated from their spatial or temporal context. Neither the past (tradition) nor the future (modern technology) can provide solutions to the problems of the present. Their value lies in the fact that they represent "resources" which broaden our choices and inform us as to how similar issues were or could be dealt with in different times and places. However, a society's past and the way that society conceives of its past provides modes of continuity which give the present its authenticity. If we are to deal with the issues of the present and hope for an authentic future, the authority of the past or tradition cannot be blindly accepted, though its authenticity and relevance to the present must be recognized. The problem addressed here is that of a present physical environment in the Arab-Muslim city which is totally different from the traditional one. As a result of this difference, a sense of discontinuity and alienation has developed among the inhabitants of these cities. The purpose of this study is to understand how this process came about and how a sense of continuity with the past can be reestablished. To achieve this purpose four main issues are addressed here: (1) the origin and process of formation of the traditional physical environment; (2) the disparity between the traditional and the contemporary environment; (3) the origins of this disparity; and (4) the possible notions which might be suggested by way of reestablishing a sense of continuity between the past and the present. The legal system is used as a means of analysis in this study. This has helped us to see the physical environment within its socio-cultural context, by informing us about the ideological or structural level of the society and by pointing out accepted social norms and conventions and the mechanism of their social effectiveness. The law has helped us to point out the differences between the traditional and the contemporary process. In the traditional city, the process relied on rules of conduct or social conventions which proscribed certain actions on the part of the inhabitants. In the contemporary city, the rules are physical and prescriptive in nature. They prescribe in physical terms not only what is to be done but also how it is to be implemented. Implied within the traditional process is a reciprocal and possibilist relationship between form and use, while the contemporary process advocates a determinist approach to the relationship of form and use. Several factors are believed to have worked in favor of the shift from the traditional process to the contemporary one in the Arab-Muslim city. Important among these are: the existence of certain implied ideologies; changes in the scale of development, power and technology; and problems within the field of architecture and urbanism and their relationship to the Arab-Muslim context. Only by being aware of these processes and factors can we conceive of an appropriate approach to reestablish a sense of continuity with the past that stems from the needs of the present and aspirations for the future.
Palladium reagents for bioconjugation
physicochemical properties in comparison to their linear counterparts. Here we detail a method for a divergent macrocyclization of unprotected peptides by crosslinking two cysteine residues with bis-palladium organometallic reagents. These synthetic intermediates are prepared in a single step from commercially available aryl bis-halides. Two bioactive linear peptides with cysteine residues at i, i + 4 and i, i + 7 positions, respectively, were cyclised to introduce a diverse array of aryl and bi-aryl linkers. These two series of macrocyclic peptides displayed similar linker-dependent lipophilicity, phospholipid affinity, and unique volumes of distribution. Additionally, one of the bioactive peptides showed target binding affinity that was predominantly affected by the length of the linker. Collectively, this divergent strategy allowed rapid and convenient access to various aryl linkers, enabling the systematic evaluation of the effect of the appending unit on the medicinal properties of macrocyclic peptides. Chapter 2: We report the use of a sulfonated biarylphosphine ligand (sSPhos) to promote the chemoselective modification of cysteine-containing proteins and peptides with palladium reagents in aqueous medium. The use of sSPhos allowed for the isolation of several air-stable and water-soluble mono- and bis-palladium reagents, which were used in an improved protocol for the rapid S-arylation of cysteines under benign and physiologically relevant conditions. The cosolvent-free aqueous conditions were applied to the conjugation of a variety of biomolecules with affinity tags, heterocycles, fluorophores, and functional handles. Additionally, bis-palladium reagents were used to perform macrocyclization of peptides bearing two cysteine residues. Chapter 3: The synthesis of palladium oxidative addition complexes of unprotected peptides is described. Incorporation of 4-halophenylalanine into a peptide during solid phase peptide synthesis allows for subsequent oxidative addition at this position of the unprotected peptide upon treatment with a palladium precursor and suitable ligand. The resulting palladium-peptide complexes are solid, storable, water-soluble, and easily purified via high-performance liquid chromatography. These complexes react rapidly with thiols at low micromolar concentrations in an aqueous buffer, offering an efficient method for bioconjugation. Using this strategy, peptides can be rapidly functionalized with small molecules to prepare modified aryl thioether sidechains. Additionally, peptide-peptide and peptide-protein ligations are demonstrated under dilute aqueous conditions.
Municipal maintenance facility
a structure is placed in tompkins square park it is part of a network of civic spaces intended for civil contribution. to the passer-by or the passer-through this distraction, this construction, occupied by some others involves me somehow. at the end near the branch library is a clustering of facilities facilities related to political activity, organizational offices, places for discussions ones begun by two people yesterday or ones on-going for generations by countless many ones that started in this area (neighborhood! community?) or somewhere else pressing issues, burning desires, casual comments, silent observations the right to speak nonsense. voices whether i listen or not, are there with or without mine but with consequence for me my absence will be noted with my presence, i will be held accountable.
GaAs/AlGaAs far-infrared quantum cascade laser
In this thesis I investigated the feasibility of an optically pumped intersubband far-infrared (40-100 µm) laser using GaAs/AlxGa1-xAs heterostructures. The proposed design aims to use LO-phonon-mediated depopulation of the lower THz laser level to aid the intersubband laser population inversion. Interband recombination occurs by means of stimulated emission, thus combining an interband (~1550 meV) and intersubband (~16-18 meV) laser. As the subband properties of both the valence band and the conduction band are important for this work, a numerical program was developed for the valence band to supplement the available tools for the conduction band. The steady-state rate equations for the proposed quantum well structure were solved self-consistently for several different carrier temperatures. The calculations indicate that a pump beam of moderate power (0.5-1 W) concentrated on a device of typical dimensions (10^-4 cm^2) can generate an intersubband gain of 20 cm^-1 at 50 K for a THz emission linewidth of 2 meV. This gain level can suffice to obtain THz lasing action, provided that the cavity losses can be kept in check. The performance of the THz laser is predicted to be very dependent on electron temperature, mainly due to the opening of a parasitic LO-phonon channel between the THz laser levels. Interband lasing seems to be easier to obtain, as the calculated threshold pump intensity is lower than for the intersubband case.
Intrinsically secure communication in large-scale wireless networks
The ability to exchange secret information is critical to many commercial, governmental, and military networks. Information-theoretic security - widely accepted as the strictest notion of security - relies on channel coding techniques that exploit the inherent randomness of the propagation channels to significantly strengthen the security of digital communications systems. Motivated by recent developments in the field, this thesis aims at a characterization of the fundamental secrecy limits of large-scale wireless networks. We start by introducing an information-theoretic definition of the intrinsically secure communications graph (iS-graph), based on the notion of strong secrecy. The iS-graph is a random geometric graph which captures the connections that can be securely established over a large-scale network, in the presence of spatially scattered eavesdroppers. Using fundamental tools from stochastic geometry, we analyze how the spatial densities of legitimate and eavesdropper nodes influence various properties of the Poisson iS-graph, such as the distribution of node degrees, the node isolation probabilities, and the achievable secrecy rates. We study how the wireless propagation effects (e.g., fading and shadowing) and eavesdropper collusion affect the secrecy properties of the network. We also explore the potential of sectorized transmission and eavesdropper neutralization as two techniques for enhancing the secrecy of communications. We then shift our focus to the global properties of the iS-graph, which concern secure connectivity over multiple hops. We first characterize percolation of the Poisson iS-graph on the infinite plane. We show that each of the four components of the iS-graph (in, out, weak, and strong component) experiences a phase transition at some nontrivial critical density of legitimate nodes. Operationally, this is important because it implies that long-range communication over multiple hops is still feasible when a security constraint is present. We then consider full-connectivity on a finite region of the Poisson iS-graph. Specifically, we derive simple, explicit expressions that closely approximate the probability of a node being securely connected to all other nodes inside the region. We also show that the iS-graph is asymptotically fully out-connected with probability one, but full in-connectivity remains bounded away from one, no matter how large the density of legitimate nodes is made. Our results clarify how the spatial density of eavesdroppers can compromise the intrinsic security of wireless networks. We are hopeful that further efforts in combining stochastic geometry with information-theoretic principles will lead to a more comprehensive treatment of wireless security.
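For intuition about the iS-graph described above, the Monte Carlo sketch below simulates Poisson-distributed legitimate nodes and eavesdroppers in a square window and counts secure out-neighbors in the basic path-loss-only case, where a link is secure when the intended receiver is closer than the nearest eavesdropper. Densities and window size are arbitrary and edge effects are ignored; in this simplified setting the mean out-degree is expected to approach the ratio of legitimate to eavesdropper densities.

```python
# Monte Carlo sketch of the Poisson iS-graph in the path-loss-only case:
# node i can securely reach node j if j is closer to i than i's nearest
# eavesdropper. Densities and window size are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
area, lam_l, lam_e = 100.0, 1.0, 0.2            # window area, node densities

def poisson_points(lam):
    n = rng.poisson(lam * area)
    return rng.uniform(0, np.sqrt(area), size=(n, 2))

legit, eaves = poisson_points(lam_l), poisson_points(lam_e)
out_degrees = []
for xi in legit:
    d_e = np.min(np.linalg.norm(eaves - xi, axis=1)) if len(eaves) else np.inf
    d_l = np.linalg.norm(legit - xi, axis=1)
    out_degrees.append(int(np.sum((d_l < d_e) & (d_l > 0))))
print("mean secure out-degree:", np.mean(out_degrees))   # roughly lam_l / lam_e
```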
Human and modeling approaches for humanitarian transportation planning
Recent disasters have highlighted the need for more effective supply chain management during emergency response. Planning and prioritizing the use of trucks and helicopters to transport humanitarian aid to affected communities is a key logistics challenge. This dissertation explores ways to improve humanitarian transportation planning by building on the strengths of both humans and models. The changing, urgent, multi-objective context of humanitarian aid makes it challenging to formulate and deploy useful planning models. Humans are better able to understand the context, but struggle with the complexity of the problem. This research investigates the strengths and weaknesses of human transportation planners in comparison with models, with the goal of supporting both better human decision-making and better models for humanitarian transportation planning. Chapter 2 investigates how experienced humanitarian logisticians build transportation plans in a simulated emergency response. Based on an ethnographic study of ten logistics response teams, I show how humans come to understand the problem and its objectives through sensemaking, and solve it through a search-like series of decisions guided by goal-oriented decision rules. I find that the definition of objectives is an important strength of the sensemaking process, and that the human reliance on greedy search may be a weakness of human problem-solving. Chapter 3 defines a performance measure for humanitarian transportation plans by measuring the importance of the objectives identified in the ethnographic study. I use a conjoint analysis survey of expert humanitarian logisticians to quantify the importance of each objective and develop a utility function to value the performance of aid delivery plans. The results show that the amount of cargo delivered is the most important objective and cost the least important; experts prefer to prioritize vulnerable communities and critical commodities, but not to the exclusion of others. Chapter 4 investigates the performance of human decision-making approaches in comparison to optimization models. The human decision-making processes found in Chapter 2 are modeled as heuristic algorithms and compared to a mixed-integer linear program. Results show that optimization models create better transportation plans, but that human decision processes could be nearly as effective if implemented consistently with the right decision rules.
Reduced-order modeling and adaptive observer design for lithium-ion battery cells
This thesis discusses the design of a control-oriented approach to Lithium-Ion battery modeling, as well as the application of adaptive observers to this structure. It begins by describing the fundamental problem statement of a battery management system (BMS), and why this is challenging to solve. It continues by describing, in brief, several different modeling techniques and their use cases, then fully expounds two separate high-fidelity models. The first model, the ANCF, was initiated in previous work, and has been updated with novel features, such as dynamic diffusion coefficients. The second model, the ANCF II, was developed for this thesis and updates the previous model to better solve the problems facing the construction of an adaptive observer, while maintaining its model accuracy. The results of these models are presented as well. After establishing a model with the desired accuracy and complexity, foundational observers are designed to estimate the states and parameters of the time-varying ionic concentrations in the solid electrode and electrolyte, as well as an a priori estimate of the molar flux. For the solid electrode, it is shown that a regressor matrix can be constructed for the observer using both spatial and temporal filters, limiting the amount of additional computation required for this purpose. For the molar flux estimate, it is shown that fast convergence is possible with coefficients pertaining to measurable inputs and outputs, and filters thereof. Finally, for the electrolyte observer, a novel structure is established to restrict learning only along unknown degrees of freedom of the model system, using a Jacobian steepest descent approach. Following the results of these observers, an outline is sketched for the application of a machine learning algorithm to estimate the nonlinear effects of cell dynamics.
Linkages between Eurasian snow cover and Northern Hemisphere winter-time climate variability
Recently it has been shown that Eurasian snow cover in the prior autumn (ESCSON) and the leading mode of variability in the wintertime extratropical Northern Hemisphere (NH) atmospheric circulation are significantly correlated. In this study, a linkage between ESCSON and the following wintertime NH climate variability was investigated. Satellite data from NOAA are used for snow cover, and NCEP/NCAR Reanalysis data are used for climate variables. The high-latitude sea-level pressure data are quality-controlled using the IABP sea-level pressure dataset, which is derived from buoy observations. Interannual variability of, and the association between, ESCSON and winter climate variables were surveyed using linear statistical analysis techniques: Empirical Orthogonal Function (EOF) analysis and correlation/regression analysis. A gravity current driven by the north- and westward expansion of the cold, dense air over Siberia remains one of several possible mechanisms. An upper-air mechanism may act to connect ESCSON and the leading mode of DJF surface pressure variability. It is also suggested that the DJF sea-level pressure variations associated with ESCSON are largely confined to the Atlantic side and have only limited association with the linear trend and the Pacific-side variations. Future work may include reexamination of the results using longer observational records as they become available. The mechanism connecting ESCSON anomalies and the upper-level circulation anomaly should be investigated further; one possible approach is analysis of wave activity and energy propagation in the troposphere and stratosphere.
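Mechanically, the EOF analysis mentioned above is a principal-component decomposition of the space-time anomaly field; the sketch below shows that computation via the SVD on random placeholder data. The study itself uses NOAA snow-cover and NCEP/NCAR Reanalysis fields, and details such as area weighting are omitted here.

```python
# Minimal EOF (principal component) analysis of a space-time field via SVD.
# Data are random placeholders standing in for, e.g., DJF SLP anomalies.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_gridpoints = 30, 500
field = rng.standard_normal((n_years, n_gridpoints))

anom = field - field.mean(axis=0)                 # remove the climatological mean
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
explained = S**2 / np.sum(S**2)
leading_pattern = Vt[0]                           # EOF-1 spatial pattern
leading_pc = U[:, 0] * S[0]                       # principal-component time series
print("variance explained by EOF-1: %.1f%%" % (100 * explained[0]))
```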
Curse or blessing? : challenges of commodity-based economies
The idea that massive natural resource endowments would lead countries to weak economic growth and development is counterintuitive. Oil, gas, copper, gold or other resource riches should, at least in theory, spearhead countries with such natural wealth to growth that parallels non-commodity-based economies and help them achieve high-income status. This has not been the case for the majority of endowed countries, particularly in North Africa, the Middle East and Latin America. With few exceptions, such as Norway, Botswana, Chile or Australia, the resources proved to be a curse. I begin with a survey of previous academic literature and research on the effects of natural resources on a given country's economic, social and political development. I then move to exploring the many challenges and pitfalls faced by resource-based economies. Concepts such as the Dutch Disease, the Rentier State, governance and corruption are discussed. In the final section, I outline different methods of resource curse management, first exploring monetary and fiscal policies and later touching upon the issues of responsible governance. I conclude by proposing a multi-step framework for resource management.
Modeling and control of airport departure processes for emissions reduction
Taxiing aircraft contribute significantly to the fuel burn and emissions at airports. This thesis investigates the possibility of reducing fuel burn and emissions from surface operations through a reduction of the taxi times of departing aircraft. Data analysis of the departing traffic at four major US airports provides a comprehensive assessment of the impact of surface congestion on taxi times, fuel burn and emissions. For this analysis two metrics are introduced: one that compares the taxi times to the unimpeded ones and another that evaluates them in terms of their contribution to the airport's throughput. A novel approach is proposed that models the aircraft departure process as a queuing system. The departure taxi (taxi-out) time of an aircraft is represented as a sum of three components: the unimpeded taxi-out time, the time spent in the departure queue, and the congestion delay due to ramp and taxiway interactions. The dependence of the taxi-out time on these factors is analyzed and modeled. The performance of the model is validated through a comparison of its predictions with observed data at Boston's Logan International Airport (BOS). A reduction in taxi times may be achieved through the queue management strategy known as N-Control, which controls the pushback process so as to keep the number of departing aircraft on the surface of the airport below a specified threshold. The developed model is used to quantify the impact of N-Control on taxi times, delays, fuel burn and emissions at BOS. Finally, the benefits and implications of N-Control are compared to the ones theoretically achievable from a scheme that controls the takeoff queue of each departing aircraft.
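To make the three-component decomposition concrete, here is a toy predictor in the spirit of the model described above; the service rate and ramp-interaction penalty are invented placeholders, whereas the thesis estimates the actual dependence from observed surface data at BOS.

```python
# Sketch of the taxi-out decomposition: predicted taxi-out time =
# unimpeded time + departure-queue time + ramp/taxiway congestion delay.
# Coefficients are illustrative assumptions, not fitted values.
def predict_taxi_out(unimpeded_min, n_in_queue, n_on_ramp,
                     service_rate_per_min=0.75, ramp_penalty_min=0.3):
    queue_time = n_in_queue / service_rate_per_min   # aircraft ahead / runway service rate
    congestion = ramp_penalty_min * n_on_ramp        # ramp and taxiway interactions
    return unimpeded_min + queue_time + congestion

# Example: 12 min unimpeded, 8 aircraft queued for the runway, 5 on the ramp.
print(predict_taxi_out(12.0, 8, 5), "minutes")
```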
The form and use of public space in a changing urban context
Today appropriately designed architectural settings that adequately serve the function of supporting public life are rare. Sociologists and psychologists have consistently observed the alienating effects of modernity, and of modern attitudes to life, on community and society. It is believed that as a result of these attitudes of extreme individualism, public life in American cities has declined over the last few decades. The urban square, as the classic example of a public space, is studied here in the present context of an American city. While it is clear that the reasons for this decline in public life are much deeper than merely architectural, the underlying premise is that it is at least partly due to the inappropriateness of its physical and programmatic design that the square no longer plays an active role in the public realm. Public space is being designed without people in mind and hence has become merely an empty symbol of public life. The Government Center Plaza in Boston is used as the specific example for the study. A comparative analysis of the various plans proposed for it illustrates that though it is partially the prevailing theories of urban renewal in the 1960s and modernist city planning ideals that are responsible for the current unsatisfying square, it is, as evidenced by the plan proposed by Kevin Lynch and John Myer, among others, with the firm of Adams, Howard and Greeley, still entirely possible to design satisfying urban public spaces which attempt to bridge between the planning approaches of the past and those which meet the functional demands of our times. That this plan was not the one eventually built is itself indicative of the problems in the urban design attitudes of that period.
Integration of Web services in EPCs and RFID technology
This thesis proposes a framework for user interface (UI) design in the Auto-ID world. The thesis includes the examination of issues related to visualizing data for the user from a top-down perspective in the Auto-ID world. Using the main application of supply chain management, the role and cognitive capabilities of the users of the system are analyzed in order to distill the key considerations for a UI from the user's perspective. Data related to Auto-ID that are available in the supply chain are explored to provide a clearer picture of the required capabilities of the UI. Systems with different categories of UIs are also studied to provide a more comprehensive view of the options available. A model for a functional and useful UI for supply chain management in the Auto-ID world is proposed as a solution.
Computational models of natural gas markets for gas-fired generators
Climate change is a major factor reforming the world's energy landscape today, and as electricity consumes 40% of total energy, huge efforts are being undertaken to reduce the carbon footprint within the electricity sector. The electric sector has been taking steps to reform the grid, retiring carbon-intensive coal plants, increasing renewable penetration, and introducing cyber elements end-to-end for monitoring, estimating, and controlling devices, systems, and markets. Due to retirements of coal plants, the discovery of shale gas leading to low natural gas prices, and geopolitical motives to reduce dependence on foreign oil, natural gas is becoming a major fuel source for electricity around the United States. In addition, with increasingly intermittent renewable sources in the grid, there is a need for a readily available, clean, and flexible back-up fuel; natural gas is sought after in New England to serve this purpose as a reliable and guaranteed fuel in times when wind turbines and solar panels cannot produce. While research has been conducted advocating natural gas pipeline expansion projects to ensure this reliability, not enough attention has been paid to the overall market structure in the natural gas and electricity infrastructures, which can also impact reliable delivery of gas and therefore efficient interdependency between the two infrastructures. This thesis explores the market structures in natural gas and electricity; the interdependence of natural gas and electricity prices as reliance on natural gas grows with increasing penetration of renewable energy resources (RER) to complement their intermittency; possible volatility in these prices under varying RER penetration rates; and alternatives to existing market structures that improve reliability and reduce volatility in electricity and gas prices. In particular, the thesis attempts to answer two questions: What will the generation mix look like in 2030, and how will this impact gas and electricity prices? How do Gas-Fired Generator (GFG) bids for gas change between 2015 and 2030? In order to answer these questions, a computational model is developed using regression analysis tools and an auction model. Data from the New England region on prices, generation, and demand are used to determine these models.
Statistical approaches to leak detection for geological sequestration
Geological sequestration has been proposed as a way to remove CO₂ from the atmosphere by injecting it into deep saline aquifers. Detecting leaks to the atmosphere will be important for ensuring safety and effectiveness of storage. However, a standard set of tools for monitoring does not yet exist. The basic problem for leak detection - and eventually for the inverse problem of determining where and how big a leak is given measurements - is to detect shifts in the mean of atmospheric CO₂ data. Because the data are uncertain, statistical approaches are necessary. The traditional way to detect a shift would be to apply a hypothesis test, such as Z- or t-tests, directly to the data. These methods implicitly assume the data are Gaussian and independent. Analysis of atmospheric CO₂ data suggests these assumptions are often poor. The data are characterized by a high degree of variability, are non-Gaussian, and exhibit obvious systematic trends. Simple Z- or t-tests will lead to higher false positive rates than desired by the operator. Therefore Bayesian methods and methods for handling autocorrelation will be needed to control false positives. A model-based framework for shift detection is introduced that is capable of coping with non-Gaussian data and autocorrelation. Given baseline data, the framework estimates parameters and chooses the best model. When new data arrive, they are compared to forecasts of the baseline model and testing is performed to determine if a shift is present. The key questions are: how to estimate parameters, which model to use for detrending, and how to test for shifts. The framework is applied to atmospheric CO₂ data from three existing monitoring sites: Mauna Loa Observatory in Hawaii, Harvard Forest in central Massachusetts, and a site from the Salt Lake CO₂ Network in Utah. These sites have been chosen to represent a spectrum of possible monitoring scenarios. The data exhibit obvious trends, including interannual growth and seasonal cycles. Several physical models are proposed for capturing interannual and seasonal trends in atmospheric CO₂ data. The simplest model correlates increases in atmospheric CO₂ with global annual emissions of CO₂ from fossil fuel combustion. Solar radiation and leaf area index models are proposed as alternative ways to explain seasonality in the data. Quantitative normality tests reject normality of the CO₂ data, and the seasonal models proposed are nonlinear. A simple reaction kinetics example demonstrates that nonlinearity in the detrending model can lead to non-Gaussian posterior distributions. Therefore Bayesian estimation methods will be necessary. Here, nonlinear least squares is used to reduce computational effort. A Bayesian method of model selection called the deviance information criterion (DIC) is introduced as a way to avoid overfitting. DIC is used to choose between the proposed models, and it is determined that a model using a straight line to represent emissions-driven growth, the solar radiation model, and a 6-month harmonic term does the best job of explaining the data. Improving the model is shown to have two important consequences: reduced variability in the residuals and reduced autocorrelation.
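The following is a bare-bones sketch of the detrend-then-test workflow described above: fit a simple linear-plus-harmonic baseline, compute residuals, and test new data for a mean shift with a Z statistic whose variance is inflated for lag-1 autocorrelation through an effective sample size. The baseline regressors, the injected 1.5 ppm shift, and the autocorrelation correction are illustrative simplifications; the thesis's framework uses physically motivated models and Bayesian model selection via DIC.

```python
# Detrend-then-test sketch for shift detection in synthetic CO2-like data.
import numpy as np

def design(t):
    # linear growth + annual harmonic (t in years); a stand-in baseline model
    return np.column_stack([np.ones_like(t), t, np.sin(2*np.pi*t), np.cos(2*np.pi*t)])

def shift_zscore(resid_new, resid_base):
    r = np.corrcoef(resid_base[:-1], resid_base[1:])[0, 1]    # lag-1 autocorrelation
    n_eff = len(resid_new) * (1 - r) / (1 + r)                # effective sample size
    se = resid_base.std(ddof=1) / np.sqrt(max(n_eff, 1.0))
    return resid_new.mean() / se

rng = np.random.default_rng(2)
t = np.arange(0, 5, 1/365)
y = 390 + 2*t + 3*np.sin(2*np.pi*t) + rng.normal(0, 1, t.size)    # synthetic baseline (ppm)
beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)

t_new = np.arange(5, 5.2, 1/365)
y_new = design(t_new) @ beta + rng.normal(0, 1, t_new.size) + 1.5  # injected 1.5 ppm shift
z = shift_zscore(y_new - design(t_new) @ beta, y - design(t) @ beta)
print("Z statistic for a mean shift:", round(z, 2))
```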
Regulating new construction in historic areas
This study is an examination of how the restrictiveness of different design regulations impacts the process of new construction in historic areas. The North End, South End, and Back Bay neighborhoods of Boston were identified as historic areas that possessed increasingly restrictive design regulations, and within each neighborhood, two recent new buildings were selected as case studies. Each pair of cases represented a project that had undergone either an easy or difficult approval process under the district's design regulations. Using relevant statutes, interviews with regulators, reviewers, and architects, and the official documentation produced during the approval process, histories for each of the new buildings were compiled and compared. The results of this comparison suggest that, counter to the hypothesis, there is not a direct relationship between the restrictiveness of the regulation and certain variables such as historicism, inflexibility, and contextualism. In many ways, the new construction processes that occur in the North End and Back Bay, the least and most restrictive regulatory environments, respectively, resemble each other much more than they resemble the process that takes place in the South End, which is moderately restrictive.
Framework for partnership between public transit and new mobility services
The emergence and proliferation of "new" mobility has the potential to fundamentally disrupt urban mobility in the 21st century. This includes bikesharing, carsharing, on-demand vehicles that can be summoned from a smartphone through transportation network companies (TNCs), and microtransit. Competition provided by these services to public transit has often soured the relationship between public authorities and new mobility. However, in the absence of a blanket ban on these services, the public sector needs to find a way to coexist with newer mobility forms while still upholding the system-wide benefits and values of public transportation. One way to coexist is through publicly guided regulation, but a further step is to find mutually beneficial forms of partnership.
Conflict of placemaking in the disconnected urban fabric of Doha, Qatar
Doha, the capital city of Qatar, has become a metropolis of disconnected, inward-facing mega-projects with no regard for the remaining fabric of the city. This can be attributed to the relatively short urbanization period that the country has undergone, with its heavy reliance on international firms. The consequence is a city that has lost much of its historic core and vernacular architecture, and is defined by the large development projects that dot the capital. These mega-projects are treated as self-enclosed cities within the larger context of Doha. They are internally facing, turning their back to the city as a whole. The individual developments may be deemed successful; however, not connecting to and addressing the larger fabric of the city negatively impacts Doha's urban environment. While proper design can address the disruptive nature of towers and mega-projects in the city fabric, the issue needs to be acknowledged at a larger scale. Unless there are regulations in place that enforce desired urban design qualities, the city as a whole will fall victim to the whims of each individual designer, which is the case in West Bay, the Central Business District of Doha. This project aims to demonstrate the insufficient built environment within the West Bay site, and note how the lack of regulations has created forms that turn their back to the city, producing an uninviting urban fabric with no regard for the human dimension. The realities of the planning process in Qatar are examined, along with comparative cases and literature on urban design, in order to propose recommendations for an alternative to the urbanism that currently exists.
An urban weather generator coupling a building simulation program with an urban canopy model
The increase in air temperature observed in urban environments compared to the undeveloped rural surroundings, known as the Urban Heat Island (UHI) effect, is being intensely studied, due to its adverse environmental and economic impacts. Some of the causes of the UHI effect are related to the interactions between buildings and the urban environment. This thesis presents a methodology intended to integrate building energy and urban climate studies for the first time. It is based on the premise that at the same time buildings are affected by their urban environment, the urban climate is affected by the energy performance of buildings. To predict this reciprocal interaction, the developed methodology couples a detailed building simulation program, EnergyPlus, with a physically based urban canopy model, the Town Energy Balance (TEB). Both modeling tools are leading their respective fields of study. The Urban Weather Generator (UWG) methodology presented in this thesis is a transformation of meteorological information from a weather station located in an open area to a particular urban location. The UWG methodology fulfils two important needs. First, it is able to simulate the energy performance of buildings taking into account site-specific urban weather conditions. Second, it proposes a building parameterization for urban canopy models that takes advantage of the modelling experience of a state-of-the-art building simulation program. This thesis also presents the application of the UWG methodology to a new urban area, Masdar (Abu Dhabi). The UHI effect produced in this hot and arid climate by an urban canyon configuration and its impact on the energy performance of buildings are analyzed.
Experimental implementations of stereo matching algorithms in Halide
Currently, most stereo matching algorithms focus their efforts on increasing accuracy at the price of losing run-time performance. However, applications such as robotics require high-performance stereo algorithms to perform real-time tasks. The problem is due to the difficulty of hand-optimizing the complicated stereo matching pipelines. Halide is a programming language that has been widely used in writing high-performance image processing code. In this work, we explore the usability of Halide in the area of real-time stereo algorithms by implementing several stereo algorithms in Halide. Because of Halide's ability to reduce the computation cost of dense algorithms, we focus on local dense stereo matching algorithms, including the simple box matching algorithm and adaptive window stereo matching algorithms. Although we found limitations in Halide's ability to schedule dynamic programming and recursive filters, our results demonstrate that Halide programs can achieve performance comparable to hand-tuned programs with much simpler and more understandable code. Lastly, we also include a design solution to support dynamic programming in Halide.
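For reference, the box-matching (sum-of-absolute-differences) baseline discussed above can be written in a few lines of plain NumPy. This is only an algorithmic sketch for clarity, not the Halide implementation or its schedule; the window size, disparity range, and test scene are arbitrary.

```python
# Plain-NumPy sketch of simple box-matching (SAD) stereo with winner-take-all.
import numpy as np

def box_match(left, right, max_disp=16, radius=3):
    """Winner-take-all disparity map from box-aggregated SAD costs."""
    h, w = left.shape
    k = 2 * radius + 1
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])      # per-pixel matching cost
        padded = np.pad(diff, radius, mode="edge")
        agg = sum(padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
                  for dy in range(k) for dx in range(k))    # (2r+1)x(2r+1) box aggregation
        costs[d, :, d:] = agg
    return np.argmin(costs, axis=0)

rng = np.random.default_rng(3)
left = rng.random((64, 96))
right = np.roll(left, -4, axis=1)            # synthetic scene shifted by 4 pixels
disparity = box_match(left, right)
print(int(np.median(disparity[:, 8:-8])))    # ~4 away from the image borders
```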
Greed, hedging, and acceleration in convex optimization
This thesis revisits the well-studied and practically motivated problem of minimizing a strongly convex, smooth function with first-order information. The first main message of the thesis is that, surprisingly, algorithms which are individually suboptimal can be combined to achieve accelerated convergence rates. This phenomenon can be intuitively understood as "hedging" between safe strategies (e.g. slowly converging algorithms) and aggressive strategies (e.g. divergent algorithms) since bad cases for the former are good cases for the latter, and vice versa. Concretely, we implement the optimal hedging by simply running Gradient Descent (GD) with prudently chosen stepsizes. This result goes against the conventional wisdom that acceleration is impossible without momentum. The second main message is a universality result for quadratic optimization. We show that, roughly speaking, "most" Krylov-subspace algorithms are asymptotically optimal (in the worst-case) and "most" quadratic functions are asymptotically worst-case functions (for all algorithms). From an algorithmic perspective, this goes against the conventional wisdom that accelerated algorithms require extremely careful parameter tuning. From a lower-bound perspective, this goes against the conventional wisdom that there are relatively few "worst functions in the world" and they have lots of structure. It also goes against the conventional wisdom that a quadratic function is easier to optimize when the initialization error is more concentrated on certain eigenspaces - counterintuitively, we show that so long as this concentration is not "pathologically" extreme, this only leads to faster convergence in the beginning iterations and is irrelevant asymptotically. Part I of the thesis shows the algorithmic side of this universality by leveraging tools from potential theory and harmonic analysis. The main result is a characterization of non-adaptive randomized Krylov-subspace algorithms which asymptotically achieve the so-called "accelerated rate" in the worst case. As a special case, this recovers the known fact that GD accelerates when inverse stepsizes are i.i.d. from the Arcsine distribution. This distribution has a remarkable "equalizing" property: every quadratic function is equally easy to optimize. We interpret this as "optimal hedging" since there is no worst-case function. Leveraging the equalizing property also provides other new insights including asymptotic isotropy of the iterates around the optimum, and uniform convergence guarantees for extending our analysis to l2. Part II of the thesis shows the lower-bound side of this universality by connecting quadratic optimization to the universality of orthogonal polynomials. We also characterize, for every finite number of iterations n, all worst-case quadratic functions for n iterations of any Krylov-subspace algorithm. Previously no tight constructions were known. (Note the classical construction of [Nemirovskii and Yudin, 1983] is only tight asymptotically.) As a corollary, this result also proves that randomness does not help Krylov-subspace algorithms. Combining the results in Parts I and II uncovers a duality between optimal Krylov-subspace algorithms and worst-case quadratic functions. It also shows new close connections between quadratic optimization, orthogonal polynomials, Gaussian quadrature, Jacobi operators, and their spectral measures. Part III of the thesis extends the algorithmic techniques in Part I to convex optimization.
We first show that running the aforementioned random GD algorithm accelerates on separable convex functions. This is the first convergence rate that exactly matches the classical quadratic-optimization lower bound of [Nemirovskii and Yudin, 1983] on any class of convex functions richer than quadratics. This provides partial evidence suggesting that convex optimization might be no harder than quadratic optimization. However, these techniques (provably) do not extend to general convex functions. This is roughly because they do not require all observed data to be consistent with a single valid function - we call this "stitching." We turn to a semidefinite programming formulation of worst-case rate from [Taylor et al., 2017] that ensures stitching. Using this we compute the optimal GD stepsize schedules for 1, 2, and 3 iterations, and show that they partially accelerate on general convex functions. These optimal schedules for convex optimization are remarkably different from the optimal schedules for quadratic optimization. The rate improves as the number of iterations increases, but the algebraic systems become increasingly complicated to solve and the general case eludes us.
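As a numerical illustration of the stepsize-hedging phenomenon described in this abstract, and of the recovered fact that GD accelerates when inverse stepsizes are i.i.d. Arcsine, the sketch below runs plain gradient descent on a diagonal strongly convex quadratic with Arcsine-distributed inverse stepsizes on [mu, L] and compares it with the classical constant stepsize 2/(mu+L). The problem instance, seed, and iteration count are arbitrary; this is intuition-building code, not the thesis's analysis.

```python
# GD on f(x) = 0.5 * x^T diag(eigs) x with random Arcsine inverse stepsizes
# versus the classical constant stepsize. Typically the Arcsine run reaches
# a much smaller error after enough iterations.
import numpy as np

rng = np.random.default_rng(4)
n, mu, L = 200, 1.0, 100.0
eigs = np.linspace(mu, L, n)                    # spectrum of the quadratic; grad = eigs * x
x_arc = x_const = rng.standard_normal(n)

def arcsine_inverse_stepsize():
    """Draw an inverse stepsize from the Arcsine distribution on [mu, L]."""
    u = rng.uniform()
    return mu + (L - mu) * np.sin(np.pi * u / 2.0) ** 2

for _ in range(300):
    x_arc = x_arc - (1.0 / arcsine_inverse_stepsize()) * (eigs * x_arc)
    x_const = x_const - (2.0 / (mu + L)) * (eigs * x_const)   # classical constant stepsize

print("final error, Arcsine stepsizes :", np.linalg.norm(x_arc))
print("final error, constant stepsize :", np.linalg.norm(x_const))
```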
Identification of new phenotypical sub-clusters of Type 2 diabetes using machine learning
Advances in data science and technology promise to help clinicians diagnose and treat certain conditions. But there are other complex and poorly characterized illnesses for which the drivers and dependent variables are not understood well enough to take full advantage of the copious patient data that may exist. For these diseases new techniques need to be explored to gain a better understanding of the nature of the disease, its subtypes, cause, consequence, and presentation. Modern genetics has shown that these diseases often have multiple subtypes, as well as multiple phenotypes as indicated by new laboratory data. Examples of such diseases include common and important illnesses such as Type 2 diabetes (T2D), affecting approximately 30 million Americans; Crohn's disease, 1 million U.S. sufferers; epilepsy, 3.4 million Americans; and migraines, another 3.2 million in the United States.
Absorption enhancement and frequency selective metasurfaces
In this work, we develop frameworks to study and design the scattering properties of two kinds of systems. For the first problem, we find approximate angle/frequency-averaged limits on absorption enhancement due to multiple scattering from arrays of "metaparticles", applicable to general wave-scattering problems and motivated here by ocean-buoy energy extraction. We show that general limits, including the well-known Yablonovitch result in solar cells, arise from reciprocity conditions. The use of reciprocity in the radiative transfer equation (similar to a stochastic regime neglecting coherent effects) justifies the use of a diffusion model as an upper estimate for the enhancement. This allows us to write an analytical formula for the maximum angle/frequency-averaged enhancement. We use this result to propose and quantify approaches to increase performance through careful particle design and/or the use of external reflectors. For the second problem, we develop a design method for multi-grid frequency selective metasurfaces based on temporal coupled mode theory (CMT). In particular, we design an elliptic passband filter with a center frequency of 10 GHz, a bandwidth of 10%, and relatively good angle dependence.
Real-time trajectory optimization for excavators by power maximization
In this work an algorithm for controlling the motion of an autonomous excavator arm during excavation is presented. To deal with the challenge posed by modeling and planning trajectories through soil, a model-free method is proposed that aims to maximally harness the capabilities of the excavator by matching its internal characteristics to those of the environment. By maximizing the power output of specific actuators, the machine is able to strike a balance between disadvantageous operating conditions in which it either gets stuck in the soil or simply does not utilize its full potential to move soil toward task-oriented goals. The real-time optimization, which uses methods from extremum seeking control, was implemented in simulation and then on a small-scale simulation rig, which validated the method. It was shown that power maximization as a strategy of trajectory adaptation for excavation is both well-grounded and feasible.
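To make the idea of real-time power maximization concrete, below is a minimal discrete-time perturbation-based extremum seeking loop that climbs an assumed static power map. The map, gains, and filter are placeholders chosen only for illustration; in the actual system the quantity being maximized is the measured actuator power during digging.

```python
# Minimal perturbation-based extremum seeking: a sinusoidal dither probes an
# unknown power map J(u); the demodulated signal drives u toward the maximum.
import math

def power_map(u):                  # unknown to the controller; placeholder map
    return 4.0 - (u - 1.7) ** 2    # maximum power at u = 1.7

u_hat, k, a, omega, dt = 0.0, 0.8, 0.1, 5.0, 0.01
hp_prev_J, hp_state = 0.0, 0.0
for i in range(20000):
    t = i * dt
    u = u_hat + a * math.sin(omega * t)           # dithered command
    J = power_map(u)
    hp_state = 0.98 * hp_state + (J - hp_prev_J)  # crude high-pass filter of J
    hp_prev_J = J
    grad_est = hp_state * math.sin(omega * t)     # demodulation
    u_hat += k * grad_est * dt                    # gradient ascent on power
print("converged command:", round(u_hat, 2))      # approaches 1.7
```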
Value delivery through product-based service
Products and services are two ways firms deliver value to customers. In some situations firms augment physical products with services related to those products. In other situations the service offered to customers is the primary offering, enabled by a product. This paper investigates enterprise resource planning (ERP) software, tracking its evolution from predominantly a product with associated services to an offering as a service enabled by the software product. Frameworks have been developed to analyze service offerings. Two such frameworks capture causal relationships to customer value and customer satisfaction. This paper analyzes these frameworks and applies one of them to SAP R/3 ERP software as the offering evolved toward a more pure, product-based service offering. The paper then analyzes the sufficiency and appropriateness of one framework, the service profit chain, for the current offering of the SAP R/3 ERP application service provider (ASP) product, MySAP.com. Several additions are suggested to enhance the service profit chain model.
Characterization of a Drosophila model of Huntington's disease
Huntington's disease (HD) is an autosomal dominant neurological disorder caused by a polyglutamine (polyQ) repeat expansion in the huntingtin (Htt) protein. The disease is characterized by neurodegeneration and formation of neuronal intracellular inclusions primarily in the striatum and cortex, leading to personality changes, motor impairment, and dementia. To date, the molecular mechanisms that underlie the neurodegenerative process remain to be defined. Development of transgenic Drosophila HD models may facilitate dissection of molecular and cellular pathways that lead to disease pathology and suggest potential strategies for treatment. To explore mutant Htt-mediated mechanisms of neuronal dysfunction, we generated transgenic Drosophila that express the first 548 amino acids of the human Htt gene with either a pathogenic polyglutamine tract of 128 repeats (Htt-Q128) or a nonpathogenic tract of 0 repeats (Htt-Q0). Characterization of these transgenic lines indicates formation of cytoplasmic and neuritic Htt aggregates in our Drosophila HD model that sequester other non-nuclear polyQ-containing proteins and block axonal transport.
Co-simulation of algebraically coupled dynamic subsystems
In the manufacturing industry, outsourcing and integration are becoming important business patterns: component design and manufacturing are increasingly outsourced, while system integration is still done in house. An engineering system therefore often consists of many subsystems supplied by different companies, and object-oriented modeling is an effective tool for modeling such complex coupled systems. However, subsystem models have to be assembled and compiled before they can produce simulation results for the coupled system. Compiling models into simulations is time consuming and often requires a profound understanding of the models. Also, the subsystem makers cannot preserve their proprietary information in the compilation process. This research addresses this problem by extending object-oriented modeling to object-oriented simulation, called co-simulation. Co-simulation is an environment in which we can simultaneously run multiple independently compiled simulators to simulate a large coupled system. This research studies a major challenge of object-oriented simulation: incompatible boundary conditions between subsystem simulators caused by causal conflicts. The incompatible boundary condition is treated as an algebraic constraint. The high index of the algebraic constraint is reduced by defining a sliding manifold, which is enforced by a discrete-time sliding mode controller. The discrete-time approach fits well with numerical simulation since it can guarantee numerical stability.
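A minimal sketch of the coupling idea described above, assuming two toy first-order subsystems whose shared boundary variables must agree: the constraint violation defines a sliding variable, and a discrete-time reaching law supplies the coupling input at each co-simulation step. The subsystem dynamics, gains, and reaching law are illustrative, not the thesis formulation.

```python
import numpy as np

def cosim_sliding_mode(t_end=5.0, dt=1e-3, q=0.5):
    """Co-simulate two independently stepped subsystems whose boundary variables
    must agree (x1 == x2).  The algebraic constraint is enforced by a
    discrete-time sliding-mode coupling input lam, applied with opposite sign
    to each side."""
    x1, x2, lam = 1.0, 0.0, 0.0           # mismatched initial boundary values
    hist = []
    for k in range(int(t_end / dt)):
        u1, u2 = 1.0, 0.5                 # external inputs (illustrative)
        f1 = -x1 + u1                     # subsystem A drift (without coupling)
        f2 = -2.0 * x2 + u2               # subsystem B drift (without coupling)
        s = x1 - x2                       # sliding variable = constraint violation
        # Reaching law s_{k+1} = (1 - q) s_k, solved for the coupling input:
        lam = (-q * s / dt - (f1 - f2)) / 2.0
        # Each subsystem steps on its own, seeing only the coupling input lam:
        x1 += dt * (f1 + lam)
        x2 += dt * (f2 - lam)
        hist.append((k * dt, x1, x2, s))
    return np.array(hist)

out = cosim_sliding_mode()
print("final constraint violation:", out[-1, 3])   # driven toward zero
```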
Green's function analysis of bunched charged particle beams
In this thesis, we analyze the dynamics and equilibrium of bunched charged particle beams in the presence of perfectly conducting walls using a Green's function technique. Exact self-consistent electric and magnetic fields are obtained for charged particles in the vicinity of a conducting boundary with the use of Green's functions. We present three analytical models of bunched beams in a cylindrical conducting pipe which employ Green's functions: the Non-Relativistic Center-of-Mass (NRCM) model, the Relativistic Center-of-Mass (RCM) model, and the Relativistic Bunched Disk Beam (RBDB) model. The NRCM model assumes that the bunches are periodic and represented as point charges propagating non-relativistically in the presence of a constant magnetic focusing field. We derive a maximum limit on the effective self-field parameter ... necessary for confining the bunched beam, where ωp is the effective plasma frequency and ωc is the cyclotron frequency. The RCM model extends the analysis of the NRCM model to incorporate relativistic motion of the bunches in the presence of a periodic solenoidal focusing field. We derive a maximum limit on ... for confinement, where ... is the root-mean-square cyclotron frequency. We demonstrate how the self-field parameter limit can be used to predict a current limit in Periodic Permanent Magnet (PPM) klystrons. The 75 MW-XP PPM 11.4 GHz klystron designed by SLAC is found to be operating above this current limit, which may explain the observation of non-negligible beam loss in this experiment.
Coal fired power generation scheme with near-zero carbon dioxide emissions
Humans are releasing record amounts of carbon dioxide into the atmosphere through the combustion of fossil fuels in power generation plants. With mounting evidence that this carbon dioxide is a leading cause of global warming and with energy demand exploding, it is time to seek out realistic power production methods that do not pollute the environment with CO2 waste. The relative abundance and low cost of fossil fuels remains attractive and clean coal technologies are examined as a viable solution. This paper helps identify the many options currently available, including post-combustion capture, pre-combustion capture, and a number of oxy-fuel combustion schemes. One cycle design in particular, the Graz cycle, holds some promise as a future power generation cycle. A model of the Graz cycle developed in this paper predicts a cycle efficiency value of 56.72%, a value that does not account for efficiency losses in the liquefaction and sequestration of carbon dioxide, or the efficiency penalty associated with the gasification of coal. This high efficiency number, coupled with the low technological barriers of this cycle compared to similar schemes, is used as a justification for investigating this cycle further.
The globalization of clinical drug development
Industry-sponsored clinical research of investigational drugs (also called clinical development) has traditionally been carried out in relatively developed countries in the North American, Western European, and Pacific regions. However, it has lately been widely reported that newly initiated clinical trials are becoming increasingly diffused globally, with significant growth of activity in so-called emerging economies in Eastern Europe, Latin America, and Southeast Asia. This change in the location of clinical development activities has numerous implications for patients, health care providers, pharmaceutical companies, regulatory agencies, and governments around the globe. Even though there is much debate about the topic, a public, systematic, quantitative assessment of the current status of the globalization of clinical drug development is lacking. The objective of this thesis research is to provide such an objective quantification while addressing some issues that are currently under active discussion. This thesis documents that the participation of emerging countries is still relatively small (13%) and that they most commonly participate in very large (involving more than five countries) phase IIb or III trials.
Designing an experiment to study absorption vs. dose for feedback enabled radiation therapy
In the field of radiation oncology, while there are simulations and devices that allow users to be relatively confident that radiation to the tumor and sparing of healthy tissue are being maximized, the inability to reliably measure and control the dose during radiation treatment is a major source of uncertainty. This uncertainty is due to issues such as organ movement, a lack of precise and constant knowledge of beam current at the target site, and the inability to correctly register dose during hardware or software failures, all of which result in radiation treatments being measured after the procedure or in a fault-susceptible manner during the procedure. The integrating feedback f-center dosimeter (IF2D) is a dosimeter that would address these challenges and enable feedback during radiotherapy procedures, which would give doctors and patients confidence that the correct dose was delivered to the target sites without exceeding allowable doses to healthy tissue. An in-situ irradiator will be designed and later used to quantify the relationship between dose and f-center absorption. This design will help guide the future experiment and further the development of the IF2D.
Linking microbial metabolism and organic matter cycling through metabolite distributions in the ocean
Key players in the marine carbon cycle are the ocean-dwelling microbes that fix, remineralize, and transform organic matter. Many of the small organic molecules in the marine carbon pool have not been well characterized and their roles in microbial physiology, ecological interactions, and carbon cycling remain largely unknown. In this dissertation metabolomics techniques were developed and used to profile and quantify a suite of metabolites in the field and in laboratory experiments. Experiments were run to study the way a specific metabolite can influence microbial metabolite output and potentially processing of organic matter. Specifically, the metabolic response of the heterotrophic marine bacterium, Ruegeria pomeroyi, to the algal metabolite dimethylsulfoniopropionate (DMSP) was analyzed using targeted and untargeted metabolomics. The manner in which DMSP causes R. pomeroyi to modify its biochemical pathways suggests anticipation by R. pomeroyi of phytoplankton-derived nutrients and higher microbial density. Targeted metabolomics was used to characterize the latitudinal and vertical distributions of particulate and dissolved metabolites in samples gathered along a transect in the Western Atlantic Ocean. The assembled dataset indicates that, while many metabolite distributions co-vary with biomass abundance, other metabolites show distributions that suggest abiotic, species specific, or metabolic controls on their variability. On sinking particles in the South Atlantic portion of the transect, metabolites possibly derived from degradation of organic matter increase and phytoplankton-derived metabolites decrease. This work highlights the role DMSP plays in the metabolic response of a bacterium to the environment and reveals unexpected ways metabolite abundances vary between ocean regions and are transformed on sinking particles. Further metabolomics studies of the global distributions and interactions of marine biomolecules promise to provide new insights into microbial processes and metabolite cycling.
Design and modeling of a force sensitive toothbrush by using a buckling truss structure
Excessive force applied to the teeth with a toothbrush during brushing may cause tooth erosion and gum recession. There have been many attempts to mitigate this effect with a force-sensitive toothbrush that can alert a user when excessive force is applied. However, many prior-art solutions lack a tactile response to alert the user when excessive force is applied. Further, many prior-art solutions are bulky, have multiple components, and/or are not aesthetically pleasing or ergonomic. Some prior-art buckling structures have thin hinge sections that are difficult to injection mold and act as failure points, and the resulting broken structure can be dangerous. Prior-art buckling toothbrush structures also have the problem that, once buckled, the structure is so substantially weakened that continued application of force can cause it to fail plastically. The force-sensitive toothbrush presented here incorporates a bistable truss into the neck of the toothbrush. The mechanism can alert a user to excessive brushing force by changing shape in response to brushing forces exceeding a predetermined threshold, and it can automatically return to its original state when the brushing forces fall back below the predetermined level. The mechanism may include a force-sensitive region having an upper beam and a lower beam joined together to form a triangular truss, both grounded to the handle. This mechanism can advantageously be molded into an integral toothbrush body using an injection molding operation.
Investigating the ice nucleation activity of organic aerosol
Emissions of aerosol particles and their precursors affect climate directly by scattering radiation and indirectly by altering cloud properties. Aerosol-induced ice nucleation includes several processes that impact cloud formation, lifetime, albedo, and precipitation efficiency. Ice nucleating particles (INPs) promote ice formation at warmer temperatures and lower relative humidities than required to spontaneously freeze aqueous aerosol. This dissertation investigates the sources and ambient concentrations of organic INPs. We quantify the ice nucleation activity -- defined by the conditions required to initiate ice nucleation and the ice nucleation active site density -- of primary and secondary organic aerosol species. Organically enriched sea spray aerosols emitted by bubble-bursting mechanisms at the ocean surface are more effective INPs than inorganic sea salt aerosols. We demonstrate that polysaccharides and proteinaceous molecules likely determine the ice nucleation activity of sea spray aerosol. Our results illustrate that seawater biogeochemistry affects the organic content of sea spray aerosol and that enhanced primary productivity results in the emission of more effective INPs. We further investigate secondary organic aerosol (SOA) sources of INPs. Isoprene-derived SOA material is unlikely to significantly contribute to INP concentrations in the mid-latitude troposphere due to its poor ice nucleation activity. However, it may be an important source of INPs in convective outflow systems over forested environments. SOA material derived from hydrofluoroolefin refrigerant emissions is an effective INP, but our analyses predict it will not be abundant enough to impact cirrus cloud properties. These results demonstrate the diversity of organic INPs. A better understanding of the sources, characteristics, and concentrations of organic ice nucleating particles will reciprocally improve our understanding of aerosol-climate feedbacks.
An analytic model of the Cochlea and functional interpretations
The cochlea is part of the peripheral auditory system that has unique and intriguing features - for example it acts as a wave-based frequency analyzer and amplifies traveling waves. The human cochlea is particularly interesting due to its critical role in our ability to process speech. To better understand how the cochlea works, we develop a model of the mammalian cochlea. We develop the model using a mixed physical-phenomenological approach. Specifically, we utilize existing work on the physics of classical box-representations of the cochlea, as well as the behavior of recent data-derived wavenumber estimates. We provide closed-form expressions for macromechanical responses - the pressure difference across the Organ of Corti (OoC), and the OoC velocity, as well as the response characteristics - such as bandwidth and group delay. We also provide expressions for the wavenumber of the pressure traveling wave and the impedance of the OoC that underlie these macromechanical responses and are particularly important variables which provide us with information regarding how the cochlea works; they are a window to properties such as effective stiffness, positive and negative damping or amplifier profile, incremental wavelengths, gain and decay, phase and group velocities, and dispersivity. The expressions are in terms of three model constants, which can be reduced to two constants for most applications. Spatial variation is implicitly incorporated through an assumption of scaling symmetry, which relates space and frequency, and reduces the problem to a single independent dimension. We perform and discuss various tests of the model. We then exemplify a model application by determining the wavenumber and impedance from observable response characteristics. To do so, we determine closed-form expressions for the model constants in terms of the response characteristics. Then, using these expressions, along with values for human response characteristics that are available from psychoacoustic measurements or otoacoustic emissions, we determine the human wavenumber and impedance. In addition, we determine the difference in the wavenumber and impedance in the human base (where the OoC velocity responds maximally to high frequencies), and the human apex (where the OoC velocity responds maximally to low frequencies) and discuss their interpretations. The model is primarily valid near the peak region of the traveling wave, and is linear - therefore the model, as is, does not account for cochlear nonlinearity, and hence is primarily suitable for low stimulus levels. Finally, we discuss other scientific and engineering model applications which we can pursue, as well as potential modifications to the model, including suggestions regarding incorporating nonlinearity.
Mechanism-based constitutive modeling of L1₂ single-crystal plasticity
Ni₃Al, an L1₂-structure intermetallic crystal, is the basic composition of the γ' precipitates in nickel-based superalloys and is a major strengthening mechanism contributing to the superalloys' outstanding high-temperature mechanical properties. Many L1₂-structure crystals present unusual macroscopic mechanical properties, including the anomalous temperature-dependence of yield strength and strain hardening rate. To date, extensive research has been carried out to reveal the underlying mechanisms. However, none of the resulting models has satisfactorily quantified the macroscopic behavior based on microscopic phenomena. Mechanism-based constitutive modeling and simulation provide an effective method in this respect, assisting in the understanding and development of current existing models, and potentially providing a convenient path for engineering applications. In light of recent theoretical developments and experimental evidence, a single-crystal continuum plasticity model for the L1₂-structure compound Ni₃Al is developed.
Simulation-based approximate solution of large-scale linear least squares problems and applications
We consider linear least squares problems, or linear systems that can be formulated as least squares problems, of very large dimension, such as those arising, for example, in dynamic programming (DP) and inverse problems. We introduce an associated approximate problem, within a subspace spanned by a relatively small number of basis functions, and solution methods that use simulation, importance sampling, and low-dimensional calculations. The main components of this methodology are a regression/regularization approach that can deal with nearly singular problems, and an importance sampling design approach that exploits existing continuity structures in the underlying models and allows the solution of very large problems. We also investigate the use of our regression/regularization approach in temporal difference-type methods in the context of approximate DP. Finally, we demonstrate the application of our methodology in a series of practical large-scale examples arising from Fredholm integral equations of the first kind.
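A minimal sketch of this style of method under illustrative assumptions: a least squares problem is restricted to a low-dimensional subspace, rows are drawn by importance sampling and reweighted, and a regularization term guards against near-singularity. The problem sizes, subspace, and sampling distribution are placeholders, not the thesis's constructions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "large" problem: A (m x n), b, with the solution approximated
# in a subspace spanned by the columns of Phi (n x k), i.e. x ~= Phi @ r.
m, n, k = 20_000, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(n)
Phi = rng.standard_normal((n, k))
x_true = Phi @ rng.standard_normal(k)          # make the subspace adequate
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Importance sampling of rows: draw s << m rows with probabilities p_i and
# reweight by 1/sqrt(s * p_i) so the sampled normal equations are unbiased.
s = 2_000
row_norms = np.einsum("ij,ij->i", A, A)
p = row_norms / row_norms.sum()
idx = rng.choice(m, size=s, p=p)
w = 1.0 / np.sqrt(s * p[idx])
As = (A[idx] @ Phi) * w[:, None]               # sampled, projected, reweighted rows
bs = b[idx] * w

# Regularized low-dimensional regression (Tikhonov term handles near-singularity).
beta = 1e-3
G = As.T @ As + beta * np.eye(k)
r = np.linalg.solve(G, As.T @ bs)
x_hat = Phi @ r
print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```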
Optimization of DBD of high-level nuclear waste
This work advances the concept of deep borehole disposal (DBD), in which spent nuclear fuel (SNF) is isolated at depths of several kilometers in basement rock. Improvements to the engineered components of the DBD concept (e.g., plug, canister, and fill materials) are presented. Reference site parameters and models for radionuclide transport, dose, and cost are developed and coupled to optimize DBD design. A conservative analytical representation of thermal expansion flow gives vertical fluid velocities vs. time (and the results are compared against numerical models). When fluid breakthrough occurs rapidly, the chemical transport model is necessary to calculate radionuclide concentrations along the flow path to the surface. The model derived here incorporates conservative assumptions, including instantaneous dissolution of the SNF, high solubility, low sorption, no aquifer or isotopic dilution, and a host rock matrix that is saturated (at a steady-state profile) for each radionuclide. For radionuclides that do not decay rapidly, sorb, or reach solubility limitations (e.g., I-129), molecular diffusion in the host rock (transverse to the flow path) is the primary loss mechanism. The first design basis failure mode (DB1) assumes the primary flow path is a 1.2 m diameter region with 100x higher permeability than the surrounding rock, while DB2 assumes a 0.1 mm diameter fracture. For the limiting design basis (DB1), borehole repository design is constrained (via dose limits) by the areal loading of SNF (MTHM/km²), which increases linearly with disposal depth. In the final portion of the thesis, total costs (including drilling, site characterization, and emplacement) are minimized ($/kgHM) while borehole depth, disposal zone length, and borehole spacing are varied subject to the performance (maximum dose) constraint. Accounting for a large uncertainty in costs, the optimal design generally lies at the minimum specified disposal depth (assumed to be 1200 m), with a disposal zone length of 800-1500 m and borehole spacing of 250-360 meters. Optimized costs range from $45 to $191/kgHM, depending largely on the assumed emplacement method and drilling cost. The best-estimate (currently achievable) minimum cost is $134/kgHM, which corresponds to a disposal zone length of ~900 meters and a borehole spacing of 272 meters.
Power supply switching for a mm-wave asymmetric multilevel outphasing power amplifier system
This thesis demonstrates power switches to be used in our new Asymmetric Multilevel Outphasing (AMO) transmitter architecture at mm-wave frequencies. The AMO topology breaks the linearity-versus-efficiency trade-off in radio frequency power amplifiers (PAs), a trade-off that has until now appeared to be fundamental. These power switches allow for the modulation of the PA supply rail between four discrete levels at a maximum sampling rate of 2 GHz. This modulation results in a higher average system efficiency by reducing the outphasing angle between the phase paths. This work was designed in a 130-nm SiGe BiCMOS process.
Innovative lactic acid measurement solution for endurance athletes
Aspire is a company I co-founded along with John, PhD, in September 2014. Our mission is to be a breath of fresh air in the old-fashioned wearable sports technology market by leveraging the power of science and data to empower athletes with factual insights from their bodies and help them improve their performance. We have developed the first wearable lactic acid meter that works in real time and does not require a single drop of blood. This thesis outlines our business plan for this product by analyzing our market, competitive positioning, and product roadmap, as well as by defining our customer and his expectations and introducing our team. Finally, we discuss our growth strategy and expected financial performance in the long run.
The dynamics of the oil tanker industry
The tanker industry covers all business related to trading tankers and has many participants: vessel owners, charterers, shipbuilders, scrappers, consultants, capitalists, brokers, insurers, surveyors, agents, repair shops, manning companies, and vendors, among others. The industry exhibits the characteristics of commoditization driven by price. As the industry is significantly affected by the chartering market, I focus here on the chartering market and its movements in order to better understand the industry. The structure of the market creates recurring cycles and instability, and the key elements affecting this market are highly interrelated. Characteristically, long delays in these key elements make the market more uncertain and more volatile. The purpose of this thesis is to study the dynamics of the oil tanker industry, in particular the chartering market, using system dynamics methodology. A simulation model illuminates the following: the driving forces on the commoditized industry; the nature of the dynamics and structural behaviors; and the effects of key elements on freight rates.
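The flavor of such a system dynamics model can be sketched as a toy stock-and-flow simulation in which fleet capacity chases demand through a delayed ordering pipeline; the structure and parameters below are illustrative assumptions, not the thesis model.

```python
import numpy as np

def tanker_market(years=40, dt=0.25):
    """Toy stock-and-flow model of the chartering market: growing ton-mile demand,
    freight rates that spike as utilization tightens, ordering that responds to
    tight markets, a shipbuilding delay, and scrapping of old tonnage."""
    fleet, orderbook = 100.0, 0.0              # million dwt (illustrative)
    build_delay, life = 2.0, 25.0              # years
    t_axis, rates = [], []
    for step in range(int(years / dt)):
        t = step * dt
        demand = 100.0 * (1.0 + 0.03) ** t     # demand grows 3% per year
        utilization = demand / fleet
        freight_rate = 20.0 * utilization ** 4 # rates spike near full utilization
        orders = max(0.0, 0.5 * fleet * (utilization - 0.9))
        deliveries = orderbook / build_delay   # first-order shipbuilding delay
        scrapping = fleet / life
        orderbook += dt * (orders - deliveries)
        fleet += dt * (deliveries - scrapping)
        t_axis.append(t); rates.append(freight_rate)
    return np.array(t_axis), np.array(rates)

t, r = tanker_market()
print("freight-rate peak/trough ratio:", r.max() / r.min())
```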
Using context to resolve ambiguity in sketch understanding
This thesis presents methods for improving sketch understanding, without knowledge of a domain or the particular symbols being used in the sketch, by recognizing common sketch primitives. We address two issues that complicate recognition in its early stages. The first is imprecision and inconsistency within a single sketch or between sketches by the same person. This problem is addressed with a graphical model approach that incorporates limited knowledge of the surrounding area in the sketch to better decide the intended meaning of a small piece of the sketch. The second problem, that of variation among sketches by different authors, is addressed by forming groups from the authors in the training set. We apply these methods to the problem of finding corners, a common sketch primitive, and describe how this can translate into better recognition of entire sketches. We also describe the collection of a data set of sketches.
Residential building in Athens
The scope of this study is to examine the potential limitations and specific methods which can be used in applying climatic design principles in a densely populated urban environment. For illustrative purposes, a typical multi-story residential building in downtown Athens, Greece, will be used as a case study. Its goal is to use the rich and ever increasing vocabulary of climatic design in order to enhance the dialogue between the urban building and the physical environment as perceived through our senses. The first part analyzes the relationship between man, climate and architecture, and studies the basic principles of energy conscious design. The second part examines issues related to the urban and climatic environment of Athens in order to give an overview of the general context of the case study. This is followed by the description of a building that will constitute the basis for the proposed redesign. Finally, the third part discusses the application of climatic design principles on the proposed redesign, using techniques suitable to the specific climatic and environmental conditions, and provides a synthesis of the issues examined into a comprehensive design proposal. This part concludes with the author using the appraisal of specific improvements on the building to comment on the potential and limitations of this design approach to architecture and urban planning.
Designing the lean enterprise performance measurement system
The research contained in this thesis explores the design attributes of the enterprise performance measurement system required for the transformation to the lean enterprise and its management. Arguments are made from the literature that successful deployment of lean practices, across three different stages of the evolution of lean thinking, requires a supporting performance measurement system. The increase in scope of lean practices at each stage of the evolution increases the complexity of achieving synchronization across the enterprise subsystems. The research presents the attributes of the performance measurement system required at each stage and further derives three key attributes for the design of the lean enterprise performance measurement system. These three attributes are: enterprise-level stakeholder value measures; causal relationships across performance measures at each level; and a uniform and consistent set of performance measures. A detailed case study is presented of an aerospace and defense business of a multi-industry corporation which has embarked on a journey towards creating a lean enterprise.
Greedy layerwise training of convolutional neural networks
Layerwise training presents an alternative approach to end-to-end back-propagation for training deep convolutional neural networks. Although previous work was unsuccessful in demonstrating the viability of layerwise training, especially on large-scale datasets such as ImageNet, recent work has shown that layerwise training on specific architectures can yield highly competitive performances. On ImageNet, the layerwise trained networks can perform comparably to many state-of-the-art end-to-end trained networks. In this thesis, we compare the performance gap between the two training procedures across a wide range of network architectures and further analyze the possible limitations of layerwise training. Our results show that layerwise training quickly saturates after a certain critical layer, due to the overfitting of early layers within the networks. We discuss several approaches we took to address this issue and help layerwise training improve across multiple architectures. From a fundamental standpoint, this study emphasizes the need to open the blackbox that is modern deep neural networks and investigate the layerwise interactions between intermediate hidden layers within deep networks, all through the lens of layerwise training.
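A minimal PyTorch-style sketch of the greedy layerwise procedure: each convolutional block is trained with its own auxiliary classifier while all previously trained blocks stay frozen. The architecture, auxiliary head, and hyperparameters are placeholders rather than those used in the thesis; any (images, labels) DataLoader can be passed in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(), nn.MaxPool2d(2))

def train_layerwise(loader, num_classes=10, channels=(3, 32, 64, 128),
                    epochs_per_layer=1, device="cpu"):
    """Greedy layerwise training: optimize one block (plus an auxiliary linear
    head) at a time, keeping all previously trained blocks frozen."""
    blocks = []
    for i in range(len(channels) - 1):
        block = make_block(channels[i], channels[i + 1]).to(device)
        head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(channels[i + 1], num_classes)).to(device)
        opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()),
                              lr=0.1, momentum=0.9)
        for _ in range(epochs_per_layer):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                with torch.no_grad():          # frozen, already-trained blocks
                    for b in blocks:
                        x = b(x)
                loss = F.cross_entropy(head(block(x)), y)
                opt.zero_grad(); loss.backward(); opt.step()
        block.eval()
        blocks.append(block)                   # freeze and move to the next layer
    return nn.Sequential(*blocks)
```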
The spatial and temporal dynamics of commuting : examining the impacts of urban growth patterns, 1980-2000
The dissertation is broadly concerned with the issues of urban transportation and urban spatial structure change. The focus of the research is to interpret the increase in commuting time and distance in the last two decades. The major hypothesis is that a significant proportion of the increase in commuting length can be explained by land development patterns, particularly the spatial relationship between workplace and residence. The biggest challenge in addressing this problem is to design a method that characterizes job-housing proximity and correlates commuting with job-housing proximity consistently across space, over time, and among different regions. A thorough evaluation of existing measures, including ratios of jobs to employed residents, gravity-type accessibility, and minimum required commuting, shows that all have serious problems. The dissertation presents a new approach - the commuting spectrum - for measuring and interpreting the commuting impacts of metropolitan changes in terms of job-housing distribution. This method is then used to explain commuting in two sizable but contrasting regions, Boston and Atlanta. Journey-to-work data from the Census Transportation Planning Packages (CTPP) over three decades (1980, 1990, and 2000) are utilized.
Strategies for the development of the software industry in Colombia
Using Michael Porter's framework for the competitiveness of nations and Professor Michael Cusumano's theory on the orientation of software companies toward services, I analyzed Colombia's software industry to develop a diagnosis of current conditions and to generate strategies for the government and for the business sector using system dynamics diagrams. Colombia's significant human capital makes success in this type of industry likely, not only because the industry is highly dependent on human talent, but also because, given that the number of qualified people is not yet very large, the country should create aggressive strategies to increase the number of people qualified for the industry. In the short term, it should emphasize the information technology (IT) services sector, taking advantage of its strengths and looking for specific market niches. For the medium term, it should look for software products where Colombia has a competitive advantage.
Rendezvous approach guidance for uncooperative tumbling satellites
The development of a Rendezvous and Proximity Operations (RPO) guidance algorithm for approaching uncooperative tumbling satellites has multiple applications, including on-orbit satellite servicing, space debris removal, asteroid mining, and on-orbit assembly. This thesis develops a guidance algorithm within the framework of on-orbit satellite servicing, but it is extendable to other mission scenarios. The author tests the algorithm in an RPO simulation with an uncooperative tumbling satellite near Geostationary Orbit (GEO), starting at a relative distance of 50 m and ending at a relative distance of 5 m. Examples of potential uncooperative tumbling clients include decommissioned satellites or satellites with malfunctioning thrusters. Due to the low Technology Readiness Level (TRL) of autonomous RPO missions, first missions prefer to use flight-proven technologies. This thesis implements a guidance algorithm based on the flight-proven Clohessy-Wiltshire (CW) and space shuttle glideslope equations, which command a sequence of burns to close the distance between the servicer and client while matching the client satellite's rotation rate. The author validates the guidance algorithm through Monte Carlo (MC) analysis in a Three Degrees of Freedom (3DOF) simulation. Fuel use metrics characterize the sensitivity of the algorithm. Fuel consumption is measured by the total velocity change, or ΔV, needed to complete the maneuvers. Cumulative ΔV sensitivity is measured against navigational uncertainty in the rotational axis to summarize the key requirements and trade-offs associated with implementing this algorithm.
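The core of a CW-based targeting burn can be sketched as follows: given the relative state, a glideslope waypoint, and a transfer time, the Clohessy-Wiltshire state-transition matrix yields the velocity required to reach the waypoint, and the difference from the current velocity is the commanded burn. The reference orbit, waypoints, and transfer time below are illustrative, not the thesis scenario.

```python
import numpy as np

def cw_stm(n, t):
    """Clohessy-Wiltshire state-transition matrix blocks (x radial, y along-track,
    z cross-track) for a circular reference orbit with mean motion n."""
    c, s = np.cos(n * t), np.sin(n * t)
    Prr = np.array([[4 - 3 * c, 0, 0],
                    [6 * (s - n * t), 1, 0],
                    [0, 0, c]])
    Prv = np.array([[s / n, 2 * (1 - c) / n, 0],
                    [-2 * (1 - c) / n, (4 * s - 3 * n * t) / n, 0],
                    [0, 0, s / n]])
    return Prr, Prv

def targeting_burn(r0, v0, r_target, n, t):
    """First impulse of a two-impulse CW transfer: the velocity change that
    carries the chaser from r0 to r_target in time t."""
    Prr, Prv = cw_stm(n, t)
    v_req = np.linalg.solve(Prv, r_target - Prr @ r0)
    return v_req - v0                          # delta-v to apply now

mu = 3.986e14                                  # m^3/s^2
a_geo = 42164e3                                # m, near-GEO reference orbit
n = np.sqrt(mu / a_geo ** 3)                   # mean motion, rad/s
r0 = np.array([0.0, -50.0, 0.0])               # 50 m behind the client (illustrative)
r1 = np.array([0.0, -5.0, 0.0])                # next glideslope waypoint
dv = targeting_burn(r0, np.zeros(3), r1, n, t=600.0)
print("first burn delta-v [m/s]:", dv)
```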
Uncovering hidden pathways
Constructionist approaches in learning have emerged in national conversations in the past few years with the rising popularity of project-based learning and makerspaces in schools. What is missing from the conversation is a deeper understanding of who benefits. We celebrate the 4 P's of Creative Learning and the Maker Mindset, but the disproportionate rates of discipline in schools and data from the achievement gap suggest that Black and Brown youth may not benefit from these ideas. This thesis explores the development of an educator guide called "Uncovering Hidden Pathways" a term I use to describe the anti-racist approach to encouraging non-dominant youth to leverage the creativity and knowledge they already possess to feel more confident in participating in STEM activities while helping them make connections to professional opportunities. The guide builds off of the work of programs such as the Computer Clubhouse, Technology Access Foundation, and Digital Youth Network, which are rooted in the anti-racist idea that non-dominant youth have the right to technological fluency -- and give them access to the tools and opportunities needed to accomplish this. These programs demonstrate that it is possible to radically improve the trajectories of the lives of non-dominant youth by addressing the race, class, and social barriers in education that prevent access to participation in 21st century careers in STEM fields. This thesis includes a historical analysis of racist ideas in the United States, and how that history created inequalities such as the achievement gap and the digital divide, in an effort to justify centering the guide in anti-racist ideas. The guide is composed of four sections: designing equitable spaces, activities, mentorship, and making connections. This thesis discusses the significance of each section, and the research conducted to support the design decisions.
Investment opportunities in green technology real estate projects
The real estate sector accounts for more than a third of global greenhouse gas emissions and potentially provides a great opportunity for carbon reduction. Energy-efficient and green buildings have a huge potential to transform the property sector, and investors could benefit from that transformation through the greening of their real estate holdings and investing in green technology real estate developments. My work will further define this opportunity by investigating the real estate industry's relationship to sustainability and global greenhouse gas emissions through the perspective of energy markets, demographic changes, and different technologies in the energy efficiency sphere. Additionally, my thesis will provide a summary of research regarding willingness to pay for efficiency and sustainability measures in both the residential and commercial parts of the market. Finally, I will analyze the main factors affecting demand and the forces shaping the investment opportunity.
The influence of higher education on the national innovation system in Portugal
Many economists agree that countries wishing to develop their national economies should focus on increasing their innovation output. In recent years, the Portuguese government has pursued this goal, taking strides to improve the country's national system of innovation. This effort has included policy measures to increase the educational attainment of the Portuguese population and to improve the amount of collaboration between academia and industry in Portugal. Prior studies of locational effects have concluded that universities have a positive effect on the innovation output of the regions in which they are located. However, there is little understanding of how this locational effect varies with alternative types of higher education institutions, such as polytechnics and community colleges. This thesis evaluates the co-locational effects of educational institutions and industry clusters on innovation output, and makes recommendations for how these results may be put to use, given the historical context of the Portuguese higher education system. The analysis is a comparative study of the geographic sub-regions within Portugal and the U.S. states of Georgia and Pennsylvania. The data used in the analysis includes industry data (enterprises, employment, and wages), educational data (number of graduates by field and type of institution), and innovation survey data. The result of the co-location analysis shows that in Portugal, the technology-focused courses at universities and polytechnics are not concentrated in the same region as technological industry.
Compensating for model uncertainty in the control of cooperative field robots
Current control and planning algorithms are largely unsuitable for mobile robots in unstructured field environment due to uncertainties in the environment, task, robot models and sensors. A key problem is that it is often difficult to directly measure key information required for the control of interacting cooperative mobile robots. The objective of this research is to develop algorithms that can compensate for these uncertainties and limitations. The proposed approach is to develop physics-based information gathering models that fuse available sensor data with predictive models that can be used in lieu of missing sensory information. First, the dynamic parameters of the physical models of mobile field robots may not be well known. A new information-based performance metric for on-line dynamic parameter identification of a multi-body system is presented. The metric is used in an algorithm to optimally regulate the external excitation required by the dynamic system identification process. Next, an algorithm based on iterative sensor planning and sensor redundancy is presented to enable field robots to efficiently build 3D models of their environment. The algorithm uses the measured scene information to find new camera poses based on information content. Next, an algorithm is presented to enable field robots to efficiently position their cameras with respect to the task/target. The algorithm uses the environment model, the task/target model, the measured scene information and camera models to find optimum camera poses for vision guided tasks. Finally, the above algorithms are combined to compensate for uncertainties in the environment, task, robot models and sensors. This is applied to a cooperative robot assembly task in an unstructured environment.
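One common way to quantify the information content of an excitation trajectory for parameter identification is the log-determinant of the stacked regressor's Gram matrix; the sketch below applies this to a hypothetical single-link model rather than the thesis's multi-body formulation, purely to illustrate the idea of scoring candidate excitations.

```python
import numpy as np

def regressor(q, qd, qdd):
    """Regressor for a single-link model  tau = I*qdd + b*qd + m*g*l*sin(q)
    (a stand-in for the multi-body case)."""
    return np.column_stack([qdd, qd, np.sin(q)])

def information_metric(q, qd, qdd):
    """Log-determinant of the Gram matrix of the stacked regressor: larger means
    the trajectory excites the parameters more evenly (better identifiability)."""
    Y = regressor(q, qd, qdd)
    return np.linalg.slogdet(Y.T @ Y)[1]

t = np.linspace(0, 10, 1000)
# Rich excitation (two frequencies) vs. poor excitation (one slow sine):
for w in ([1.0, 3.7], [0.2]):
    q = sum(np.sin(wi * t) for wi in w)
    qd = sum(wi * np.cos(wi * t) for wi in w)
    qdd = sum(-wi ** 2 * np.sin(wi * t) for wi in w)
    print(w, "log det =", round(information_metric(q, qd, qdd), 2))
```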
The value of knowledge networks : conceptual framework in application to sustainable production
The thesis is motivated by two major trends: the rise of a global information and knowledge economy, and environmental degradation and the search for sustainable solutions. The increasing importance of knowledge has by some been equated with a new industrial revolution, one based on computer technology, digital infrastructure, and highly educated and technically skilled workers. But how do we assess the value of knowledge in this 'new' economy? The question over value is explored through the diffusion and localization of new knowledge via a knowledge network, based on information technology. The central argument is that in the knowledge economy, the value of knowledge lies in the ability to share it over a knowledge network, which allows for diffusion and localization of new knowledge. This central thesis and the value of knowledge networks is further explored by looking at the case of environmentally friendly or sustainable production. The knowledge network targets barriers to environmentally friendly practices by encouraging and enabling diffusion of knowledge related to sustainable products and processes. The knowledge scope for environmental solutions is analyzed, with the objective to develop common categories, and to understand better the increasing complexities and knowledge needs as enterprises engage in sustainable production. In discussing the knowledge economy and knowledge networks, the thesis focuses mostly on the business enterprise. But the development of the knowledge age has much larger implications, such as 'knowledge for whom?' and 'value for whom?'. The information technologies and networks offer new ways for people and groups to interact and influence social issues and can enable the diffusion of wide variety of views and perspectives. Thinking about the information and knowledge age in the larger economic and social context requires us to consider who builds, controls, influences and benefits from the technology and its use. Before we can reasonably approach this analysis, a basic conceptual framework or understanding of knowledge sharing, knowledge networks, and value of knowledge is called for. This thesis is a building block for such a framework, a contribution to future research into the economic and social implications of the knowledge economy.
Siting solar energy facilities in New York state : sources of and responses to controversy
Human reliance on fossil fuels has led to a wide range of adverse environmental and health effects. As our understanding of these impacts has grown, so has the search for other, more sustainable sources of energy. One such source is solar power, and the federal and state governments in the United States have created various policies and financial incentives to encourage adoption of solar energy technologies. While solar energy offers tremendous potential benefits, siting utility-scale ground-mounted photovoltaic arrays can give rise to strong public reaction. With this in mind, this thesis explores the controversy, or lack thereof, surrounding the siting of utility-scale solar energy facilities in New York by examining two case studies - the Skidmore College Denton Road solar array and the Cornell University Snyder Road solar array. While these two solar energy facilities share many commonalities, there is one key difference - the Skidmore College array created a much greater level of controversy than the Cornell University array. Analysis of this divergence indicates that choice of site is a crucial determinant of the extent of controversy. While local impacts are an important concern, this thesis demonstrates that the reasons for controversy go well beyond those impacts. Issues related to information, equity, and trust were other key sources of controversy. In addition to analyzing the sources of controversy, this thesis also offers some recommendations that may be helpful for entities involved in the development of solar power facilities. It is hoped that these recommendations will help to eliminate or mitigate future solar power siting controversies.
Critical evaluation of anomalous thermal conductivity and convective heat transfer enhancement in nanofluids
While robust progress has been made towards the practical use of nanofluids, uncertainties remain concerning the fundamental effects of nanoparticles on key thermo-physical properties. Nanofluids have higher thermal conductivity and single-phase heat transfer coefficients than their base fluids. The possibility of very large thermal conductivity enhancement in nanofluids and the associated physical mechanisms are a hotly debated topic, in part because the thermal conductivity database is sparse and inconsistent. This thesis reports on the International Nanofluid Property Benchmark Exercise (INPBE), in which the thermal conductivity of identical samples of colloidally stable dispersions of nanoparticles, or 'nanofluids', was measured by over 30 organizations worldwide, using a variety of experimental approaches, including the transient hot wire method, steady-state methods, and optical methods. The nanofluids tested comprised aqueous and non-aqueous basefluids, metal and metal oxide particles, and near-spherical and elongated particles, at low and high particle concentrations. The data analysis reveals that the data from most organizations lie within a relatively narrow band (± 10% or less) about the sample average, with only a few outliers. The thermal conductivity of the nanofluids was found to increase with particle concentration and aspect ratio, as expected from classical theory. The effective medium theory developed for dispersed particles by Maxwell in 1881, and recently generalized by Nan et al., was found to be in good agreement with the experimental data. The nanofluid literature contains many claims of anomalous convective heat transfer enhancement in both turbulent and laminar flow. To put such claims to the test, we performed a critical, detailed analysis of the database reported in 12 nanofluid papers (8 on laminar flow and 4 on turbulent flow). The methodology accounted for both modeling and experimental uncertainties in the following way. The heat transfer coefficient for any given data set was calculated according to the established correlations (Dittus-Boelter for turbulent flow and Shah for laminar flow). The uncertainty in the correlation input parameters (i.e., nanofluid thermo-physical properties and flow rate) was propagated to get the uncertainty on the predicted heat transfer coefficient. The predicted and measured heat transfer coefficient values were then compared to each other. If they differed by more than their respective uncertainties, we called the deviation anomalous. According to this methodology, it was found that in nanofluid laminar flow there does appear to be anomalous heat transfer enhancement in the entrance region, while the data are in agreement (within uncertainties) with Shah's correlation in the fully developed region. On the other hand, the turbulent flow data could be reconciled (within uncertainties) with the Dittus-Boelter correlation once the temperature dependence of viscosity was included in the prediction of the Reynolds number. While this finding is plausible, it could not be directly confirmed, because most papers do not report information about the temperature dependence of the viscosity for their nanofluids.
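The comparison methodology amounts to predicting the heat transfer coefficient from the correlation, propagating the input uncertainties, and flagging a deviation as anomalous only if it exceeds the combined uncertainty. A sketch is given below using Monte Carlo propagation for simplicity; the property values, uncertainties, and measurement are placeholders, not data from the papers analyzed.

```python
import numpy as np

rng = np.random.default_rng(1)

def dittus_boelter_h(rho, mu, k, cp, u, D):
    """Turbulent heat transfer coefficient: Nu = 0.023 Re^0.8 Pr^0.4, h = Nu k / D."""
    Re = rho * u * D / mu
    Pr = cp * mu / k
    return 0.023 * Re ** 0.8 * Pr ** 0.4 * k / D

# Placeholder nanofluid properties: (mean, relative 1-sigma uncertainty).
props = {"rho": (1050.0, 0.01), "mu": (1.2e-3, 0.05), "k": (0.68, 0.03),
         "cp": (4000.0, 0.02), "u": (2.0, 0.02)}
D = 0.01                                             # tube diameter, m

samples = {name: rng.normal(m, m * s, 10_000) for name, (m, s) in props.items()}
h = dittus_boelter_h(samples["rho"], samples["mu"], samples["k"],
                     samples["cp"], samples["u"], D)
h_pred, h_unc = h.mean(), h.std()

h_meas, h_meas_unc = 11500.0, 600.0                  # hypothetical measurement
anomalous = abs(h_meas - h_pred) > (h_unc + h_meas_unc)
print(f"predicted h = {h_pred:.0f} +/- {h_unc:.0f} W/m^2K, anomalous: {anomalous}")
```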
A comprehensive computer-aided planning approach for universal energy access : case study of Kilifi, Kenya
In 2009, it was estimated that 1.4 billion people in the world lack access to electricity, and approximately 2.7 billion people rely on biomass as their primary cooking fuel. Access to reliable electricity and modern forms of energy for cooking can contribute to improvements in sectors beyond the energy industry, such as health, education, commerce, and agriculture, and has been shown to correspond with poverty alleviation and economic growth. A successful strategy towards universal access requires a careful assessment of the diverse energy services needs from the perspective of the beneficiaries, the impact on their economic and social development, and the environmental consequences. This thesis proposes a comprehensive methodology for the assessment of the appropriate modes of electrification and of heating and cooking for specific countries or regions. The software tools used for this analysis are incorporated in the proposed technology toolkit consisting of: the Reference Electrification Model (REM), used to determine the appropriate modes of electrification (grid extension, micro or isolated systems) given the current base scenario; the Reference Cooking Model (RCM), used to determine technology choices for the provision of modern heat for cooking; and the MASTER4all Model, used to evaluate the future macro-level impact of different energy access strategies in a specific region or a country as a whole, taking into account various business scenarios and regulatory policies. While the analytical strategy presented here is intended to be generalizable to other regions, it is based on a case study of Kilifi County in Kenya. The larger goal of this project, through the case study approach, is to provide a proof of concept for the decision support tools being developed that could be used in energy access expansion planning.
Mechanical and trajectory design of wearable Supernumerary Robotic Limbs for crutch use
The Supernumerary Robotic Limbs (SRL) is a wearable robot that augments its user with two robotic limbs, kinematically independent from the user's own limbs. This thesis explores the use of the SRL as a hands-free robotic crutch for assisting injured or elderly people. This paper first details the mechanical and material design choices that drastically reduced the weight of this SRL prototype, including advanced composite materials, efficient joint structure, and high-performance pneumatic actuators. The latter half of this paper characterizes the biomechanics of both traditional crutch-assisted and SRL-assisted ambulation, models this gait pattern with an inverted pendulum system, and derives equations of motion to create a simulation that examines the effect of various initial parameters. Finally, an optimum set of initial parameters is identified to produce a successful SRL-assisted swing.
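A sketch of the inverted-pendulum treatment of the swing phase: the body vaults over a rigid support of length L, and the simulation checks whether a given initial lean and forward velocity carry the pendulum past vertical. The parameters and the simple success criterion are illustrative assumptions, not the thesis model.

```python
import numpy as np

def simulate_vault(theta0, omega0, L=1.2, g=9.81, dt=1e-3, t_max=2.0):
    """Integrate the inverted-pendulum equation  theta'' = (g / L) * sin(theta),
    with theta measured from the upright position, and report whether the body
    vaults past vertical (theta crosses zero) during the swing."""
    theta, omega = theta0, omega0
    for _ in range(int(t_max / dt)):
        omega += (g / L) * np.sin(theta) * dt
        theta += omega * dt
        if theta >= 0.0:                  # passed over the support point
            return True
    return False

# Sweep initial forward velocity for a fixed initial lean of 20 degrees behind vertical:
for v0 in (0.8, 1.1, 1.4):                # hip velocity in m/s (illustrative)
    print(v0, simulate_vault(theta0=np.radians(-20), omega0=v0 / 1.2))
```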
Modeling and control of a fish-like vehicle
To understand the extremely complex hydrodynamics of fish swimming, it is desirable to build a mechanical prototype. This allows better cooperation of the "vehicle" under study than would be allowed with a live specimen. Draper Laboratory has undertaken the design and construction of a free-swimming fish robot called the Vorticity Control Unmanned Undersea Vehicle (VCUUV), patterned and scaled after a yellowfin tuna. The mechanical and electronic design of the VCUUV is versatile to allow ready variation of swimming parameters. Tests can be performed that will reveal the importance of each swimming pattern and how it contributes to the potentially superior efficiency of fish propulsion and how, ultimately, this mode of propulsion can be adapted to man-made vehicles. In this case of a mechanically complex and versatile robotic fish, a sophisticated control system algorithm is needed to ensure the motion closely approximates that of a live fish. Modeling and control of a hydrodynamic system is a difficult task, especially when the exact hydrodynamics have not yet been captured in a mathematical model. Based on some simplifying assumptions, a linear system model for the VCUUV is derived. Using state-space methods, a simulated controller is designed to govern this model. The ability of the controller to produce the desired system response is demonstrated, as well as robustness of the control algorithm in the presence of environmental disturbances and system model errors.
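A generic sketch of the state-space design flow described above (linear model, full-state feedback gain, closed-loop simulation), using LQR as the synthesis method; the matrices are a placeholder heading model, not the VCUUV's identified dynamics, and the thesis's actual controller may differ.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linear model: heading dynamics  [psi, r]' = A [psi, r] + B delta
A = np.array([[0.0, 1.0],
              [0.0, -0.8]])              # yaw-rate damping (illustrative)
B = np.array([[0.0],
              [0.5]])                    # tail-deflection effectiveness (illustrative)

# LQR: minimize  integral( x'Qx + u'Ru ) dt  ->  u = -K x
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("state-feedback gain K =", K)

# Closed-loop response toward a commanded heading psi_ref:
x, psi_ref, dt = np.zeros(2), np.radians(30), 0.01
for _ in range(600):
    u = -K @ (x - np.array([psi_ref, 0.0]))
    x = x + dt * (A @ x + B.flatten() * u)
print("heading after 6 s [deg]:", np.degrees(x[0]))
```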
Electromagnetic measurement tool for ultra high frequency radio frequency identification diagnostics
This thesis presents the design and analysis of a radio frequency identification (RFID) passive UHF emulation tag designed to be used as an environment evaluation tool. The tag implements the Auto-ID Center/EPCglobal Generation 1 RFID passive UHF tag protocol, and it implements a power detector on the received UHF signals. The power detector enables the tag to operate as a Field Probe providing instantaneous power level feedback at its location. Power level feedback is provided visually through on-probe LEDs (light emitting diodes), audibly through an on-probe speaker, and electronically as part of the communication protocol between the Field Probe and the reader. Experimental results presented here as well as the use of the Field Probe in real-world installations by the project sponsors have already shown that the Field Probe is a valuable tool in the design and analysis of RFID system installations and gross product packaging design.
On relative permeability : a new approach to two-phase fluid flow in porous media
Being valid for single-phase flow, Darcy's law is adapted to two-phase flow through the standard approach of relative permeability, in which permeability, rather than being a unique property of the porous medium, becomes a joint property of the porous medium and each fluid phase. The goal of this study is to find a proper, alternate approach to relative permeability that can describe two-phase flow in porous media while maintaining sound physical concepts, specifically that of a unique permeability exclusive to the porous medium. The suggested approach uses the concept of an average viscosity of the two-phase fluid mixture. Viscosity, the only fluid-characterizing term in Darcy's law, should -at least partially- explain two-phase flow behavior by becoming the two-phase flow property that varies with the saturation ratio of the two fluid phases. Three common mathematical averages are tested as potential viscosity averages. Aspects of two-phase flow in pipes are then considered to see whether two-phase flow behavior in porous media can be attributed to the fluid mixture alone. Total flow rate of the two-phase fluid mixture is modeled by using the fluid mixture average viscosity in Darcy's law. Using two-phase flow data from Oak et al. (1990a, 1990b), the harmonic average weighted by the reduced fluid saturations represents the average viscosity of liquid-gas mixtures in steady-state flow in imbibition. Extracting flow rates of the individual phases from the total flow rate of the fluid mixture is the next, but crucial, step that determines whether the average viscosity approach can replace that of relative permeability in solving common reservoir engineering problems. Liquid-liquid flow in both drainage and imbibition, and liquid-gas flow in drainage are not represented by a simple viscosity average, which indicates the need for further study into more complex viscosity averages.
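The average-viscosity idea can be sketched directly: Darcy's law keeps a single intrinsic permeability, and the fluid-dependent behavior is carried entirely by a saturation-weighted viscosity average. The harmonic average weighted by reduced saturations, which the study finds to represent steady-state liquid-gas imbibition data, is shown alongside the arithmetic and geometric alternatives; the sample values are illustrative.

```python
import numpy as np

def reduced_saturation(S_w, S_wr=0.2, S_nwr=0.1):
    """Reduced (effective) wetting-phase saturation given residual saturations."""
    return (S_w - S_wr) / (1.0 - S_wr - S_nwr)

def average_viscosity(mu_w, mu_nw, Sw_star, kind="harmonic"):
    """Saturation-weighted average viscosity of the two-phase mixture;
    Sw_star is the reduced wetting-phase saturation, Snw_star = 1 - Sw_star."""
    Snw_star = 1.0 - Sw_star
    if kind == "harmonic":
        return 1.0 / (Sw_star / mu_w + Snw_star / mu_nw)
    if kind == "arithmetic":
        return Sw_star * mu_w + Snw_star * mu_nw
    if kind == "geometric":
        return mu_w ** Sw_star * mu_nw ** Snw_star
    raise ValueError(kind)

def darcy_total_flow(k, A, dP_dL, mu_avg):
    """Darcy's law for the mixture: Q_total = k * A * dP/dL / mu_avg."""
    return k * A * dP_dL / mu_avg

# Illustrative values: a water-gas system at several water saturations.
k, A, dP_dL = 1e-13, 1.0, 1e5                # m^2, m^2, Pa/m
mu_w, mu_g = 1.0e-3, 1.8e-5                  # Pa*s
for S_w in (0.3, 0.5, 0.7):
    Sw_star = reduced_saturation(S_w)
    mu = average_viscosity(mu_w, mu_g, Sw_star, kind="harmonic")
    print(S_w, round(mu, 8), darcy_total_flow(k, A, dP_dL, mu))
```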
Product development strategy for LG Electronics in optical storage-based consumer electronics
With the dawning of the digital era, many home electronic products are emerging. One of the fastest growing and most widespread products in the market is the DVD player. Few digital products have achieved as fast a market penetration, and with as rapid a price drop, as the DVD player. This thesis addresses the product development strategy for my company, LG Electronics, in terms of short-range and mid- to long-range plans, specifically the means to sustain a reasonable profit margin in spite of the current competitive market situation. The system dynamics tool is used as a method of analysis and assessment of the current DVD player market. Based on the results of the analysis, I propose the following strategies, in terms of products and technologies, for LG Electronics to maintain sustainable growth in the industry. First, in the mid-range plan, the company should broaden its DVD product scope and make profit by adding value. Second, in the long-range plan, the company should draw a big picture for further growth by taking into account the development of new disruptive technologies and products in the industry.
Reconstruction and the debates on the "Synthesis of the Arts" in France, 1944-1962
My dissertation examines the collaborative efforts of different individuals and groups - such as Le Corbusier, the Salon des Réalités Nouvelles, Groupe Espace, and the Internationale Situationniste - which advocated a "synthesis of the arts" in the period from the Liberation until the beginning of the Fifth Republic. I consider a wide range of archival sources and projects, from the collective decoration of permanent buildings to temporary installations in galleries by way of outdoor art exhibitions and theatrical performances, many of which were sponsored by Eugène Claudius-Petit and the newly founded French Ministry of Reconstruction and Urbanism. The "synthesis of the arts" discourse was more than a faint humanist echo of the Wagnerian model of the Gesamtkunstwerk, or "total work of art": it was one of the primary routes along which the cultural and political conflicts of French modernization and governmentality were discussed. My study locates the "synthesis of the arts" amidst the effort to renovate a universalizing discourse linked to modernist art, on the one hand, and a nascent welfare-state notion of public space (and its correlative rhetoric of beauty, hygiene, functionality, and accessibility), on the other. As such, the postwar synthesis discourse not only reflected but directly participated in the development and expansion of the French "cultural state". Rather than showing this discourse as unitary, the dissertation explores its complex and sometimes contradictory dimensions by analyzing the political and social connotations of three different categories: a heroic model, associated with the figure of Le Corbusier; a bureaucratic model, developed by Groupe Espace (1951-1956); and an oppositional model, deployed by Asger Jorn, Pinot-Gallizio, and others who became associated with the Situationists (1954-1962). Case studies include Le Corbusier's Usine Claude et Duval in Saint-Dié and Unité d'Habitation in Marseille, Bernard Zehrfuss and Felix Del Marle's Régie Nationale Renault factory complex in Flins, Michel Ragon and Jacques Polieri's first Festival d'Art d'Avant-Garde, and Cobra and Situationist environments such as the Architects' House and the Cavern of Anti-Matter.
Computational modeling of expanded plasma plumes in vacuum and in a tank
Electric propulsion devices have been shown to offer substantial fuel savings for various space missions. Hall thrusters in particular have shown great promise over the years due to their near-optimum specific impulse for a number of space missions. The Hall thruster, however, releases a partially ionized plasma plume which contaminates any surface it comes into contact with. Backflow contamination can lead to sputtering and effluent deposition on critical spacecraft components. A computational method for studying these interactions was developed by David Oh in 1997: a Particle-in-Cell and Direct Simulation Monte Carlo (PIC-DSMC) algorithm that models the expansion of a plasma plume from a Hall thruster into a vacuum. In his work he implemented a plasma-surface interaction model which determined erosion rates on surfaces made of quartz, silicon, and silver, but he did not track the surface material removed. In this work, Oh's model is expanded to include the removal and tracking of material from generic spacecraft surfaces and the walls of a vacuum tank. Sputtering yields adopted in this model are based on sputtering theory developed by Matsunami and Yamamura. Since the plasma can have a negative impact on spacecraft subcomponents, a method for protecting the spacecraft (in the form of a protective shield) is proposed and studied, and recommendations are discussed.
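The extension described here amounts to bookkeeping of material sputtered from surfaces struck by plume ions. The sketch below shows one plausible way to accumulate eroded mass per wall element from macroparticle impacts; the yield function is a simplified threshold model standing in for the Matsunami-Yamamura empirical fits used in the thesis, and every constant (threshold energy, yield coefficient, target mass) is an assumed placeholder.

```python
# Sketch of surface-erosion bookkeeping for a PIC-DSMC plume simulation:
# ions striking a wall element remove material according to a sputtering-yield
# curve. The yield form and constants below are illustrative assumptions,
# not the Matsunami-Yamamura fits from the thesis.
import math
from dataclasses import dataclass

AMU = 1.66053906660e-27  # kg per atomic mass unit

def sputter_yield(energy_eV, threshold_eV=50.0, k=0.05):
    """Atoms removed per incident ion (simplified threshold form, assumed)."""
    if energy_eV <= threshold_eV:
        return 0.0
    return k * (math.sqrt(energy_eV) - math.sqrt(threshold_eV))

@dataclass
class WallElement:
    area_m2: float
    target_mass_amu: float      # e.g. ~60 amu for quartz (SiO2), assumed
    eroded_atoms: float = 0.0

    def record_impacts(self, n_macro, particles_per_macro, energy_eV):
        """Accumulate sputtered atoms from n_macro macroparticle impacts."""
        real_ions = n_macro * particles_per_macro
        self.eroded_atoms += real_ions * sputter_yield(energy_eV)

    def eroded_mass_kg(self):
        return self.eroded_atoms * self.target_mass_amu * AMU

# Example: 1e4 macroparticles (each representing 1e9 real ions) hitting at 300 eV
panel = WallElement(area_m2=0.01, target_mass_amu=60.0)
panel.record_impacts(n_macro=1e4, particles_per_macro=1e9, energy_eV=300.0)
print(f"eroded mass: {panel.eroded_mass_kg():.3e} kg")
```

In a full simulation the eroded atoms would themselves be injected as neutral particles and tracked until they redeposit, which is the backflow-contamination pathway the abstract describes.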
Study of the role of SMI in the Caterpillar supply chain
Strategic Managed Inventory (SMI) is an inventory replenishment process deployed by Caterpillar that blends elements of Vendor Managed Inventory (VMI) and Collaborative Planning, Forecasting, and Replenishment (CPFR). The SMI process calls for Caterpillar's suppliers to control the material replenishment process and hold inventory in strategic locations. SMI is designed such that Caterpillar and the supplier collaborate on replenishment plans and forecasts to ensure that material moves efficiently through the supply chain. The process is aimed at increasing supply chain flexibility, responsiveness, and performance. This paper examines the current deployment of the SMI process in Caterpillar's supply chain in an effort to determine how the company can better leverage this capability. It proposes potential frameworks for identifying future SMI opportunities and assessing part suitability. It also examines the cost drivers behind SMI. While the study identifies some challenges with the process, it concludes that the SMI process does lead to benefits for Caterpillar and its suppliers. It suggests that these benefits could be better leveraged by growing the capability slowly using the most proficient suppliers, establishing oversight for the SMI process, increasing supplier vetting, and crafting a way to gain visibility into current SMI usage.
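A part-suitability framework of the kind the abstract proposes can be thought of as a screening rule over demand stability, spend, and supplier capability. The sketch below is a minimal illustration of that idea only; the criteria, thresholds, and weights are assumptions for illustration and are not Caterpillar's actual SMI framework.

```python
# Minimal sketch of a part-suitability screen for supplier-managed replenishment.
# Criteria and thresholds are illustrative assumptions, not the thesis framework.
from statistics import mean, stdev
from typing import List

def demand_cv(history: List[float]) -> float:
    """Coefficient of variation of historical demand (lower = more stable)."""
    mu = mean(history)
    return stdev(history) / mu if mu > 0 else float("inf")

def smi_suitability(history: List[float], annual_spend: float,
                    supplier_score: float) -> bool:
    """Flag a part as an SMI candidate under assumed thresholds:
    stable demand, enough spend to justify collaboration, capable supplier."""
    return (demand_cv(history) < 0.5
            and annual_spend > 50_000          # assumed spend cutoff, USD
            and supplier_score >= 0.8)         # assumed vetting score in [0, 1]

# Example: steady monthly demand, moderate spend, strong supplier
monthly_demand = [120, 135, 128, 140, 125, 130, 138, 127, 133, 129, 136, 131]
print(smi_suitability(monthly_demand, annual_spend=80_000, supplier_score=0.85))
```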
Destabilizing habitual perception
Technology mediates our perception of the world. The tools of contemporary society's digital habit aim for transparency, creating an inextricable link between the human body, perception, and life-world. The resulting entanglement between humans and technology challenges the sensorium: illusion and reality seem to co-exist, space and time become compressed, and human relationships transmute into constant digital connections. This intimate yet tenuous bond with technology can automatize perceptual processes, leading to habitual perception: we recognize the world around us, but we cease to really see what is there. We are increasingly in danger of losing sight of how we exist within a technologically saturated environment, how we cultivate our curiosity, how we create, how we perceive, and ultimately: how we ought to move forward without losing the sense of how we relate to each other and who we are as humans. Art has the ability to subvert, highlight, and elucidate our tenuous relationship with technology, and to defamiliarize, "make strange," and shake up automatized and habitual processes of perception so as to re-establish a critical awareness of perception and perceptual processes. In this thesis, I explore the creative strategy of defamiliarization in my own practice and regard the works that I have produced at MIT as experimental and experiential frameworks that have the capacity to generate an awakening of critical awareness of perception. Besides providing documentation of projects, these texts may also be read as a non-linear record of the research, questions, and experiments related to the philosophy of technology, the relationship between art and technology, and the perceptual processes linked to experiential art.