Tailorability-focused recommendations for United States Air Force software acquisition policy
In order to adapt and respond to threats from near-peer adversaries that are evolving at an increasing pace, the U.S. Department of Defense (DoD) has focused on reforming software acquisition for rapid development and deployment of software capabilities to the battlefield. Military leaders have concentrated on accelerating development and increasing the frequency of deployment, encouraging developers to take risks to compress schedules. However, military systems must meet certain levels of performance and quality to successfully execute missions, and software systems have many different characteristics that must be considered during development. This thesis first details the DoD software acquisition process and new guidance from the Department and the U.S. Air Force (USAF) regarding software acquisition reforms. The existing policy is then examined to identify gaps in tailoring development processes to different software systems. After providing context on how software is developed and describing four process models to show that different processes are best suited to systems with certain characteristics, eight specific software system characteristics are identified: user, urgency, lifespan, performance (timing), quality/risk, size, integration, and requirements. Recommendations are then provided to the USAF and DoD for implementing policy and guidelines that help establish a tailorable software acquisition process based on these eight system characteristics. This thesis aims to help leaders and managers understand the technical characteristics of software systems and match them with appropriate development process designs and practices, rather than a one-size-fits-all blanket solution, so that the required quality and evolvability of military systems are not compromised in execution of the national security mission.
Development of VHH- and antibody-based imaging and diagnostic tools
The immune system distinguishes self from non-self to combat pathogenic incursions. Evasion tactics deployed by viruses, microbes, or malignant cells may impede an adequate response. In such cases, therapeutic interventions aid in the elimination of pathogens and the restoration of physiological homeostasis. A major roadblock in the development of such therapies is the reliance on imperfect detection methods to identify site(s) of infection, or to monitor immune cell recruitment to sites of infection or inflammation in vivo. The goal of this thesis is to overcome at least some of these limitations by utilizing novel tools that have been developed and refined in the laboratory to facilitate in vitro and in vivo characterization of specific immune subsets. We then track their recruitment to sites of active immune responses, such as infection or tumor progression sites. These tools consist of two components: one confers specificity for immune cells, and the other offers a site for labeling in a controlled manner. Single-domain antibodies (VHHs) from camelids are among the smallest (15 kDa) proteins that can recognize a diverse set of targets with excellent specificity. Chemoenzymatic labeling of molecules using sortase allows site-specific attachment of a single label of interest to a target protein containing the sortase recognition sequence LPXTG. VHHs specific for immune cell determinants, labeled with sortase technology, facilitate non-invasive and efficient monitoring of cells that infiltrate immunological niches in vivo in a manner not possible until now. This thesis presents the development of novel methods for in vitro and in vivo detection and imaging of specific immune subsets and their recruitment to sites of an active immune response. This thesis aims to (1) use DNA oligomers as a scaffold to push the limits of fluorescence labeling yield, (2) create small and efficient biosensors for the rapid capture of specific lymphocyte subsets from peripheral blood samples using VHHs and graphene oxide nanosheets, and (3) develop radioisotope-labeled VHHs to track immune cell subsets and elucidate the roles of innate and adaptive immune components in the course of infection. Chapter 1 describes a new method for protein labeling via site-specific modification of proteins using a DNA scaffold. To avoid self-quenching of multiple fluorophores localized in close proximity, Holliday junctions were used to label proteins site-specifically with fluorophores. Holliday junctions enable the introduction of multiple fluorophores with reasonably precise spacing to improve fluorescence yield for both single-domain and full-sized antibodies, without deleterious effects on antigen binding. Chapter 2 presents a biosensor for characterization of leukocytes from whole blood using a graphene oxide surface coated with single-domain antibody fragments. This format allows quick and efficient capture of distinct white blood cell subpopulations from small samples of whole blood and does not require any specialized equipment such as cell sorters or microfluidic devices. Chapter 3 documents a non-invasive immuno-PET imaging method for tracing CD8+ T cells in the course of influenza A infection to better elucidate their protective mechanism(s) and immunopathological effects.
Temperature response of the ultra-high throughput mutational spectrometer
The Ultra-High Throughput Mutational Spectrometer is an instrument designed to separate mutant from wild-type DNA through capillary electrophoresis. Since this technique uses the melting point of the molecule to distinguish between sequences of base pairs, temperature control is crucial to the success of the device. The purpose of this analysis is to characterize the temperature response of the instrument, taking into account the heat dissipated by the 10,000 capillaries in the system during electrophoresis. Analytical models, finite element analysis, and physical models were used to predict the steady-state response of the system to heat generated by capillary electrophoresis. The analytical models estimated a steady-state offset of 0.2 K for water at 3.3×10⁻⁴ m³/s (20 L/min) and 1.0 K for water at 6.7×10⁻⁵ m³/s (4.0 L/min) and predicted that the system would reach steady state within several seconds. Finite element analysis determined that the gel inside the capillaries would have a steady-state offset of 0.24 K. The physical system, which simulated the Joule heating of the capillaries using an immersion heater, yielded a steady-state offset of 0.24 K at 3.3×10⁻⁴ m³/s and 0.65 K at 6.7×10⁻⁵ m³/s, but the settling time in both cases was on the order of 500 s.
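For orientation, these offsets are consistent with a simple steady-state energy balance on the coolant stream (a back-of-the-envelope check added here for illustration, assuming roughly 280 W of total Joule heating; the thesis's analytical models are more detailed):

\[
\Delta T = \frac{\dot{Q}}{\rho \dot{V} c_p}
\approx \frac{280\,\mathrm{W}}{(1000\,\mathrm{kg/m^3})(3.3\times10^{-4}\,\mathrm{m^3/s})(4186\,\mathrm{J/kg\,K})}
\approx 0.2\,\mathrm{K},
\]

and the same \(\dot{Q}\) at \(\dot{V} = 6.7\times10^{-5}\,\mathrm{m^3/s}\) gives \(\Delta T \approx 1.0\,\mathrm{K}\), matching the two quoted flow-rate cases.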
High-speed continuous micro-contact printing
Micro-contact printing (μCP) is a technology that prints directly off a patterned elastomeric stamp by transferring only a molecular monolayer of ink to a surface, providing a low-cost, high-resolution, and widely applicable method of nano-scale patterning. Roll-to-roll processing is recognized as one of the most promising models for high-volume micro-contact printing, since it offers advantages such as high throughput, convenient material handling, and conformal contact propagation. We have designed and built a tool to study the behavior of micro-contact printing in a roll-to-roll paradigm, with the threefold objective of printing at high speed, over large areas, and with good quality. Speeds as high as 400 feet/min were achieved with good printing quality. This thesis provides details of this roll-to-roll high-speed micro-contact printing technique, from mechanical design to system control to final experimental result analysis, with a concentration in system control. We were also able to keep distortions as low as 28 microns over an area of 5.8"x5" and maintain dimensional distribution within 1 micron. A proof-of-concept continuous etching tool was also built to match the speed of the print tool.
Characterization of a peptide biomaterial used for cell-seeded scaffolds with an analysis of relevant stem cell policy
(cont.) We describe the ethical debate and political climate that led to the decision. An examination of the publication data shows that researchers in the United States have in fact remained leaders in the field until this point, in part because U.S. federal funding has also been available to early mover international groups who appear to have abided by the restrictions.
Reconciling modal and time domain techniques in photonic simulation
Three-dimensional Finite Difference Time Domain (3D-FDTD) simulation serves as the undisputed gold standard for device design and verification in silicon photonics. However, 3D-FDTD is prohibitively expensive for large devices, let alone cascaded systems, leading to the pursuit of a diversified simulation toolkit for acquiring the full response of individual devices or of combined (cascaded or parallel) devices. A modular approach is therefore adopted. For analyzing silicon photonics at the systems level, transfer matrices in the modal/frequency domain are ubiquitously used. These matrices encapsulate the frequency response as well as the coupling coefficients between the various optical eigenmodes across all device ports. In this thesis we formulate and explore the performance of a fast, memory-efficient, stand-alone FDTD-based algorithm that uses transfer matrices within the simulation window for the optical characterization of adiabatic mode-evolution devices. This class of adiabatic devices is vital to silicon photonics systems thanks to their broadband nature and reliable performance under fabrication-induced perturbations and parameter variation. In our approach, the simulation domain is divided into blocks which can be simulated independently in the time domain and then combined using modal transfer matrices. It is critical that we match the accuracy of a 3D-FDTD simulation for a base class of devices, and we argue that time-domain and modal techniques can be reconciled in a simulation environment where these devices appear and play a significant role. This environment might target a particular device or even an entire section of the chip. When compared to pure 3D-FDTD, this approach proves advantageous from a computational standpoint: it yields, in the limit of large devices, an asymptotic linear speedup when the blocks are simulated sequentially, and can further yield a quadratic speedup when an extra level of parallelization is employed.
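As an illustration of the block-combination step (a minimal sketch assuming a single-mode, two-port setting; the names and shapes are illustrative, not the thesis's actual code), cascading independently simulated blocks reduces to per-frequency matrix multiplication:

```python
import numpy as np

def cascade(blocks):
    """Combine per-block modal transfer matrices, frequency by frequency.

    blocks: list of arrays of shape (n_freq, 2, 2), one per independently
    simulated block, ordered along the propagation direction.
    """
    n_freq = blocks[0].shape[0]
    total = np.tile(np.eye(2, dtype=complex), (n_freq, 1, 1))
    for t in blocks:
        total = t @ total   # matmul broadcasts over the frequency axis
    return total

# Example: three identical near-identity blocks at 5 frequency points
rng = np.random.default_rng(0)
t_block = np.tile(np.eye(2, dtype=complex), (5, 1, 1)) \
          + 0.01j * rng.standard_normal((5, 2, 2))
system = cascade([t_block, t_block, t_block])
print(system.shape)   # (5, 2, 2)
```

Each block's matrix can come from its own (parallelizable) time-domain run, which is the source of the speedups described above.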
Secure public realm within a city of violence
With an increasing number of cities experiencing chronic violence and conflict within their boundaries, the question of how architecture can effectively intervene to create a secure public realm in pluralistic and fractious urban environments grows more vital. This thesis explores the spatial and social notion of sanctuary as an architectural strategy in such contexts, using the design of a central transit station in a northern neighborhood of Karachi, Pakistan as a case study. Drawing upon sociological theory as well as precedent projects ranging from Johannesburg to Bogota, we come to see the creation of sanctuary as a deliberate construction of shared identity and experience. This strategy draws on four tactics that shift both the built and psychological environment and work in tandem to reinforce and amplify each other's effects: -- Partition (the separation of 'sacred' content from the 'profane' context) -- Ritual (strengthening psychological security through the repeated and familiar) -- Appropriation (empowering people to take ownership of portions of the space) -- Monumentality (creating a physical object upon which common values can be projected) Using these tactics as a foundation, the design's architectural patterns engender a variety of systems to accommodate the diverse program and user demands inherent in a project of this scale and complexity. Through this investigation, the design proposes a new type of defensible infrastructure, relying not only on fortifying a space but also on strengthening the psychological resilience of people through architectural intervention.
Innovation in mature industries : recent impacts of the oil & gas and automobile technological trends on the steel industry
In order to survive, the steel industry has undergone traumatic changes in recent years. Thirty years of overcapacity combined with a slow-growing market have steadily eroded the profitability of steel companies, particularly in developed economies. These factors shaped an industry profile defined by a relentless quest for cost cutting and efficient operations. Regarding innovation, the approach of the steel industry has been reactive, basically following market requirements. The industry has historically found itself far from its customers' businesses and has struggled to find innovative products and services that could meet an unperceived or unarticulated need so as to propose higher value and grow its market. Two important customers of the steel industry are the oil and gas and the automotive industries, two mature businesses as well. Even though change in these two steel-consuming industries has also been relatively slow, the more recent technological trends analyzed in this work suggest an upcoming faster pace of change. This thesis examines these recent technological trends in the oil and gas and automotive industries with regard to their potential impact on the steel industry. Some of the technological gaps that might be encountered in those trends are examined, in particular where substitution of lighter materials for steel is a possible avenue. Other cases where new technological trends may affect consumption of steel are also analyzed. Following these lines, the thesis goes on to analyze the steel industry's general approach to innovation and R&D and speculates on provocative alternatives to that approach that could put the industry in a better position for the future.
Framework for communication planning in a media intensive society
In today's media-intensive society, where consumers are well equipped to resist advertisers' strategies, creative, and messages, it is becoming increasingly difficult for advertisers to break through the cacophony of noise to persuade the consumer that their product or service is worthy of attention and eventual purchase. The purpose of this research is to suggest a framework for use in communication planning that is measurably successful in moving consumers up a succession of steps, beginning prior to awareness and continuing through purchase, utilizing a combination of traditional and digital advertising tactics and techniques. We assume that in today's media-intensive society, where traditional and digital advertising are thought to cannibalize one another, an integrated approach across multiple communication channels is the most effective way to reach and motivate the modern consumer, whose media consumption habits are increasingly fractured. The framework proposed within the paper will contribute to a better understanding of how to leverage traditional and digital media tactics within communication planning.
Acoustic landmark detection and segmentation using the McAulay-Quatieri Sinusoidal Model
The current method for phonetic landmark detection in the Spoken Language Systems Group at MIT is performed by SUMMIT, a segment-based speech recognition system. Under noisy conditions the system's segmentation algorithm has difficulty distinguishing between noise and speech components and often produces a poor alignment of sounds. Noise robustness in SUMMIT can be improved using a full segmentation method, which allows landmarks at regularly spaced intervals. While this approach is computationally more expensive than the original segmentation method, it is more robust in noisy environments. In this thesis, we explore a landmark detection and segmentation algorithm using the McAulay-Quatieri Sinusoidal Model, in hopes of improving the performance of the recognizer in noisy conditions. We first discuss the sinusoidal model representation, in which rapid changes in spectral components are tracked using the concept of "birth" and "death" of underlying sinewaves. Next, we describe our method of landmark detection with respect to the behavior of sinewave tracks generated from this model. These landmarks are then interconnected to form a graph of hypothetical segments.
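To make the birth/death bookkeeping concrete, the sketch below matches spectral peaks frame to frame in the spirit of the McAulay-Quatieri model (the matching threshold and data structures are illustrative assumptions, not the implementation used with SUMMIT):

```python
def update_tracks(tracks, peaks, max_hz=50.0):
    """Continue, kill, or birth sinewave tracks between analysis frames.

    tracks: list of dicts {'freqs': [...], 'alive': bool}
    peaks:  list of peak frequencies (Hz) found in the current frame
    """
    unclaimed = set(range(len(peaks)))
    for tr in (t for t in tracks if t['alive']):
        last = tr['freqs'][-1]
        # Claim the nearest unclaimed peak within the matching interval.
        best = min(unclaimed, key=lambda i: abs(peaks[i] - last), default=None)
        if best is not None and abs(peaks[best] - last) <= max_hz:
            tr['freqs'].append(peaks[best])
            unclaimed.discard(best)
        else:
            tr['alive'] = False                               # "death"
    for i in unclaimed:
        tracks.append({'freqs': [peaks[i]], 'alive': True})   # "birth"
    return tracks
```

Landmarks can then be hypothesized at frames where many tracks are born or die at once, which is the track behavior the detection method keys on.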
Organizational structure in the hospitality industry : a comparative analysis of hotel real estate investment trusts (REITs) and hotel C-Corporations
Current legislation has made it possible for real estate investment trusts (REITs) to earn income beyond purely passive sources such as rents from real property or interest from mortgages on real property. As a result, both the number and market capitalization of hotel REITs have substantially increased, and the difference between hotel REITs and hotel C-corporations has narrowed. However, companies such as Starwood Hotels have reverted back to the C-corporation structure. Given these organizational changes and the increasing dominance of hotel REITs, there is a need to analyze hotel REITs and hotel C-corporations in a comparative framework. Equity REITs and C-corporations have been studied extensively; however, research on various organizational forms in the hospitality industry is somewhat limited. This study attempts to fill this gap by comparing the stock market performance of publicly traded hotel REITs with hotel C-corporations from 1993 to 2011. Furthermore, the impact of significant events such as mergers and acquisitions and legislative amendments on firms' stock prices is also observed. Finally, detailed case studies of companies that underwent corporate restructuring are conducted. The research objective of this thesis is to examine (a) whether REITs are an efficient organizational structure for the lodging industry; and (b) whether the tax benefits of REITs offset the regulatory constraints they face. The study infers that REIT acquirers have an advantage in mergers and acquisitions, but in all other situations the net benefits of REITs are not as clear. On a market-capitalization-weighted basis, the performance of hotel REITs and hotel C-corporations was almost identical; however, when equally weighted, hotel REITs outperformed their C-corporation counterparts. In addition, the results show that REIT returns are highly volatile. On a broad level, the hospitality business has two distinct segments -- ownership of hotels and management of hotels -- and the degree of operating flexibility offered is one of the main factors that differentiate REITs from their C-corporation counterparts. Therefore, this study concludes that the choice of corporate structure depends greatly on a firm's business strategy.
System design and manufacturability of concrete spheres for undersea pumped hydro energy or hydrocarbon storage
Offshore wind and energy storage have both gained considerable attention in recent years as more wind turbine capacity is installed, less attractive/economical space remains for onshore wind, and load-leveling issues make integrating wind power into the existing electrical grid difficult. For depths greater than 50 m, floating wind turbines are expected to be more economical than pylon-based wind turbines. In order for offshore wind energy to maintain a steady supply to the grid without excessive ramping-up and ramping-down of onshore, fossil-fueled power generation units, and to reduce the cost of wind integration, some form of energy storage is required. The greater water depths in which floating wind turbines are located provide an opportunity for a unique energy storage concept that takes advantage of the hydrostatic pressure at ocean depths to create a robust pumped energy storage device. Coupling this energy storage system with a floating wind farm provides a more consistent and predictable power plant that could ultimately lessen the cost of large-scale wind integration, consistently reduce fossil fuel use and greenhouse gas (GHG) emissions, and load-level onshore generation. Additionally, the same type of device structure can be used for undersea hydrocarbon storage during periods of hurricane/tropical storm shut-ins at oil wellheads, maintaining wellhead production without risking personnel or environmental safety due to storm evacuations at the rigs on the surface.
Additive manufacturing of microfluidics for evaluation of immunotherapy efficacy
This thesis presents the development of an entirely 3D-printed, monolithic microfluidic platform for evaluating the efficacy of immunotherapy treatments. The platform provides a dynamic microenvironment for perfusing and sustaining tumor samples extracted from a biopsy. The finely featured, non-cytotoxic, and transparent tumor trap is integrated with threaded connectors for rapid, leak-proof fluid interfacing, an in-line trap for removal of bubbles arising from oxygenated media flow or tumor loading procedures, and a network of microchannels for supplying media and immunotherapies to a retained tumor fragment. The device configuration is capable of modeling interactions between tumors and various drug treatments. Tested devices were additively manufactured in Pro3dure GR-10, a relatively new, high-resolution stereolithographic resin with properties suitable for biomedical applications. Retention of human tumor fragments within the printed microfluidic device is confirmed through overlaid bright-field and fluorescence micrographs, which permit visualization of individual tumor cells within the biological sample. Under dynamic perfusion of media, live tumor fragments can be sustained for a period of at least 72 hours. Confocal microscopy confirmed that sustained tumors and the resident lymphocytes exhibited a response to perfused immunotherapy treatments compared to an untreated control. With further validation, the proposed platform may be capable of providing critical predictive insight into an individual's response to selected immunotherapies.
A framework for quantifying complexity and understanding its sources : application to two large-scale systems
The motivation for this work is to quantify the complexity of complex systems and to understand its sources. To study complexity, we develop a theoretical framework where the complex system of interest is embedded in a broader system: a complex large-scale system. In order to understand and show how the complexity of the system is impacted by the complexity of its environment, three layers of complexity are defined: the internal complexity, which is the complexity of the complex system itself; the external complexity, which is the complexity of the environment of the system (i.e., the complexity of the large-scale system in which the system is embedded); and the interface complexity, which is defined at the interface of the system and its environment. For each complexity we suggest metrics and apply them to two examples. The examples of complex systems used are two surveillance radars: the first is an Air Traffic Control radar, the second a maritime surveillance radar. The two large-scale systems in which the radars are embedded are therefore the air and maritime transportation systems. The internal complexity metric takes into account the number of links, the number of elements, and the function and hierarchy of the elements. The interface complexity metric is based upon the information content of the probability of failure of the system as it is used in its environment. The external complexity metric deals with the risk configuration of large-scale systems, emphasizing the reliability and the tendency toward catastrophe of the system.
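Read literally, "information content of the probability of failure" corresponds to the standard Shannon surprisal; under that assumption (the thesis's exact normalization is not reproduced here):

\[
C_{\mathrm{interface}} = I(P_{\mathrm{fail}}) = -\log_2 P_{\mathrm{fail}},
\]

so rarer failure events in the operating environment carry more information.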
Adding identity to device-free localization systems
Recent advances in wireless localization systems show that by transmitting a wireless signal and analyzing its reflections, one can localize a person and track her vital signs without any wearables. These systems can localize with high accuracy even when multiple people are present in the environment. However, a primary limitation is that they cannot identify people and determine who the monitored person is. In this thesis, we present a system for identifying people with high accuracy based only on their wireless reflections. We use a semi-supervised learning classifier to assign labels to each person tracked by the device-free localization system, leveraging recent advances in machine learning to exploit the large amount of unlabeled data available. A key challenge that we solve is obtaining the labels used to guide the classifier. To get labeled data, we devised a novel scheme that combines data from a sensor that people carry with data from the wireless localization system. We deployed and evaluated our system in people's homes, and we present a case study of how it can help monitor people's health more effectively.
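A minimal sketch of the semi-supervised step, using scikit-learn's LabelSpreading as a stand-in classifier (the feature extraction and classifier choice here are illustrative assumptions, not the system's actual implementation):

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Placeholder features per tracked trajectory (e.g., gait/location statistics
# derived from the wireless reflections); -1 marks unlabeled trajectories.
X = np.random.rand(200, 8)
y = -np.ones(200, dtype=int)
y[:10] = 0     # labels recovered by matching against the carried sensor
y[10:20] = 1

model = LabelSpreading(kernel='knn', n_neighbors=7)
model.fit(X, y)
identities = model.transduction_   # inferred identity for every trajectory
```

The small labeled seed comes from the carried-sensor matching scheme, and label propagation exploits the much larger unlabeled set.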
Sampling and quantization for optimal reconstruction
This thesis develops several approaches for signal sampling and reconstruction given different assumptions about the signal, the type of errors that occur, and the information available about the signal. The thesis first considers the effects of quantization in the environment of interleaved, oversampled multi-channel measurements with the potential of different quantization step size in each channel and varied timing offsets between channels. Considering sampling together with quantization in the digital representation of the continuous-time signal is shown to be advantageous. With uniform quantization and equal quantizer step size in each channel, the effective overall signal-to-noise ratio in the reconstructed output is shown to be maximized when the timing offsets between channels are identical, resulting in uniform sampling when the channels are interleaved. However, with different levels of accuracy in each channel, the choice of identical timing offsets between channels is in general not optimal, with better results often achievable with varied timing offsets corresponding to recurrent nonuniform sampling when the channels are interleaved. Similarly, it is shown that with varied timing offsets, equal quantization step size in each channel is in general not optimal, and a higher signal-to-quantization-noise ratio is often achievable with different levels of accuracy in the quantizers in different channels. Another aspect of this thesis considers nonuniform sampling in which the sampling grid is modeled as a perturbation of a uniform grid. Perfect reconstruction from these nonuniform samples is in general computationally difficult; as an alternative, this work presents a class of approximate reconstruction methods based on the use of time-invariant lowpass filtering, i.e., sinc interpolation. When the average sampling rate is less than the Nyquist rate, i.e., in sub-Nyquist sampling, the artifacts produced when these reconstruction methods are applied to the nonuniform samples can be preferable in certain applications to the aliasing artifacts, which occur in uniform sampling. The thesis also explores various approaches to avoiding aliasing in sampling. These approaches exploit additional information about the signal apart from its bandwidth and suggest using alternative pre-processing instead of the traditional linear time-invariant anti-aliasing filtering prior to sampling.
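To make the sinc-interpolation reconstruction concrete, here is a minimal sketch applied to samples on a perturbed uniform grid (the test signal, perturbation level, and rates are illustrative assumptions):

```python
import numpy as np

def sinc_reconstruct(sample_times, samples, t_out, T):
    """Approximate reconstruction with a time-invariant lowpass kernel:
    each nonuniform sample contributes a sinc centered at its actual time."""
    # np.sinc(x) = sin(pi x) / (pi x)
    return np.array([np.sum(samples * np.sinc((t - sample_times) / T))
                     for t in t_out])

T = 0.01                                       # nominal sampling period
n = np.arange(100)
tn = n * T + 0.1 * T * np.random.randn(100)    # perturbed sampling grid
x = np.cos(2 * np.pi * 12 * tn)                # bandlimited test signal
t_dense = np.linspace(0, 0.9, 500)
x_hat = sinc_reconstruct(tn, x, t_dense, T)
```

This is the generic class of approximate methods referred to above, not the thesis's specific estimators; its artifacts differ from the aliasing produced by uniform sub-Nyquist sampling.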
Design and characterization of artificial transcriptional terminators
Ten new terminators were designed based on previous research on terminator structure and termination efficiency. The terminators were built by PCR extension, ligated into a BioBrick plasmid backbone, and transformed into TOP10 cells. Characterization devices were built to test the terminators; input and output of each terminator were measured by expression of RFP and GFP. Characterization devices were then placed into the E. coli strain CW2553/pJAT18, which hijacks the arabinose transport system to provide controlled input to the terminator. Of the ten terminators designed and tested, BBa_B1002, BBa_B1004, BBa_B1006, and BBa_B1010 proved to be strong terminators with termination efficiencies above 90%. These terminators may be obtained from the Registry of Standardized Parts at parts.mit.edu.
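For reference, termination efficiency in such upstream/downstream reporter designs is conventionally computed as one minus the normalized readthrough; a minimal sketch (the function name, normalization control, and example values are illustrative assumptions):

```python
def termination_efficiency(gfp_out, rfp_in, readthrough_no_term=1.0):
    """Percent of transcripts terminated, from downstream (GFP) vs.
    upstream (RFP) reporter signal, normalized by a no-terminator control."""
    readthrough = (gfp_out / rfp_in) / readthrough_no_term
    return 100.0 * (1.0 - readthrough)

# e.g., a strong terminator: downstream signal is 8% of upstream
print(termination_efficiency(gfp_out=80.0, rfp_in=1000.0))   # 92.0
```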
Thermal properties of granular silica aerogel for high-performance insulation systems
Based on mounting evidence in support of anthropogenic global climate change, there is an urgency for developments in high-performance building techniques and technologies. New construction projects provide substantial opportunities for energy efficiency measures, but they represent only a small portion of the building stock. Conversely, while existing buildings are plentiful, they typically have a much narrower range of feasible energy efficiency options. Therefore, there will continue to be a need for the development of new and improved energy efficiency measures for new building construction, and even more so for deep retrofits of existing buildings. This thesis provides an overview of research performed on the ongoing development at MIT of a high-performance panelized insulation system based on silica aerogel. Two test methods were used for measuring the thermal conductivity of the granules: the transient hot-wire technique and the guarded hot-plate system. Utilizing the hot-wire set-up, it was demonstrated that compressing a bed of granules decreases the thermal conductivity of the system until a minimum is reached around the monolithic density of the aerogel. For the Cabot granules, this minimum was seen at 13 mW/m·K and about 150 kg/m³. The MIT granules showed equal performance to the Cabot granules at bed densities 20-30 kg/m³ lower. The hot-plate testing experimentally evaluated previous analytical predictions regarding the conductivity impact of the internal panel truss and the under-prediction of radiant heat transfer in the hot-wire method. Hot-wire testing was also done in a vacuum chamber to quantify potential performance improvements at reduced air pressures. Since a vacuum would require the incorporation of a barrier film into the panel system, analyses were performed on the thermal bridging potential and gas diffusion requirements of such a film. Additionally, physical prototyping was done to explore how the film would be incorporated into the existing panel design. The aerogel-based insulation panel being developed at MIT continues to show promise, though there are still plenty of opportunities remaining in the development cycle.
Colloidal magnetic fluids as extractants for chemical processing applications
The feasibility of using high gradient magnetic separation (HGMS) to separate Fe₃O₄ nanoparticles was studied in this work. We present a general model for nanoparticle capture based on calculating the limit of static nanoparticle buildup around the collection wires in an HGMS column. Model predictions were compared successfully with experimental results from a bench-scale HGMS column. Permanent capture of individual nanoparticles is limited by diffusion away from the wires; however, 60-125 nm aggregates of particles can be captured permanently in the bench-scale column. The model provided estimates of the minimum particle size for permanent capture of individual nanoparticles and nanoparticle aggregates.
Redefining the typology of land use in the age of big data
Land use classification is important as a standard for land use description and management. However, current land use classification systems are problematic. Labels such as "residential use" and "commercial use" do not fully reveal how land is used in terms of function, mixed use, and change over time. As a result, land use planning often acts as a natural prompt for segregation, and land use is poorly connected with other fields of urban studies such as transportation and energy consumption. These problems arise partly because land use classification has been a matter of expediency rather than rigorous thought. Recent research on land use classification has mainly focused on methods for estimating land use types, without challenging the conventional definition of land use typology itself. In contrast, this thesis asks a more fundamental question: what are the elements, the principles, and the process for building a land use typology for given purposes? This thesis accordingly proposes a syntax for developing a land use typology, in which five basic elements compose the framework of land use description: land use function, land use intensity, land use connectivity, probability, and scale. While the elements are abstract concepts, when developing a land use typology each of them can be defined with specific measures for purposes such as land use planning, land use management, energy analysis, or transportation study. After the land use typology is composed from the defined elements, it can be applied to examine mixed land use, land use conflict, and land use change and estimation. The syntax then proposes the basic principles and process for developing a satisfactory land use typology, with respect to reliability and validity, significance and necessity, measurability and operability, and adaptability and flexibility. With that, this thesis argues that beyond the theoretical definition, the practical context, such as data availability or planning schema, will influence the feasibility of a land use typology. While the scope of the syntax can be limited by practical tools and the availability of data, the coming age of big data provides a changing context for land use typology. The case study that follows illustrates such a process of developing a land use typology with geo-social network data. The case develops a social-media-based land use typology, collects data for two example cities, Boston, U.S. and Shenzhen, China, and applies the defined land use typology to classify their uses of land. As a result, Boston's land use is classified by its function, intensity, and level of mixed use; Shenzhen's land use is classified by its intensity, connectivity, and level of mixed use. Compared with conventional land use classification systems, the social-media-based typology provides a more comprehensive description of land use, with its focus on the human activities of the city and on multiple dimensions of urban land use. It also has advantages in the flexibility and efficiency of data collection. In conclusion, the syntax of land use typology highlights the process of building a land use typology by defining its basic components. It enables many possibilities for land use description with the help of big data, and reserves enough space to go beyond existing tools and techniques.
Finally, the thesis proposes future studies on different interpretations of the syntax, its application to planning tools and systems, and its potential for new types of land use.
Biologically-plausible six-legged running : control and simulation
This thesis presents a controller which produces a stable, dynamic 1.4 meter per second run in a simulated twelve-degree-of-freedom six-legged robot. The algorithm is relatively simple; it consists of only a few hand-tuned feedback loops and is defined by a total of 13 parameters. The control utilizes no vestibular-type inputs to actively control orientation. Evidence from perturbation, robustness, motion analysis, and parameter sensitivity tests indicates a high degree of stability in the simulated gait. The control approach generates a run with an aerial phase, utilizes force information to signal aerial-phase leg retraction, has a forward running velocity determined by a single parameter, and couples stance and swing legs using angular momentum information. Both the hypotheses behind the control and the resulting gait are argued to be plausible models of biological locomotion.
Optical system for high-speed AFM
This thesis presents the design and development of an optical cantilever deflection sensor for a high-speed Atomic Force Microscope (AFM). This optical sensing system is able to track a small cantilever while the X-Y scanner moves in the X-Y plane at 1 kHz over a large range of 50x50 microns. To achieve these requirements, we evaluated a number of design concepts, from which the lever method and the fiber collimator method were selected. Experiments were performed to characterize the performance of the integrated AFM and to show that cantilever tracking was accomplished while the scanner was in operation. A triangular grating was imaged with the lever-method optical subassembly integrated with the scanner to demonstrate the effectiveness of the approach.
Pattern classification of terrain during amputee walking
In this thesis I study the role of extrinsic sensing (sensors placed on the body) versus intrinsic sensing (instruments placed on an artificial limb) and determine a robust set of sensors, from physical and reliability constraints, for terrain adaptation in a robotic ankle prosthesis. Further, I collect a novel data set containing seven able-bodied participants walking over 19 terrain transitions and seven amputees walking over 9 transitions, forming the largest collection of transitions to date, using an exhaustive set of sensors: inertial measurement units, gyroscopes, kinematics from motion capture, and electromyography from 16 sites on the lower limbs for non-amputee subjects and 9 sites for amputee subjects. This work extends previous work [3] by using more conditions, a larger subject group, and more sensors on amputees, and by including non-amputees. I present a novel machine learning algorithm that uses sensor data during rapid transitions, from pre-foothold to just prior to post-foothold, to predict different terrain boundaries. This advances the field of biomechatronics and our understanding of terrain adaptation in people both with and without amputations, contributes to the development of a fully terrain-adaptive robotic ankle prosthesis, and improves the quality of life for the physically challenged. Specifically, we set out to show that between pre- and post-foothold, the ankle and knee positions calculated using an IMU attached to an amputee's powered prosthetic ankle can discriminate between 9 conditions with greater than 99% accuracy. Our results suggest that myography as a non-volitional sensing modality for terrain-adaptive prostheses was not needed.
Adaptive governance of contested rivers : a political journey into the uncertain
Governance of international rivers is characterized by complex institutional arrangements aimed at minimizing uncertainty and making it difficult for participants to avoid their responsibilities. However, as new information emerges, new impacts of activities on rivers are identified, new stakeholders emerge and new technologies are developed, international river management agreements and treaties may have to be modified. At the very least, the implementation of the governance arrangements may need to be adjusted. Most river governance agreements are the product of extended negotiations in which the parties work hard to codify and define the details. This makes the task of modifying the agreements, or even of implementing them in new ways, difficult. In some cases the details and format of the institutional arrangements make it hard to respond to the changing nature of the social and ecological problems that emerge over time. In other cases they do not. This raises the question, "Why and how do efforts to formulate international water resource arrangements that bring together countries with common resource management concerns but conflicting interests, limit or support needed adjustments?" This dissertation explores what I call the conventional versus the adaptive approach to international river basin governance. The former makes it hard to adjust over time; the latter, less so. Climate change appears to be increasing the need for flexibility in river basin governance. So, I compare how institutional arrangements that reflect a conventional approach to uncertainty and conflict impede the ability of water governance participants to make necessary adjustments, while institutional arrangements that reflect an adaptive approach are more likely to provide the flexibility that is required. Case studies of the navigation and water protection regimes for the Danube River and the benefit sharing agreement for the Nile River provide the basis for my conclusions.
Integrated feedback circuit for OLED display driver
Organic LEDs (OLEDs) offer the potential of ultra-low-power, portable display technology. The chief barrier to their usage lies in producing OLEDs that emit light at predictable and consistent amplitudes. We propose the use of optical feedback to generate the desired luminosity pixel by pixel and implement this technique in an integrated silicon chip. Simulation and verification of fabricated integrated circuits with deposited OLEDs validate the utility of the technique.
Language for describing physics
Sensors embedded within hardware platforms such as smart-watches and cars read in streams of data. These sensor data may be related to each other by invariants or may have other value constraints, but computing in sensor platforms currently ignores these invariants between sensor data. If the programmer wants to exploit these invariants to perform safety checks or optimize performance, she has to hard-code the invariants in the program. To exploit invariants in software automatically, the compiler for each sensor platform's language could be modified to be aware of different sets of invariants in the programs it compiles, or the compilers could take in a configuration file that describes these invariants. This MEng thesis introduces Newton, a language in which such configuration files can be written, as well as a compile-time library and a runtime library that other compilers can use to make compile-time transformations to their source code and to exploit the invariants in a Newton description at runtime. We introduce two compile-time algorithms that transform the intermediate representations of other compilers. The first transformation adds reliability by checking invariants on program variable values at runtime and running an error-handler function if invariants are violated. The second transformation trades off the reliability gained from sensor redundancy for performance by removing code that deals with redundant sensors. This thesis describes twelve examples of realistic physical systems that may benefit from using Newton.
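As an illustration of the first transformation, consider a hypothetical invariant stating that accelerometer magnitude stays below 16 g. The guarded read below is a conceptual sketch of what such a transformation inserts around every sensor read; the names and the invariant are illustrative, not Newton's actual syntax or output:

```python
G = 9.81
ACCEL_LIMIT = 16 * G   # invariant from a (hypothetical) Newton description

def invariant_violation(name, value):
    """Error handler run when a declared invariant fails at runtime."""
    raise ValueError(f"invariant violated for {name}: {value!r}")

def checked_read_accel(read_accel_raw):
    """What the compile-time transformation conceptually wraps around
    every read of the accelerometer."""
    ax, ay, az = read_accel_raw()
    if (ax * ax + ay * ay + az * az) ** 0.5 > ACCEL_LIMIT:
        invariant_violation("accelerometer", (ax, ay, az))
    return ax, ay, az
```

The second transformation works in the opposite direction, deleting such redundancy-handling code when performance is preferred over reliability.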
Measuring elastic constants of laminated Copper/Niobium composites using resonant ultrasound spectroscopy
Layered copper/niobium (Cu/Nb) composites with small layer widths contain a high area per unit volume of solid-state interfaces. Interfaces have their own elasticity tensor, which affects the elastic properties of the composite as a whole. We have studied the elastic constants of Cu/Nb composites with different layer thicknesses with a view to determining the elastic constants of Cu/Nb interfaces. Our work relied on resonant ultrasound spectroscopy (RUS): a technique for deducing elastic constants from measured resonance frequencies. Resonance frequencies of three samples with differing layer widths were measured. A numerical approach for matching measured and computed resonance frequencies was developed and used in deducing the elastic constants of the composite. The uncertainties in the elastic constants thereby obtained were too large to estimate interface elastic properties. However, several sources of this uncertainty were identified, paving the way to improved elastic constant measurements in the future.
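The matching of measured and computed resonance frequencies can be pictured as a nonlinear least-squares problem. The sketch below leaves the forward RUS solver abstract and is a generic illustration, not the thesis's actual numerical procedure:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(c, f_measured, compute_frequencies):
    """Relative mismatch between measured and computed resonance modes."""
    return (compute_frequencies(c) - f_measured) / f_measured

def fit_constants(c0, f_measured, compute_frequencies):
    # compute_frequencies(c) would wrap a forward RUS solver (e.g., a
    # Rayleigh-Ritz eigenfrequency computation for the sample geometry).
    return least_squares(residuals, c0,
                         args=(f_measured, compute_frequencies)).x

# Toy check with a fake forward model f_i = sqrt(c_i):
f_meas = np.sqrt(np.array([110.0, 45.0, 33.0]))
print(fit_constants(np.ones(3), f_meas, lambda c: np.sqrt(np.abs(c))))
# -> approximately [110., 45., 33.]
```

The uncertainty analysis described above then amounts to asking how sharply this misfit grows as each constant departs from its best-fit value.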
The global landscape of gender quotas on corporate boards : contexts for adoption and opposition in 2014
There has been an acceleration in the adoption of gender quotas on boards since Norway pioneered legislation in 2003. Countries that have made parallel reforms have primarily been in the western European bloc, while other countries have displayed strong resistance to this measure. The reasons underlying support and resistance have been argued across the globe, and their consistent application raises questions about why certain countries have been more aggressive in adopting quotas, compared to those that continue to resist. This paper aims to understand some of the national contexts of these countries. In doing so, a theoretical framework is applied using a selection of factors that may facilitate understanding and simplifying complexities within each polity. In addressing what common factors and disparities exist within and between countries that have adopted quotas compared to those that continue to oppose, potential implications for policy makers such as the use of critical junctures and the media become apparent.
Essays on individuals and organizations
This dissertation focuses on the dynamics of innovative industries; specifically, how individual choices and actions impact the performance, founding, and death of firms. While most research examining these outcomes focuses on the role of organizational factors - such as strategy, capabilities, or resources - firms ultimately consist of individuals with different preferences, abilities, and approaches to entrepreneurship and organizing. This work attempts to expand our understanding of firm and industry dynamics by looking to the role of the individuals who make up firms. As the performance of a growing number of firms and entrepreneurial ventures comes to depend on human capital, knowledge, and creative work, there is increasing need to understand how these differences between individuals influence firms and industries. This dissertation consists of three essays exploring these relationships. The first essay, "People and Process, Suits and Innovators: Individuals and Firm Performance," empirically untangles the contributions of organizations and individuals to firm performance. The results indicate that variation among individuals matters far more in organizational performance than is generally assumed. Surprisingly, the analysis also demonstrates that middle managers, rather than innovators, have a particularly large impact on firm performance.
Phenomena that determine knock onset in spark-ignited engines
Experiments were carried out to collect in-cylinder pressure data and microphone signals from a single-cylinder test engine using spark timings before, at, and after knock onset for four different octane-rated toluene reference fuels. This data was then processed and analyzed in various ways to gain insight into the autoignition phenomena that lead to knock, with the aim of developing a more fundamentally based prediction methodology that incorporates both a physical and a chemical description of knock. The collected data was also used to develop a method of data processing that can detect knock in real time without the need for an operator listening to the engine. Bandpass filters and smoothing techniques were used to process the data. The processed data was then used to determine knock intensities for each cycle from both the cylinder pressure data and the microphone signal. The rate of build-up before reaching peak amplitude in a bandpass-filtered pressure trace was also found. A trend was identified showing that cycles with knock intensities greater than 1 bar and rapid build-up (5-10 oscillations) before reaching the peak are the type of cycles whose autoignition events lead to engine knock.
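A minimal sketch of this style of per-cycle processing is shown below; the filter order, passband, and the 1 bar threshold are illustrative assumptions rather than the calibrated values used in this work:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def knock_intensity(pressure, fs, band=(4e3, 20e3)):
    """Peak amplitude of the bandpass-filtered in-cylinder pressure (bar),
    a common per-cycle knock-intensity measure.

    pressure: one cycle of pressure samples (bar)
    fs: sampling rate in Hz (must exceed twice the upper band edge)
    """
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, pressure)
    return np.max(np.abs(filtered))

# A cycle would then be flagged as knocking if, e.g., intensity > 1 bar,
# with the build-up rate measured from the filtered trace's envelope.
```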
Electrochemical modulation of fluorescence of nitrogen vacancy centers in nanodiamonds for voltage sensing applications
The nitrogen vacancy (NV) color center in diamond has been used to sense environmental variables such as temperature and electric and magnetic fields. Most sensing protocols depend on the optically detectable magnetic resonance of the negatively charged NV- spin state. As such, fluctuations in the NV charge state present a challenge for NV- spin-based sensing. This thesis discusses the electrochemical modulation of NV charge state and fluorescence as the basis for an alternative sensing scheme. An externally applied electrochemical potential shifts the occupation probabilities of the NV in each charge state, which manifest as changes in NV fluorescence intensity and emission spectra. In this thesis, the voltage dependence of fluorescence in high pressure high temperature nanodiamonds is demonstrated in an electrochemical cell. Following this, the mechanisms for NV response to externally applied electrical bias are investigated in other electrochemical cell morphologies, capacitors, and interdigitated electrode arrays. Finally, a design of an optical microscope setup for future studies of NV sensing in nanodiamond is outlined.
Novel transport regimes in graphene
Transport phenomena in solids -- such as energy and charge flows in response to external fields -- are a subject of fundamental interest in solid state physics. Carrier transport exhibits a wide variety of intriguing and potentially useful behaviors arising from a rich and complex interplay between electron-disorder, electron-electron, and electron-phonon interactions. Graphene, a newly discovered one-atom-thick carbon material, has unique transport characteristics, some of which are already well understood, whereas others are under investigation or waiting to be discovered. The two-dimensional character and exceptional cleanness of graphene, as well as the gate tunability of carrier density and electron-electron interactions, make graphene an excellent platform to study a range of new transport regimes, such as quantum-coherent ballistic transport, electron hydrodynamics, and energy dissipation at the atomic scale. We study ballistic transport in the context of electronic lensing. We also demonstrate that electron-electron scattering alters ballistic transport in a dramatic way, giving rise to hole backflows and "memory effects", and leading to experimental signatures such as negative non-local resistance. Upon further increase of the electron-electron interaction strength, the system enters the hydrodynamic regime, where a host of new phenomena can emerge. We also show that electron-disorder interactions have important implications for energy transport, with energy dissipation occurring predominantly at atomic-scale defects. In this thesis, we provide a detailed discussion of these topics and their connection to ongoing experiments.
Online decision problems with large strategy sets
In an online decision problem, an algorithm performs a sequence of trials, each of which involves selecting one element from a fixed set of alternatives (the "strategy set") whose costs vary over time. After T trials, the combined cost of the algorithm's choices is compared with that of the single strategy whose combined cost is minimum. Their difference is called regret, and one seeks algorithms which are efficient in that their regret is sublinear in T and polynomial in the problem size. We study an important class of online decision problems called generalized multi-armed bandit problems. In the past such problems have found applications in areas as diverse as statistics, computer science, economic theory, and medical decision-making. Most existing algorithms were efficient only in the case of a small (i.e. polynomial-sized) strategy set. We extend the theory by supplying non-trivial algorithms and lower bounds for cases in which the strategy set is much larger (exponential or infinite) and the cost function class is structured, e.g. by constraining the cost functions to be linear or convex. As applications, we consider adaptive routing in networks, adaptive pricing in electronic markets, and collaborative decision-making by untrusting peers in a dynamic environment.
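For a flavor of how structure makes large strategy sets tractable, the sketch below runs bandit online gradient descent with a one-point gradient estimate over the unit ball, a generic construction in the spirit of this literature for linear or convex costs (it is not the thesis's specific algorithms):

```python
import numpy as np

def bandit_ogd(cost_of, d, T, eta=0.01, delta=0.1):
    """Bandit gradient descent on the unit ball with a one-point gradient
    estimate; cost_of(y, t) returns only the scalar loss of the point played."""
    x = np.zeros(d)
    total = 0.0
    for t in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)         # uniform direction on the sphere
        y = x + delta * u              # the point actually played
        loss = cost_of(y, t)           # bandit feedback: one scalar
        total += loss
        g = (d / delta) * loss * u     # one-point gradient estimate
        x = x - eta * g
        r = np.linalg.norm(x)
        if r > 1 - delta:              # project back into the shrunk ball
            x *= (1 - delta) / r
    return total

# e.g., time-varying linear costs c_t . y over the unit ball
T, d = 2000, 5
c = np.random.randn(T, d) * 0.1
print(bandit_ogd(lambda y, t: float(c[t] @ y), d, T))
```

The point is that the infinite strategy set (the ball) never has to be enumerated; the linear/convex structure is what the algorithm exploits.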
The role of ERO1 in oxidative protein folding in the endoplasmic reticulum
The formation of native disulfide bonds is critical for the folding and stability of many secreted proteins. We describe an essential S. cerevisiae gene, ERO1, which encodes a conserved ER membrane protein required for disulfide bond formation in the endoplasmic reticulum (ER). In a conditional ero1-1 mutant, secretory proteins that would normally contain disulfide bonds, such as carboxypeptidase Y (CPY), are retained in the ER in reduced form, as shown by thiol modification with AMS. ERO1 levels determine cellular oxidizing capacity, since mutation of ERO1 causes hypersensitivity to the reductant DTT, whereas overexpression of ERO1 confers resistance to DTT. Moreover, the thiol oxidant diamide can restore growth and secretion to ero1 mutants. These results suggest that Ero1p provides the oxidizing equivalents utilized for disulfide bond formation in the ER. Oxidizing equivalents are transferred directly from Ero1p to the abundant ER oxidoreductase PDI (protein disulfide isomerase) and its homolog Mpd2p. PDI is oxidized in wild-type cells, but reduced in the ero1-1 mutant. Thiol-disulfide exchange between PDI and Ero1p is indicated by the capture of PDI-Ero1p mixed disulfides. PDI oxidizes secretory proteins, since newly synthesized CPY remains fully reduced in PDI-depleted cells. Mixed disulfides between PDI and p1 CPY are also detected, indicating that PDI engages directly in thiol-disulfide exchange with this substrate. Together, these results define a pathway for protein disulfide bond formation in the ER wherein oxidizing equivalents flow from Ero1p to PDI (and Mpd2p) and then to substrate proteins through direct thiol-disulfide exchange reactions. Oxidized glutathione (GSSG) does not serve as an obligate intermediate in this pathway, since oxidative protein folding proceeds normally in a gsh1-1 mutant devoid of intracellular glutathione. Mutational analysis of ERO1 identifies two pairs of conserved, vicinal cysteines essential for Ero1p function. Mutation of Cys100, Cys105, Cys352, or Cys355 of Ero1p disrupts cell viability, CPY folding, and thiol-disulfide exchange between PDI and Ero1p. Cys100 of Ero1p may be preferentially attacked by PDI, while the Cys352-Cys355 disulfide may re-oxidize the Cys100-Cys105 cysteine pair. The properties of yeast Ero1p resemble those of E. coli DsbB.
Search for D⁰-D̄⁰ mixing in D⁰ → K⁺π⁻π⁻π⁺ decays
We present results of a search for D⁰-D̄⁰ mixing by analyzing D⁰ → K⁺π⁻π⁻π⁺ decays from events in 230.4 fb⁻¹ of e⁺e⁻ data recorded by BABAR. Assuming CP conservation, we measure the time-integrated mixing rate RM = (0.0019 ... (stat.) ± 0.002 (syst.))%, and RM < 0.048% at 95% confidence. Using a frequentist method, we estimate that the data are consistent with no mixing at the 4.3% confidence level. We present results both with and without the assumption of CP conservation.
A suggested habitation model for Great Barrington
Great Barrington, Massachusetts. A naturally beautiful setting combined with four distinct weather seasons dictates a lifestyle for this small South Berkshire town's residents and visitors alike. This thesis proposes that it is desirable and possible for a dwelling to be unique to an area, and as such, to reflect and embody specific aspects of that area. A dwelling of this type could easily become the cornerstone of a recognizable new neighborhood, one which Great Barrington is in dire need of. In this thesis I have isolated the topography, climate, and mode of life which are specific to the Berkshire area and especially to Great Barrington. These extracted qualities specific to Great Barrington are used to generate a design for a single family house in this South Berkshire town. Through this residential model of the single family house it is suggested that one's physical habitation is able to both directly and indirectly reflect and relate to the ever changing seasons and weather. It is this ability of the single family house to form and foster relationships with its environment which makes Great Barrington a special place to be in; a place all of its own.
Methods and devices for noninvasive physiologic fluid volume assessment
Fluid volume status is a physiologic parameter that currently lacks a reliable diagnostic tool. Volume control becomes an issue during sickness and/or stress (physical and mental) in a wide range of populations. Unfortunately, current diagnostics suffer from being imprecise, invasive, and/or easily confounded and cannot unambiguously and practically inform volume status. There exists a need for a tool that can inform individuals and clinicians of fluid status in a noninvasive, rapid, and reliable manner. Drawing on the molecular sensitivity of ¹H nuclear magnetic resonance (NMR), we explored the ability of NMR methods to quantitate physiologic fluid volume changes. We first proved that NMR methods could detect volume changes in an animal model of dehydration. Correlation between NMR value changes in specific tissues and clinical tools used to assess dehydration validate NMR as a viable tool. We then proceeded to design and fabricate practical NMR sensors that could be easily integrated into the clinic. New methods of magnetic instrument design optimized for both field strength and spatial resolution were developed, resulting in compact device prototypes with signal fidelity rivaling those of impractical commercial systems. Finally, we explored the ability of these devices to detect intravascular fluid changes during hemodialysis. Our methods and devices were able to detect intravascular blood property changes associated with blood dilution, in addition to overall fluid volume changes due to hemodialysis therapy. These results, methods, and devices provide the foundation and framework for the integration of NMR-based personalized fluid volume assessment into standard clinical practice.
A comprehensive system for non-intrusive load monitoring and diagnostics
Energy monitoring and smart grid applications have rapidly developed into a multi-billion dollar market. The continued growth and utility of monitoring technologies is predicated upon the ability to economically extract actionable information from acquired data streams. One of the largest roadblocks to effective analytics arises from the disparities of scale inherent in all aspects of data collection and processing. Managing these multifaceted dynamic range issues is crucial to the success of load monitoring and smart grid technology. This thesis presents NilmDB, a comprehensive framework for energy monitoring applications. The NilmDB management system is a network-enabled database that supports efficient storage, retrieval, and processing of vast, timestamped data sets. It allows a flexible and powerful separation between on-site, high-bandwidth processing operations and off-site, low-bandwidth control and visualization. Specific analysis can be performed as data is acquired, or retroactively as needed, using short filter scripts written in Python and transferred to the monitor. The NilmDB framework is used to implement a spectral envelope preprocessor, an integral part of many non-intrusive load monitoring workflows that extracts relevant harmonic information and provides significant data reduction. A robust approach to spectral envelope calculation is presented using a 4-parameter sinusoid fit. A new physically-windowed sensor architecture for improving the dynamic range of non-intrusive data acquisition is also presented and demonstrated. The hardware architecture utilizes digital techniques and physical cancellation to track a large-scale main signal while maintaining the ability to capture small-scale variations.
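To make the spectral-envelope step concrete, here is a minimal sketch of a 4-parameter sinusoid fit (two quadrature amplitudes, an offset, and a free frequency), in the spirit of the robust fit described above; variable names, the sampling rate, and the solver choice are illustrative assumptions:

```python
# Minimal sketch: fit s(t) ~ A*cos(w*t) + B*sin(w*t) + C with the frequency w
# also free (the "4-parameter" fit), via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def residual(p, t, s):
    a, b, c, w = p
    return a * np.cos(w * t) + b * np.sin(w * t) + c - s

t = np.arange(0, 0.1, 1 / 8000)                     # assumed 8 kHz sampling
s = 120 * np.cos(2 * np.pi * 60 * t + 0.3) + 1.5    # synthetic 60 Hz signal
p0 = (100.0, 0.0, 0.0, 2 * np.pi * 60)              # seed near line frequency

fit = least_squares(residual, p0, args=(t, s))
a, b, c, w = fit.x
print(f"amplitude = {np.hypot(a, b):.1f}, freq = {w / (2 * np.pi):.2f} Hz")
```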
Computers at home, new spatial needs? : a case study
This thesis investigates five families in Boston who have introduced computers into their homes. The analysis is interdisciplinary, and each case has been considered in psycho-social and architectural terms. The conclusions address issues of control, gender relations, feelings toward computers, and architectural constraints to easy adaptation to the computer. The thesis concludes that the computer is not just a machine that one takes out of its box and plugs in. There are many considerations in bringing computers into the home.
Families of p-adic Galois representations
In this thesis, I first generalize Kisin's theory of finite slope subspaces to arbitrary p-adic fields, and then apply it to the generic fibers of Galois deformation spaces. I study the finite slope deformation rings in detail by computing the dimensions of their Zariski cotangent spaces via Galois cohomologies. It turns out that the Galois cohomologies tell us not only the formal smoothness of finite slope deformation rings, but also the behavior of the Sen operator near a generic de Rham representation. Applying these results to the finite slope subspace of two dimensional Galois representations of the absolute Galois group of a p-adic field, we are able to show that a generic (indecomposable) de Rham representation lies in the finite slope subspace. It follows from the construction of the finite slope subspace that the complete local ring of a point in the finite slope subspace is closely related to the finite slope deformation ring at the same point. As a consequence, we manage to show the flatness of the weight map near generic de Rham points, and accumulation and smoothness of generic de Rham points. In particular, we have a precise dimension formula for the finite slope subspace. Taking into account twists by characters, we define the nearly finite slope subspace, which is believed to serve as the local eigenvariety, as is suggested by Colmez's theory of trianguline representations. Following Gouvêa-Mazur and Kisin, we construct an infinite fern in the local Galois deformation space. Moreover, we define the global eigenvariety for GL2 over any number field, and give a lower bound of its dimension.
Learning commonsense categorical knowledge in a thread memory system
If we are to understand how we can build machines capable of broad purpose learning and reasoning, we must first aim to build systems that can represent, acquire, and reason about the kinds of commonsense knowledge that we humans have about the world. This endeavor suggests steps such as identifying the kinds of knowledge people commonly have about the world, constructing suitable knowledge representations, and exploring the mechanisms that people use to make judgments about the everyday world. In this work, I contribute to these goals by proposing an architecture for a system that can learn commonsense knowledge about the properties and behavior of objects in the world. The architecture described here augments previous machine learning systems in four ways: (1) it relies on a seven-dimensional notion of context, built from information recently given to the system, to learn and reason about objects' properties; (2) it has multiple methods that it can use to reason about objects, so that when one method fails, it can fall back on others; (3) it illustrates the usefulness of reasoning about objects by thinking about their similarity to other, better known objects, and by inferring properties of objects from the categories that they belong to; and (4) it represents an attempt to build an autonomous learner and reasoner that sets its own goals for learning about the world and deduces new facts by reflecting on its acquired knowledge. This thesis describes this architecture, as well as a first implementation, that can learn from sentences such as "A blue bird flew to the tree" and "The small bird flew to the cage" that birds can fly. One of the main contributions of this work lies in suggesting a further set of salient ideas about how we can build machines that learn and reason about the everyday world.
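As a toy illustration of category-based property inference (purely illustrative; the thesis's thread-memory representation is richer than this), one can aggregate observed properties over category members and promote sufficiently common ones to category-level knowledge:

```python
# Toy sketch: infer "birds can fly" from observations of individual birds.
from collections import defaultdict

observations = [
    ("blue bird", "bird", "fly"),
    ("small bird", "bird", "fly"),
    ("red fish", "fish", "swim"),
]

counts = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)
for _, category, action in observations:
    counts[category][action] += 1
    totals[category] += 1

for category, actions in counts.items():
    for action, n in actions.items():
        if n / totals[category] > 0.5:          # naive promotion threshold
            print(f"{category}s can {action}")  # e.g. "birds can fly"
```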
Evaluating alternatives for housing India's urban poor : design studies, model and application in Ahmedabad
The study evaluates the three alternatives identified by the (National) Planning Commission for housing the urban poor in India: upgrading, site and services, and housing. The basis for evaluation is the relationship of the cost of development to the cost of each of the components in development and the number of beneficiaries. The framework for evaluation is proposed as a model to assist: 1) project designers, to identify the relative importance of the various design parameters in development and to indicate quickly to the concerned agencies the impact of standards and regulations; 2) state and local agencies, to determine affordable standards; and 3) the allocation of available national resources, by choosing affordable alternatives for housing the urban poor. The application of the model is illustrated for Ahmedabad. Conclusions are drawn from the application and for a specific set of assumptions. The assumptions governing the values assigned to the parameters of the model are based on case studies and design studies for three low-income settlements in Ahmedabad.
The use of Bluetooth in Linux and location aware computing
The Bluetooth specification describes a robust and powerful technology for short-range wireless communication. Unfortunately, the specification is immense and complicated, presenting a formidable challenge for novice developers. Currently, there is a lack of satisfactory technical documentation describing Bluetooth application development and the parts of the Bluetooth specification that are relevant to software developers. This thesis explains Bluetooth programming in the context of Internet programming and shows how most concepts in Internet programming are easily translated to Bluetooth. It describes how these concepts can be implemented in the GNU/Linux operating system using the BlueZ Bluetooth protocol stack and libraries. A Python extension module is presented that was created to assist in rapid development and deployment of Bluetooth applications. Finally, an inexpensive and trivially deployed infrastructure for location aware computing is presented, with a series of experiments conducted to determine how best to exploit such an infrastructure.
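For flavor, here is a minimal sketch in the style of the Python module described (the API names follow PyBluez conventions; the device address and channel are placeholders, and real hardware is required). An RFCOMM connection reads much like a TCP client socket:

```python
import bluetooth  # PyBluez-style module

# Device inquiry: discover nearby Bluetooth devices and their names.
for addr in bluetooth.discover_devices():
    print(addr, bluetooth.lookup_name(addr))

# Connect to an RFCOMM service, much like connecting a TCP socket.
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect(("00:11:22:33:44:55", 1))  # (Bluetooth address, channel)
sock.send(b"hello")
sock.close()
```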
Validating performance and simplicity of highly concurrent data structures utilizing the ATAC broadcast mechanism
I evaluate the ATAC broadcast mechanism as the foundation for a new paradigm in the design of highly scalable concurrent data structures. Shared memory communication is replaced, alleviating the contention that prevents data structures from achieving high performance on the next generation of manycore computers. The alternative model utilizes thread local memory and relies on the ATAC broadcast for inter-core communication, thus avoiding the complicated protocols that contemporary data structures use to mitigate contention. I explain the design of the ATAC barrier and run benchmarking to validate its high performance relative to existing barriers. I explore several concurrent hash map designs built using the ATAC paradigm and evaluate their performance, explaining the memory access patterns under which they achieve scalability.
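The following toy model conveys the paradigm (my sketch, not the thesis's code; the on-chip optical broadcast is emulated here with per-core queues): writes are broadcast to every core's private replica, so reads never contend on shared memory:

```python
# Toy model of the broadcast paradigm (illustrative only): each core owns a
# private replica of the map; a write is broadcast to every core's inbox and
# applied lazily, so lookups touch only core-local state.
import queue

class BroadcastMap:
    def __init__(self, n_cores):
        self.inboxes = [queue.Queue() for _ in range(n_cores)]  # stands in for ATAC
        self.replicas = [dict() for _ in range(n_cores)]

    def put(self, key, value):
        for inbox in self.inboxes:         # one message reaches every core
            inbox.put((key, value))

    def get(self, core, key):
        inbox = self.inboxes[core]
        while not inbox.empty():           # apply pending broadcasts locally
            k, v = inbox.get()
            self.replicas[core][k] = v
        return self.replicas[core].get(key)

m = BroadcastMap(n_cores=4)
m.put("x", 1)
print(m.get(2, "x"))                       # -> 1, served from core 2's replica
```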
Generation of sand ripples and sand bars by surface waves
Part I: Generation of Sand Ripples by Surface Waves. In Chapter 1, we study the sand ripple instability under partially standing surface waves in constant water depth. For gently sloped ripples, the approximate flow field is worked out. By invoking an empirical formula of sediment transport rate, an eigenvalue problem is obtained, which gives rise to the equation for initial ripple growth with coefficients depending on local wave conditions. It is found that the wave-induced steady streaming has no effect on ripple growth. Thus, ripple instability is locally similar to the cases for oscillatory flows and for purely progressive waves, and is driven by ripple-induced flow. But the intensity of this process varies spatially with a period of half the surface wavelength due to the reflection. The results show that beneath the envelope minima (nodes) ripples grow the fastest and are the longest; under the envelope maxima (antinodes) ripples are unlikely. Part II: Generation of Sand Bars and Sediment/Wave Interaction. In this part we study the formation mechanism of sand bars under reflected surface waves and the mutual influence of the waves and bars through Bragg resonance. In Chapter 2, we first give an analysis of the effects of shoreline reflection on Bragg resonance by considering rigid bars, aiming at acquiring a deeper understanding of the physical processes of the Bragg resonance mechanism. We show that finite reflection by the shoreline can increase the wave energy arriving at the shore, in contrast to the result of most previous studies, suggesting that the mechanism can enhance the attack of the incident sea on the beach. The phase relation of the rigid bars and the shoreline reflection is found to be key to the qualitative change of wave response. In Chapter 3, we develop a quantitative theory to describe the formation mechanism of sand bars by coupling sediment dynamics and wave hydrodynamics. Assuming that the slopes of waves and bars are comparably gentle and that sediment motion is dominated by bedload, an approximate evolution equation of bar height is derived. This equation shows that sand bars grow and evolve via a forced diffusion process rather than an instability. Both the forcing and the diffusivity depend on the flow field above the current bed. In Chapter 4, the coupled evolution of sand bars and waves is investigated, in which the Bragg scattering mechanism is understood as two concurrent physical processes: energy transfer between two wave-trains propagating in opposite directions, and change of their wavelength. Both effects are found to be controlled locally by the position of bar crests relative to wave nodes. In the absence of shoreline reflection, it is found that pre-existing sand bars cannot be maintained by their own Bragg-scattered waves, and the formation of sand bars offshore by Bragg scattering is at best a transient phenomenon. Comparison with experimental data supports the description of bar formation as a forced diffusion process. In Chapter 5, we examine the effects of horizontal variation of eddy viscosity on the evolution of bars. This variability arises because (1) the intensity of wave oscillation at the bottom changes in space due to the reflection, and (2) the bottom roughness is not uniform due to the formation of ripples. While the forced diffusion mechanism is not changed qualitatively, it is found that the variable turbulent intensity inside the wave boundary layer strongly enhances the spatial fluctuation of the sand flux induced by wave stresses, and thus causes stronger forcing of the bar growth.
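Schematically, a forced diffusion equation for the bar height h(x, t) takes the following generic form (consistent with the description above; the notation and coefficients are assumptions for illustration, not taken from the thesis):

```latex
% Generic forced-diffusion form (notation assumed for illustration):
% h(x,t) is bar height, D a wave-dependent diffusivity, F a forcing term
% set by the wave field above the bed; bars grow where forcing wins
% against diffusive smoothing, not through an instability.
\frac{\partial h}{\partial t}
  = \frac{\partial}{\partial x}\!\left( D(x,t)\,\frac{\partial h}{\partial x} \right)
  + F(x,t).
```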
Computational comparative genomics : genes, regulation, evolution
Understanding the biological signals encoded in a genome is a key challenge of computational biology. These signals are encoded in the four-nucleotide alphabet of DNA and are responsible for all molecular processes in the cell. In particular, the genome contains the blueprint of all protein-coding genes and the regulatory motifs used to coordinate the expression of these genes. Comparative genome analysis of related species provides a general approach for identifying these functional elements, by virtue of their stronger conservation across evolutionary time. In this thesis we address key issues in the comparative analysis of multiple species. We present novel computational methods in four areas: (1) the automatic comparative annotation of multiple species and the determination of orthologous genes and intergenic regions; (2) the validation of computationally predicted protein-coding genes; (3) the systematic de novo identification of regulatory motifs; and (4) the determination of combinatorial interactions between regulatory motifs. We applied these methods to the comparative analysis of four yeast genomes, including the best-studied eukaryote, Saccharomyces cerevisiae or baker's yeast. Our results show that nearly a tenth of currently annotated yeast genes are not real, and we have refined the structure of hundreds of genes. Additionally, we have automatically discovered a dictionary of regulatory motifs without any previous biological knowledge. These include most previously known regulatory motifs, as well as a number of novel motifs. We have automatically assigned candidate functions to the majority of motifs discovered, and defined biologically meaningful combinatorial interactions between them. Finally, we defined the regions and mechanisms of rapid evolution, with important biological implications.
Portfolio evaluation of advanced coal technology : research, development, and demonstration
This paper evaluates the advanced coal technology research, development, and demonstration programs at the U.S. Department of Energy since the 1970s. The evaluation is conducted from a portfolio point of view and derives implications for future program design and implementation. The evaluation framework consists of four categories of criteria that assess the portfolio from strategy, diversity, partnership, and project merit points of view. The analysis of the successes and the failures of the past programs in technical, financial, and managerial respects shows that these programs were reasonably successful in (1) remarkably advancing coal technologies to enable the U.S. to use coal as its major energy resource in the electricity sector when facing more stringent environmental regulation or possibly even in a greenhouse-gas-constrained world; and (2) accumulating effective program management experience, especially involving industry in technology development from the beginning of the process to facilitate future deployment. Among these successes, a number of important features incorporated in the Clean Coal Technology Demonstration Program (CCTDP) are especially worth noting. These features are: (1) the program goal was well defined, namely accelerating commercialization of ACTs;
Prognostic models for mesothelioma : variable selection and machine learning
Malignant pleural mesothelioma is a rare and lethal form of cancer affecting the external lining of the lungs. Extrapleural pneumonectomy (EPP), which involves the removal of the affected lung, is one of the few treatments that has been shown to have some effectiveness in treatment of the disease [39], but this procedure carries with it a high risk of mortality and morbidity [8]. This paper is concerned with building models using gene expression levels to predict patient survival following EPP; these models could potentially be used to guide patient treatment. A study by Gordon et al. built a predictor based on ratios of gene expression levels that was 88% accurate on the set of 29 independent test samples, in terms of classifying whether the patients survived shorter or longer than the median survival [15]. These results were recreated both on the original data set used by Gordon et al. and on a newer data set which contained the same samples but was generated using newer software. The predictors were evaluated using N-fold cross-validation. In addition, other methods of variable selection and machine learning were investigated to build different types of predictive models. These analyses used a random training set from the newer data set. These models were evaluated using N-fold cross-validation and the best of each of the four main types of models -
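As a sketch of the N-fold cross-validation protocol used for evaluation (illustrative only: synthetic data and an arbitrary classifier stand in for the thesis's gene-ratio predictors):

```python
# Minimal sketch of N-fold cross-validation for a binary survival classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 50 samples x 500 "genes"; labels = above/below median survival.
X, y = make_classification(n_samples=50, n_features=500, n_informative=10,
                           random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```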
Improving project timelines using artificial intelligence/machine learning to detect forecasting errors
This project focuses on the creation of a novel tool to detect and flag potential errors within Amgen's capacity management forecast data, in an automated manner using statistical analysis, artificial intelligence, and machine learning. User interaction allows the tool to learn from experience, improving over time. While the tool created here focuses on a specific set of Amgen's data, the framework, approach, and techniques offered herein can more broadly be applied to detect anomalies and errors in other sets of data from across industries and functions. By detecting errors in Amgen's data, the tool improves data robustness and forecasts, which drive decisions, actions, and ultimately results. Flagging and correcting this data allows for overcoming errors which would otherwise damage the accurate allocation of Amgen's human resources to activities in the drug pipeline, ultimately hampering Amgen's ability to develop drugs for patients efficiently. A user interface (UI) dashboard evaluates the tool's performance, tracking the number of errors correctly identified, the accuracy rate, and the estimated business impact. To date the tool has identified 893 errors, since corrected, with a 99.2% accuracy rate and an estimated business impact of $77.798M in optimized resources. Using the paradigm of intelligent augmentation (IA), this tool empowers employees by focusing their attention and saving them time. The tool handles the human-impossible task of sifting through thousands of lines and hundreds of thousands of data points. The human user then makes decisions and takes action based on the tool-provided output.
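A minimal sketch of the kind of statistical flagging involved (my illustration, not Amgen's tool: a robust outlier test on forecast residuals; the threshold and names are assumptions):

```python
import numpy as np

def flag_forecast_errors(actual, forecast, z_thresh=3.5):
    """Return indices whose forecast residual is a robust outlier (MAD score)."""
    residuals = np.asarray(actual, float) - np.asarray(forecast, float)
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med)) or 1.0   # guard against zero MAD
    score = 0.6745 * (residuals - med) / mad          # ~z-score under normality
    return np.where(np.abs(score) > z_thresh)[0]

# Hypothetical usage: flagged rows would be surfaced in a dashboard for a
# human to confirm or reject, which is the IA loop the tool learns from.
print(flag_forecast_errors([10, 12, 11, 95, 9, 13], [11, 12, 10, 12, 9, 12]))
```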
Architectural practice and the planning of minor palaces in Renaissance Italy, 1510-1570
This dissertation proposes to study how the commission and design of minor palaces contribute to the understanding of architectural practice in early 16th century Italy. The particular nature of the small urban palace as a reduced and less expensive version of larger palaces, and its recurrent nature in the practice of architects, make this type of building very important in illustrating the changes in the profession at that time. Minor palace commissions also show architects dealing with a growing private market for the exercise of the profession: in Rome, the architect's clients belonged to a lesser nobility composed of merchants and professional men (doctors, lawyers, notaries, artists, diplomats, bureaucrats) mostly connected to the Papal civil service. Moreover, the planning of these buildings manifests the increasing specialization of the profession at that time, when expertise in Ancient Roman architecture and the mastering of new instruments of representation (orthogonal projection, perspective, sketches) were added to the usual technical and artistic skills required of an architect. The dissertation focuses on how architects defined a planning procedure to cope with the new set of circumstances related to the commission of a minor palace (budget, site, program, recurrence). The design of a palace comprised different functions arranged in horizontal sequence with a few vertical connections; therefore, drawings of plans were the central instrument of their design. The dissertation is primarily based on the study of original plans that illustrate the working methods of 16th century Italian architects. Three of them were chosen (Antonio da Sangallo the Younger, Baldassare Peruzzi and Andrea Palladio) based on their activity as designers of minor palaces and the existence of a substantial number of plans for this kind of building by them. A second part of this work presents a general view of the working procedures employed by these three architects in commissions of minor palaces. Through the study of their drawings and planning procedures, this dissertation intends to illustrate the establishment of the modern sense of architectural practice in 16th century Italy as shown through the design of minor palaces.
Neural VAD and its practical use
The task of producing a Voice Activity Detector (VAD) that is robust in the presence of non-stationary background noise has been an active area of research for several decades. Historically, many of the proposed VAD models have been highly heuristic in nature. More recently, however, statistical models, including Deep Neural Networks (DNNs) have been explored. In this thesis, I explore the use of a lightweight, deep, recurrent neural architecture for VAD. I also explore a variant that is fully end-to-end, learning features directly from raw waveform data. In obtaining data for these models, I introduce a data augmentation methodology that allows for the artificial generation of large amounts of noisy speech data from a clean speech source. I describe how these neural models, once trained, can be deployed in a live environment with a real-time audio stream. I find that while these models perform well in their closed-domain testing environment, the live deployment scenario presents challenges related to generalizability.
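As a sketch of the kind of lightweight, deep, recurrent architecture described (a minimal illustration with assumed layer sizes and per-frame features, not the thesis's exact network):

```python
# Minimal sketch (assumed architecture): a lightweight recurrent classifier
# mapping per-frame acoustic features to speech/non-speech probabilities.
import torch
import torch.nn as nn

class RecurrentVAD(nn.Module):
    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, frames):             # frames: (batch, time, n_features)
        h, _ = self.rnn(frames)
        return torch.sigmoid(self.out(h))  # per-frame speech probability

vad = RecurrentVAD()
probs = vad(torch.randn(1, 100, 40))       # 100 frames of synthetic features
```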
First-principles investigation of Li intercalation kinetics in phospho-olivines
This thesis focuses broadly on characterizing and understanding the Li intercalation mechanism in phospho-olivines, namely LiFePO₄ and Li(Fe,Mn)PO₄, using first-principles calculations. Currently Li-ion battery technology is critically relied upon for the operation of electrified vehicles, but further improvements mainly in cathode performance are required to ensure widespread adoption, which in itself requires learning from existing commercial cathode chemistries. LiFePO₄ is presently used in commercial Li-ion batteries, known for its rapid charge and discharge capability but with underwhelming energy density. This motivates the three central research efforts presented herein. First, we investigate the modified phase diagram and electrochemical properties of mixed olivines, such as Li(Fe,Mn)PO₄, which offer improved theoretical energy density over LiFePO₄ (due to the higher redox voltage associated with Mn²⁺/Mn³⁺). The Liₓ(Fe₁₋ᵧMnᵧ)PO₄ phase diagram is constructed by Monte Carlo simulation on a cluster expansion Hamiltonian parametrized by first-principles determined energies. Deviations from the equilibrium phase behavior and voltages of pure LiFePO₄ and LiMnPO₄ are analyzed and discussed to good agreement with experimental observations. Second, we address why LiFePO₄ exhibits superior rate performance strictly when the active particle size is brought down to the nano-scale. By considering the presence of immobile point defects residing in the 1D Li diffusion path, specifically by calculating from first principles both defect formation energies and Li migration barriers in the vicinity of likely defects, the Li diffusivity is recalculated and is found to strongly vary with particle size. At small particle sizes, the contribution from defects is small, and fast 1D Li diffusion is accessible. However, at larger particle sizes (μm scale and above) the contribution from defects is much larger. Not only is Li transport impeded, but it is also less anisotropic in agreement with experiments on large LiFePO₄ single crystals. Third, we investigate why LiFePO₄ can be charged and discharged rapidly despite having to undergo a first-order phase transition. Conventional wisdom dictates that a system with strong equilibrium Li segregation behavior requires both nucleation and growth in the charge and discharge process, which should impede the overall kinetics. Rather, through first-principles calculations, we determine the minimal energy required to access a non-equilibrium transformation path entirely through the solid solution. Not only does this transformation mechanism require little driving force, but it also rationalizes how a kinetically favorable but nonequilibrium path is responsible for the extremely high rate performance associated with this material. The consequences of a rapid non-equilibrium single-particle transformation mechanism on (dis)charging a multi-particle assembly, as is the case in porous electrodes, are discussed and compared to experimental observations.
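For orientation (standard relations assumed here for illustration, not quoted from the thesis): the Li hopping diffusivity follows an Arrhenius form set by the migration barrier, and immobile channel-blocking defects cap the useful length of a 1D diffusion path, which is what couples rate performance to particle size:

```latex
% Standard Arrhenius form for hopping diffusion (illustrative):
D \;\sim\; a^{2}\,\nu\,\exp\!\left(-\frac{E_{m}}{k_{B}T}\right),
% and with immobile blocking defects at site fraction c in a 1D channel,
% the mean unblocked segment length scales as
\ell \;\sim\; \frac{a}{c},
% so particles smaller than \ell retain fast 1D transport, while larger
% particles become defect-limited.
```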
Social perspective of mobility sharing : understanding, utilizing, and shaping preference
Advances in information and communications technologies are enabling the growth of real-time ride sharing, whereby drivers and passengers, or fellow passengers, are paired up on car trips with similar origin-destinations and proximate time windows, to improve system efficiency by moving more people in fewer cars. Less well known, however, are the opportunities of shared mobility as a tool to foster and strengthen human interactions. In this dissertation, I used preference as a lens to investigate social interaction in mobility sharing, including how interpersonal preference in mobility sharing can be understood, utilized, and reshaped.
Usability and game design : improving the Massachusetts Institute of Technology Augmented Reality Game Editor
Creating MIT Augmented Reality (MITAR) games can be a daunting task. MITAR game designers require a usable game editor to simplify this process. The MITAR Game Editor was the first editor to provide game designers with the means to effectively create MITAR games; however, there were several areas that needed improvement. This motivated the development of several other incarnations of the MITAR editor, each with its own unique usability strengths. However, these editors have not seen much use by the game designers, and so much of the usability research that went into their development has gone to waste. In this project a new MITAR editor, the Full Editor, was developed. It combines the most usable portions of the newer editors together with the features of the original Game Editor into one game development solution. In addition, the Full Editor also introduces a new feature, the Flow View, which increases its usability further. Heuristic analysis and informal testing suggest that the Full Editor is a highly usable MITAR editor that will replace the Game Editor as the primary development platform for MITAR games.
Small molecule binding to electrophilic trigonal pyramidal platinum, palladium, and nickel
Chapter 1: A general introduction to the concepts and background of several types of transition metal complexes that motivate and inform the research described herein. These include σ-complexes and molecular adducts of dinitrogen, dihydrogen, and carbon dioxide. Chapter 2: Trigonal bipyramidal platinum(II) complexes of the monoanionic, tetradentate triphosphine [SiPR₃] ligand ([SiPR₃]⁻ = [(2-R₂PC₆H₄)₃Si]⁻; R = Ph, iPr) are prepared and shown to provide access to cationic species with divergent behavior. The less electron-rich phenyl-substituted ligand renders the platinum center extremely electrophilic, leading to structurally characterized examples of weakly donating ligands bound in the fifth, apical coordination site. Of particular interest is the structure of the toluene adduct, which suggests a possible interaction between the platinum center and an aryl C-H bond. When the ligand phosphines are instead substituted by the more electron-rich isopropyl groups, the electrophilicity of the cationic platinum is shown to be mitigated, allowing access to a four-coordinate, trigonal pyramidal platinum center. The crystallographically characterized geometry for this divalent platinum is in contrast to the canonical square planar configuration for d⁸, 16-electron transition metal complexes. The palladium analogue is also synthesized and shown to possess the same coordination. Chapter 3: Cationic nickel complexes of the [SiPR₃] ligand are synthesized and, in contrast to their platinum and palladium congeners, facilitate the surprising binding of molecular dinitrogen to electrophilic nickel(II) centers. The extremely high stretching frequencies of these bound N₂ moieties attest to their minimal activation, and the stability of these complexes is shown to arise from increased σ-donation from the N₂ to the cationic nickel center, which compensates for the relative lack of the π back-bonding that stabilizes N₂ adducts in less electrophilic systems. These cationic nickel species are additionally shown to form thermally stable adducts of molecular dihydrogen. The relative binding strengths of N₂ and H₂ to these nickel centers are explored and shown to be modulated by the ligand phosphine substituents. Furthermore, evidence of linear binding of carbon dioxide is presented, representing an electrophilic approach to carbon dioxide activation that is in contrast to the low-valent, nucleophilic metal paradigm. Chapter 4: The four-coordinate neutral nickel boratrane (TPiPrB)Ni (TPiPrB = (2-iPr₂PC₆H₄)₃B) reported in the literature represents an isostructural counterpart to the cationic {[SiPiPr₃]Ni}⁺ species presented in Chapter 3. Though these two compounds are formally separated by two oxidation states of nickel, the Lewis-acidic nature of the Z-type borane ligand in (TPiPrB)Ni renders it valence-isoelectronic with {[SiPiPr₃]Ni}⁺. The reactivity toward N₂ and H₂ of (TPiPrB)Ni, as well as that of the new compound (TPPhB)Ni, is explored and discussed in the context of what is observed for the {[SiPR₃]Ni}⁺ system. The neutral (TPiPrB)Ni, while presumably a better π back-bonder than cationic {[SiPiPr₃]Ni}⁺, is demonstrated not to bind N₂, though a very weak, fluxional interaction with H₂ at low temperature is hypothesized. The more electrophilic (TPPhB)Ni exhibits room-temperature interactions with both N₂ and H₂, though the nature of these interactions has yet to be confirmed. These results thus underline the importance of σ-donation in stabilizing N₂ and H₂ adducts of poorly π back-bonding metal centers. Chapter 5: Cobalt(I) complexes of [SiPR₃] provide an additional isostructural, isoelectronic point of comparison to the cationic nickel species presented in Chapter 3. The dinitrogen adducts [SiPiPr₃]Co(N₂) and [SiPPh₃]Co(N₂), previously reported from our laboratory, feature strongly bound N₂ ligands that are not labile to vacuum. The corresponding dihydrogen adducts are generated slowly under an H₂ atmosphere. The intact nature of both dihydrogen ligands, which also are not labile to vacuum, is reflected in their NMR spectroscopic parameters. The thermal stability of these compounds enabled crystallization of [SiPiPr₃]Co(H₂) which, along with the related (TPiPrB)Co(H₂) complex also developed in our laboratory, represent the first structurally characterized dihydrogen adducts of cobalt. Additional comparisons are made between the relative N₂ and H₂ binding strengths of this system and those of the structurally and electronically related family of [SiPR₃] and (TPRB) metal complexes. Appendix A: The asymmetric dinucleating ligand [NOPPh], designed to contain both a hard N-donor binding site and a soft P-donor binding site, is synthesized and shown to form a diiron complex that features asymmetric bonding to the bridging acetates. The corresponding symmetric, all-phosphine dinucleating ligand [POPPh] proves to be more conducive to further study, and provides access to the symmetric diiron, di-(μ-bromide) starting material {[POPPh]Fe₂Br₂}{BArF₄}. Addition of hydrazine generates the asymmetric, unbridged N₂H₄ adduct, which features localized diamagnetic and paramagnetic iron centers. The conformation of this species additionally demonstrates the flexibility of this ligand framework. Reduction of the diiron(II) starting material in the presence of PMe₃ results in formation of a putative asymmetric iron(0)/iron(I) dimetallic complex, in which an N₂ molecule is bound to the diamagnetic iron center, while the PMe₃ is ligated to the high-spin iron center and rendered NMR silent. The N₂ ligand is shown to be reversibly displaced by H₂, suggesting the formation of a dihydrogen adduct, as well as by CO₂, which is postulated to bind as a bent, η²(C,O) ligand.
Migration from electronics to photonics in multicore processor
Twenty-first century opportunities for gigascale integration will be governed in part by a hierarchy of physical limits on interconnect. Microprocessor performance is now limited by the poor delay and bandwidth performance of the on-chip global wiring layer, and this interconnect bottleneck is envisioned as a critical showstopper for the electronics industry in the near future. The physical reason behind the interconnect bottleneck is the resistive nature of metals. The introduction of copper in place of aluminum has temporarily improved interconnect performance, but a more disruptive solution will be required in order to keep the current pace of progress; optical interconnect is an intriguing alternative to metallic wires. Many-core microprocessors will push performance per chip from the 10 gigaflop to the 10 teraflop range in the coming decade. Pin limitations, the energy cost of electrical signaling, and the non-scalability of chip-length global wires are significant bandwidth impediments. A silicon-nanophotonics-based manycore architecture is introduced in order to meet the bandwidth requirements at acceptable power levels.
Sharing school of architecture
Pedagogical experiments played a very important role in shaping architectural discourse and practice in the second half of the 20th century. Throughout its history, the architecture discipline developed and struggled for new territories by articulating its relationship to the technological, socio-political, and cultural transformations of the time, and education became a vehicle for these actions. The rise of information technology brought the sharing economy to urban life. Accessibility to spaces has been redistributed along with the notion of private and public territories. As companies such as Airbnb and Breather build platforms to accelerate the mixing of multi-programmatic spaces, institutional organizations tend to leave their spatial arrangements unchanged. With the title of "Sharing School of Architecture," this thesis puts together an argument as well as an attempt to push the architecture school to the frontier of the sharing economy by reimagining its spatial and programmatic organization in the contemporary urban context, which enables architectural elements to access, curate, and reinvent spaces into pedagogical programs. Instead of a static campus with a traditional curriculum, the architecture school should be an ever-growing network of spaces as part of urbanization, and a system continuously generating creative content that fulfills people's contemporary urban life.
Topics in non-convex optimization and learning
Non-convex optimization and learning play an important role in data science and machine learning, yet so far they still elude our understanding in many aspects. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop iteration complexity analysis for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. Through bounding the distortion introduced by the metric curvature, iteration complexity of Riemannian (stochastic) gradient descent methods is derived. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitzness regularization of the network. The method in (2), called ZeroInit, makes the network update scale invariant to its depth at initialization.
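To make (1) concrete, here is a minimal sketch of mixup training in its standard convex-combination form (the Beta(α, α) mixing distribution and parameter names follow the usual formulation, assumed here for illustration):

```python
# Minimal sketch of mixup: train on convex combinations of sample pairs
# and their (one-hot) labels rather than on raw examples.
import numpy as np

def mixup_batch(x1, y1, x2, y2, alpha=0.2):
    lam = np.random.beta(alpha, alpha)   # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2      # blended inputs
    y = lam * y1 + (1.0 - lam) * y2      # blended labels
    return x, y
```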
Should we stay or should we go? : managing justice and retreat in the resilient city
In recent years, scholars and planning practitioners have turned to managed retreat as an adaptation response to climate change. This provokes questions about how equity and justice are addressed in the relocation of people, because historic planning practice has led to the marginalization of already vulnerable populations to environmentally risky areas. Through a review of the existing definitions of managed retreat and its purported benefits, this thesis asserts that the language of "managed retreat" is inherently at odds with the language of justice as understood through movement building and advocacy. Managed retreat focuses on outcomes and strategies for the removal of assets from risk rather than developing processes of transformational change for the relocation of people. Managed retreat does not focus on power building and creating recognitional, procedural, and distributional justice in the face of climate impacts. Using this review and case study analysis, this thesis outlines the critical components of retreat that current planning practice fails to meet with regard to both the benefits of retreat and the outcomes of a just process. Through a speculative spatial analysis, this thesis also outlines a sample method for planners and policy makers to apply the process of managing retreat, a reconceptualization of managed retreat with the focus on a just and deeply democratic process. The result is a proposed relocation suitability index that identifies the potential areas communities may move to, in order to understand the opportunities, challenges, and constraints of relocation. The analysis reaffirms that a community's collective ownership over place is central to the role of planning practice in conveying and creating a life-enhancing, equitable, and legitimate future that meets the needs of all people.
Reverse logistics and large-scale material recovery from electronics waste
Waste consolidation is a crucial step in the development of cost-effective, nation-wide material reclamation networks. This thesis project investigates typical and configurational tendencies of a hypothetical end-of-life electronics recycling system based in the United States. Optimal waste processor configurations, along with cost drivers and sensitivities, are identified using a simple reverse logistics linear programming model. The experimental procedure entails varying the model scenario based on: the type of material being recycled, the properties of current recycling and consolidation practices, and an extrapolation of current trends into the future. The transition from a decentralized to a centralized recycling network is shown to be dependent on the balance between transportation costs and facility costs, with the latter being a much more important cost consideration than the former. Additionally, this project sets the stage for a great deal of future work to ensure the profitability of domestic e-waste recycling systems.
Measurement and device design of left-handed metamaterials
The properties of a variety of left-handed metamaterial (LHM) structures are analyzed and measured to verify consistent behavior between theory and measurements. The structures are simulated using a commercial software program and a retrieval algorithm is used to determine the effective constitutive parameters. The constitutive parameters are used to predict the behavior of the metamaterial under various configurations. Measurements are conducted to verify the presence of a negative index of refraction. Transmission through an LHM slab from several incidences is shown to be consistent with theory. A four-port device utilizing the dispersive nature of an LHM prism is designed and measured. The measurements show that the refraction angle of an incident signal is frequency dependent. Two ports are constructed to receive the positively refracted and negatively refracted power. In the frequency band where the incident signal cannot propagate in the LHM prism, the power is reflected from the interface towards a third measurement port. The three ports are shown to achieve unique mutually exclusive bandwidths. A general study is conducted on the design of such a device. Finally, the use of a left-handed metamaterial as a substrate for a microstrip line is investigated.
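For reference, the textbook relation behind the negative-refraction measurements (a standard result, not specific to this thesis): refraction at an interface obeys Snell's law, and a negative index places the refracted beam on the same side of the normal as the incident beam:

```latex
% Snell's law at an interface (standard result, for orientation):
n_{1}\sin\theta_{i} \;=\; n_{2}\sin\theta_{t},
% so for a left-handed medium with n_2 < 0 the transmitted angle \theta_t
% is negative: the beam bends to the same side of the normal, which is
% the signature of negative refraction verified in the measurements.
```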
Contribution of three-dimensional sound to the human-computer interface
Sound inherently has a spatial quality, an ability to be localized in three dimensions. This is the essence of 3-D, or spatial, sound. A system capable of recording sounds as digitized samples and playing them back in a localized fashion was developed in the course of this research. This sound system combines special hardware and interactive software to create a system more flexible and powerful than previous systems. The spatial qualities of 3-D sound contribute to man's ability to interact with sound as data. An application which capitalized on these qualities was developed, allowing the user to interact with 3-D sound in a spatial environment. This application, called the Spatial Audio Notemaker, was not unlike a bulletin board, where the paper notes were recorded messages and the bulletin board was the user's environment. Using the Spatial Audio Notemaker, exploration into the manipulation of 3-D sound and the necessary interaction (using voice and gesture) and feedback (both visual and audio) to aid in this manipulation was accomplished.
Architectural epidemiology : a computational framework
Architecture affects our health, especially in hospitals. However, our ability to learn from existing hospitals to design buildings that improve patient outcomes is limited. If we want to leverage large datasets of health outcomes to build knowledge about how architecture affects health, then we need new methods for analyzing spatial data and health data jointly. In this thesis, I present several steps toward a computational model of architectural epidemiology that leverages both human and machine intelligence to this end. First, I outline the need for structured architectural datasets that capture spatial information in schemas that current drawing formats do not allow. These datasets need to be wide, to capture multifaceted and qualitative aspects of the built environment, and so we need new methods to generate this data. Finally, we need strategies for surfacing insight from these datasets by involving both humans and machines in the process.
Decentralized detection in sensor network architectures with feedback
We investigate a decentralized detection problem in which a set of sensors transmit a summary of their observations to a fusion center, which then decides which one of two hypotheses is true. The focus is on determining the value of feedback in improving performance in the regime of asymptotically many sensors. We formulate the decentralized detection problem for different network configurations of interest under both the Neyman-Pearson and the Bayesian criteria. In a configuration with feedback, the fusion center would make a preliminary decision which it would pass on back to the local sensors; a related configuration, the daisy chain, is introduced: the first fusion center passes the information from a first set of sensors on to a second set of sensors and a second fusion center. Under the Neyman-Pearson criterion, we provide both an empirical study and theoretical results. The empirical study assumes scalar linear Gaussian binary sensors and analyzes asymptotic performance as the signal-to-noise ratio of the measurements grows higher, to show that the value of feeding the preliminary decision back to decision makers is asymptotically negligible. This motivates two theoretical results: first, in the asymptotic regime (as the number of sensors tends to infinity), the performance of the "daisy chain" matches the performance of a parallel configuration with twice as many sensors as the classical scheme; second, it is optimal (in terms of the exponent of the error probability) to constrain all decision rules at the first and second stage of the "daisy chain" to be equal.
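In error-exponent terms (standard asymptotic notation, assumed here for illustration), the second theoretical result can be phrased as follows: if the error probability of an optimal parallel configuration with n sensors decays as e^(-nE), then the daisy chain with n sensors per stage achieves the performance of 2n parallel sensors:

```latex
% Error-exponent phrasing (notation assumed for illustration):
E \;=\; \lim_{n\to\infty} -\frac{1}{n}\log P_{e}(n),
\qquad
P_{e}^{\mathrm{daisy}}(n) \;\approx\; P_{e}^{\mathrm{parallel}}(2n)
\;\approx\; e^{-2nE}.
```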
Computational bounce flash for indoor portraits
Portraits taken with direct flash look harsh and unflattering because the light source comes from a small set of angles very close to the camera. Advanced photographers address this problem by using bounce flash, a technique where the flash is directed towards other surfaces in the room, creating a larger, virtual light source that can be cast from different directions to provide better shading variation for 3D modeling. However, finding the right direction to point a bounce flash towards requires skill and careful consideration of the available surfaces and subject configuration. Inspired by the impact of automation for exposure, focus and flash metering, we automate control of the flash direction for bounce illumination. We first identify criteria for evaluating flash directions, based on established photography literature, and relate these criteria to the color and geometry of a scene. We augment a camera with servomotors to rotate the flash head, and additional sensors (a fisheye and 3D sensors) to gather information about potential bounce surfaces. We present a simple numerical optimization criterion that finds directions for the flash that consistently yield compelling illumination and demonstrate the effectiveness of our various criteria in common photographic configurations.
Uncovering the dynamics of everyday life through playful modeling
It is not easy to understand the dynamics underlying everyday life. The change around us is ubiquitous; the processes governing change are invisible; the relationships between cause and effect are usually disconnected in time or space; and probabilistic causation adds uncertainty to the mix. This dissertation is about a new modeling language and a tangible simulation environment that together help children gain an intuitive understanding of the dynamics underlying everyday life phenomena, from fashion trends and financial market fluctuations to vicious cycles of violence and virtuous cycles of popularity growth. I present the Flowness modeling language, a unique combination of Systems Thinking languages that results in an intuitive-to-understand yet computationally simulatable language. I present FlowBlocks: a tangible learning technology designed in the spirit of early childhood construction kits (a field pioneered by Friedrich Froebel), with special attention to physical representation of abstract concepts (a field pioneered by Maria Montessori). FlowBlocks are a set of wooden blocks with embedded computation that simulate continuous flow using a moving light signal, making dynamic processes visible and manipulable.
Construction trades training facility for the eastern Canadian Arctic
On April 1, 1999, the Inuit of the Eastern Canadian Arctic achieved sovereignty over a new territory, Nunavut, envisioning economic self-reliance, political self-determination, and renewal of confidence in Inuit community. Life in Nunavut, however, remains circumscribed by adversities: poverty, crowded houses, and long winters. Both government and industry are constrained by inexperienced administration and insufficient budgets. Perhaps no sector is as challenged as the construction industry, caught between the vast demand of a housing crisis and the extreme cost of importing labor. The territory must invest in building skills to reduce the cost of housing. Trades training in the Eastern Arctic will have political, cultural, and economic significance for a community long dependent on remote governments and migrant workers. Moreover, local tradesmen will be indispensable to an affordable construction strategy for community buildings serving a population expanding at twice the national rate. Over the course of fifty years of permanent settlement in Nunavut, no construction system has yet been devised for civic spaces that respond to its social, physical, and logistical conditions.
Sustainable and equitable urban environments in Asia
This study identifies some of the factors and conditions that can encourage the development of sustainable and equitable urban environments. It argues that cities will continue to grow and that it is not productive to view that growth as a crisis or a tragedy; instead it must be seen as a challenge for the future. The urban policies that have evolved over the last several decades have combined the role of government agencies, private-sector investment, and community involvement. Projects undertaken in developing countries are often supported by international development agencies seeking to promote cooperative ventures through pilot or demonstration projects. This study, however, suggests that it is time to move on and to incorporate the lessons learned from these demonstrations into full-scale local and national urban-management strategies. Developing criteria for sustainable and equitable housing and urban services is the next goal. Among them, this study argues, is the need to reduce inequity in the way housing and urban services are planned and developed. To do this two interrelated approaches are suggested: one is to increase the choices that the community is given and create conditions that promote community decision-making; the other is to optimize the role played by government agencies, private-sector organizations, community groups, non-government agencies, and other local groups. Several projects in Asia and South Asia were evaluated to determine the process by which new housing programs are planned and developed, the kinds of decisions taken, and the roles played by the various participating groups. The role of non-government organizations and community organizations in settlement upgrading programs; the advantages and risks of private sector involvement; and the potential role of community groups, non-government organizations, private developers, government agencies, and housing finance institutions in new housing projects, were also evaluated. The study concludes by showing that housing and urban-services programs have a better chance of becoming sustainable and equitable if they are developed through consensus rather than confrontation, and when private-sector involvement is encouraged and promoted under conditions that are clearly understood and instituted. The study also concludes that community accountability and decision-making must be increased, local management promoted, and program components in which the community has a larger implementing role introduced. Similarly, the role of small-scale building contractors must be enhanced, and the needs of the broadened client groups understood and reflected in planning and design. Finally, site design for urban developments has to be integrated into the larger community and respect the needs of its immediate surroundings. Many of the suggestions and proposals offered here are not broad strategies, but suggestions for feasible ways of improving society's chances of solving its urban development problems. They are not blueprints, but simply ideas for generating new approaches that will deal more adequately with the immediate and increasingly severe housing shortage, and recommend actions for preventing difficulties that may otherwise arise in the future. Finally, the recommendations in this study are strategic, not project-oriented; in their implementation the locus of responsibility rests with the cities themselves.
Nonadiabatic electron transfer in the condensed phase, via semiclassical and Langevin equation approach
In this dissertation, we discuss two methods developed during my PhD study to simulate electron transfer systems. The first method, the semi-classical approximation, is derived from the stationary phase approximation to the path integral in the spin-coherent representation. The resulting equation of motion is a classical-like ordinary differential equation subject to a two-ended boundary condition. The boundary value problem is solved using the "near real trajectory" algorithm. This method is applied to three scattering problems to compute the transmission and reflection probabilities. The strengths and weaknesses of this approach are investigated in detail. The second approach is based on the generalized Langevin equation, in which the quantum transitions of electronic states are condensed into a linear regression equation. The memory kernel in the regression equation is computed using a second-order perturbation expansion. The perturbation is optimized to achieve the best convergence of the second-order expansion. This procedure results in a two-hop Langevin equation, the THLE. Results from a spin-boson system validate the THLE in a wide range of parameter regimes. Lastly, we tested the feasibility of using Monte Carlo sampling to compute the memory kernel from the spin-boson system and proposed a smoothing technique to reduce the number of sampling points.
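Schematically, the regression equation has the generic memory-kernel form below (the notation is an assumption for illustration, not the thesis's own): the electronic population P(t) relaxes under a kernel K built perturbatively about an optimized reference:

```latex
% Generic memory-kernel (generalized-Langevin-type) form, for orientation:
\frac{dP(t)}{dt} \;=\; -\int_{0}^{t} K(t-s)\,P(s)\,ds,
% with K computed via a second-order expansion about an optimized
% perturbation, yielding the two-hop Langevin equation (THLE).
```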
Field-induced orientation of semicrystalline and non-crystalline block copolymer microdomain patterns
Various block copolymer microdomain structures are controlled in bulk as well as in thin film by employing flow fields, directional solidification, and substrates. In bulk systems, flow fields generated by the 'roll casting' process orient amorphous cylindrical microdomains along the flow direction in a semicrystalline block copolymer. Subsequent crystallization of the crystalline block is significantly influenced by the pre-existing oriented amorphous cylindrical microdomains. The orientation of crystalline lamellae is achieved parallel to the cylinder axis, completely suppressing spherulite formation. Microdomain structures of block copolymers are also controlled in thin films by directional solidification of a crystallizable solvent. This new method is based on the use of crystalline organic materials, which are solvents for the block copolymers above their melting temperatures. The directional crystallization of the solvent induces the directional microphase separation of the block copolymer. Furthermore, the flat (001) crystal face of benzoic acid or anthracene provides both a surface for epitaxy of a semicrystalline polyethylene block as well as a confining surface for the thin polymer film which forms between the crystallizing solvent and the glass or silicon wafer substrate. Several semicrystalline and non-crystalline block copolymers were directionally solidified using a crystallizable solvent. A bi-axially ordered edge-on crystalline lamellar structure is obtained due to the epitaxy between a melt-compatible semicrystalline block copolymer and benzoic acid single crystal. Directional solidification generates vertically aligned lamellar and cylindrical microdomain structures of conventional non-crystalline block copolymers such as PS/PMMA and PS/PI.
Symplectic isotopy for cuspidal curves
This work has three purposes. The first one is to prove unobstructedness of deformation of pseudoholomorphic curves with cusps and tacnodes. We show that if the first Chern class of a 4-dimensional symplectic manifold is sufficiently positive then the deformation is unobstructed. We prove this result when the curves have cusps and nodes, not in a prescribed position. We also prove a similar result when the curves have cusps and tacnodes in a prescribed position with a prescribed tangency and in addition nodes, not in a prescribed position. The second part of this work deals with the local symplectic isotopy problem for cuspidal curves. Let B be the unit ball in R⁴ with the standard symplectic form ω_st. Let J₀ be an ω_st-tame almost complex structure. Let C₀ ⊂ B be a connected J₀-holomorphic curve in B with an isolated singularity at 0 ∈ B and without multiple components. Assume in addition that the boundary ∂C₀ is smoothly embedded. We prove that any two connected, reduced pseudoholomorphic curves in B, with the same number of irreducible components, the same number of nodal points and at most one ordinary cusp point, both sufficiently close to C₀, are symplectically isotopic to each other. The third part of this work deals with the global symplectic isotopy problem. As an application of unobstructedness of deformation, we show that any irreducible rational pseudoholomorphic curve in CP² of degree d, with only nodes and m ordinary cusps as its singularities, is symplectically isotopic to a holomorphic curve as long as d > m.
Understanding the active sites and reaction mechanism of water oxidation on metal oxides
Solar energy irradiating the Earth's surface exceeds human energy consumption by four orders of magnitude, and the key to alleviating the global energy crisis lies in efficiently harnessing it. An ideal means of storing surplus solar energy is to convert it to hydrogen using proton exchange membrane water electrolyzers, which are amenable to integration with solar devices due to their high performance under fluctuating power input. Water oxidation to molecular oxygen is the most energy-intensive part of the water splitting process, limiting the overall efficiency of water splitting devices. Rutile ruthenium dioxide (RuO₂) is a gold-standard catalyst for water oxidation in acidic solutions. It can also undergo fast surface redox reactions in the electrochemically stable potential window of water, making it an ideal material for electrochemical capacitors that can charge and discharge on a much shorter time scale than batteries.
Simulation development and analysis of attitude-control system architectures for an astronaut mobility unit
Control-moment gyroscopes (CMGs) are spacecraft attitude-control actuators which control the spacecraft's orientation and pointing. CMGs operate on electrical power and obey the conservation of angular momentum. Single-gimbal CMGs are equipped with a high-speed flywheel which can be gimbaled to impart gyroscopic torques. The net reaction torques are observed by the spacecraft, resulting in pure rotation. A CMG-based attitude control system (ACS) is favorable compared to a cold gas thruster ACS because of fundamental differences in how the reaction torques are produced. CMGs provide a continuous range of motion while RCS thrusters are limited by the minimum on-off time for the thruster valves. This minimum open-close time leads to a bang-bang response as opposed to the smoother CMG response. Furthermore, CMGs are powered using batteries and can therefore be recharged, while RCS thrusters use propellant which depletes over time. CMG sizing, the act of designing and choosing the electrical and mechanical parameters for a given spacecraft ACS, is studied in this thesis. The CMG sizing tool analyzes the specific system configuration (i.e. mass properties, thruster location and placement, CMG architecture, etc.) and the mission and system requirements to provide an "idealized" CMG model. Detailed simulation results and recommendations are presented for the design and analysis of the Mobility Augmenting Jetpack with Integrated CMGs (MAJIC) system. The CMG sizing software acts as a parametric tool which can be adapted to any spacecraft system.
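As a point of reference, the relation that makes single-gimbal CMGs attractive, and that any sizing exercise builds on, is the standard gyroscopic cross product (a textbook form, not the thesis's sizing model):

```latex
% Stored flywheel momentum and single-gimbal CMG output torque (standard relations)
\mathbf{h} = I_{w}\,\omega_{w}\,\hat{\mathbf{s}} ,
\qquad
\boldsymbol{\tau} = \dot{\delta}\,\hat{\mathbf{g}} \times \mathbf{h}
```

Here I_w ω_w is the flywheel angular momentum about the spin axis ŝ, ĝ is the gimbal axis, and δ̇ is the gimbal rate; a modest gimbal rate leverages a large stored momentum into a large output torque, which is why sizing centers on flywheel inertia, wheel speed, and gimbal-motor limits.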
Novel approaches to Newtonian noise suppression in interferometric gravitational wave detection
The Laser Interferometer Gravitational-wave Observatory (LIGO) attempts to detect ripples in the curvature of spacetime using two large-scale interferometers. These detectors are several-kilometer-long Michelson interferometers with Fabry-Perot cavities between two silica test masses in each arm. Given the distances to the astrophysical phenomena of interest, LIGO must be sensitive to relative displacements of 10^-18 m and thus requires multiple levels of noise reduction to ensure the isolation of the interferometer components from numerous sources of noise. A substantial contributor to the Advanced LIGO noise in the 1-10 Hz range is Newtonian (or gravity gradient) noise, which arises from local fluctuations in the Earth's gravitational field. Density fluctuations from seismic activity as well as acoustic and turbulent phenomena in the Earth's atmosphere both contribute to slight variations in the local value of g. Given the direct coupling of gravitational fields to mass, the LIGO test masses cannot be shielded from this noise. In an attempt to characterize and reduce Newtonian noise in interferometric gravitational wave detectors, we investigate seismic and atmospheric contributions to the noise and consider the effect of submerging a gravitational wave detector.
Analysis of the accessory business : focus on electromechanical grips
Today, many manufacturing companies are facing numerous challenges that were not present in the past. The paradigm of how companies must perform has dramatically changed over the years. Back in the 1980s, customer service was used as a tool to gain competitive advantage. Now, good customer service is expected from the vendor, and few companies survive if they don't embrace best-of-breed practices such as this one. In addition, quality, cost and delivery time have become intrinsic values for the consumer. Not only do products need to come at a lower cost, but they also need to be of higher quality and be delivered promptly. Instron Corporation is one of the companies searching for ways to remain the industry leader given the fierce competition it faces. This company sells electromechanical testing machines and has a large aftermarket for accessories for these machines. While the company has placed a lot of effort into certain areas, others have been completely neglected. This project focuses on the accessory business of the electromechanical systems. The intent has been to identify the major problems that the accessory business faces and provide the company with a set of tools and guidelines that will help it perform more effectively. Due to time constraints, the research was done on one segment of the accessory business, the grips. Therefore, this thesis should be used as a template for the rest of the accessory business. Topics included are: product rationalization, redesign with product platforms, and an inventory model to reduce existing inventory investments and increase inventory turns.
Topological Hochschild homology of twisted group algebras
Let G be a group and A be a ring. There is a stable equivalence of orthogonal spectra ... between the topological Hochschild homology of the group algebra A[G] and the smash product of the topological Hochschild homology of A and the cyclic bar construction of G. This thesis generalizes this result to a twisted group algebra A^τ[G]. As an A-module, A^τ[G] = A[G], but the multiplication is given by ag · a'g' = a g(a') gg', where G acts on A from the left through ring automorphisms. The main result is given in terms of a variant THH^g(A) of the topological Hochschild spectrum that is equipped with a twisted cyclic structure inherited from the cyclic structure of the cyclic pointed space THH(A)[-]. We first define a parametrized orthogonal spectrum E(A, G) over the cyclic bar construction N^cy(G). We prove there is a stable equivalence of spectra between the associated Thom spectrum of E(A, G) and THH(A^τ[G]). We then prove there is a stable equivalence of orthogonal spectra ... where the wedge-sum on the left hand side ranges over the conjugacy classes of elements of G and the equivalence depends on a choice of representative g ∈ ⟨g⟩ of every conjugacy class of elements in G.
Studies of exon scrambling and mutually exclusive alternative splicing
The goals of this thesis work were to study two special alternative splicing events: exon scrambling at the RNA splicing level and mutually exclusive alternative splicing (MEAS) by computational and experimental methods. Chapter 1 presents work on the study of exon scrambling, in which exons are spliced at canonical splice sites but joined together in an order different from that predicted by the genomic sequence. The public expressed sequence tag (EST) database was searched for transcripts containing scrambled exons. Stringent criteria were used to exclude genome annotation or assembly artifacts. This search identified 172 human ESTs representing 90 exon scrambling events, which derive from 85 different human genes. In several cases, the scrambled transcripts were validated using an RT-PCR-sequencing protocol, confirming the reproducibility of these unusual events. Exon scrambling of transcripts from the GLI3 gene, which encodes a transcription factor involved in hedgehog signaling, was also conserved in mouse. Specific gene features, including the presence of long flanking introns, were found to be associated with exon scrambling. Chapter 2 deals with mutually exclusive alternative splicing (MEAS), in which only one of a set of two or more exons in a gene is included in the final transcript. A database with 101 human genes and 25 mouse genes containing mutually exclusive exons (MXE) has been established with GENOA annotation software. Specific sequence features were analyzed. A genome-wide search for special "tandem MEAS" events was undertaken and 10 such human genes were identified. A fluorescence reporting system was built to study intronic cis-elements regulating MEAS.
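The computational core of the scrambling screen can be illustrated with a short sketch: given an EST-to-genome alignment expressed as the genomic order of its exons, a transcript is flagged when that order is non-monotonic. This is a conceptual illustration with hypothetical data, not the thesis's pipeline, which additionally applies the stringent filters against annotation and assembly artifacts described above.

```python
def is_scrambled(exon_order):
    """True if exons, read 5'->3' along the transcript, violate genomic order.

    exon_order: genomic indices of the aligned exons in transcript order,
    e.g. [1, 2, 4, 3] -> exon 4 precedes exon 3, so the EST is scrambled.
    """
    return any(nxt < cur for cur, nxt in zip(exon_order, exon_order[1:]))

# hypothetical EST alignments (exon indices in transcript order)
alignments = {"EST_A": [1, 2, 3, 4], "EST_B": [2, 1, 3], "EST_C": [1, 3, 2]}
print(sorted(name for name, order in alignments.items() if is_scrambled(order)))
# ['EST_B', 'EST_C']
```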
Noncommutative ring spectra
Let A be an A∞ ring spectrum. We give an explicit construction of topological Hochschild homology and cohomology of A using the Stasheff associahedra and another family of polyhedra called cyclohedra. Using this construction we can then study how THH(A) varies over the moduli space of A∞ structures on A, a problem which seems largely intractable using strictly associative replacements of A. We study how topological Hochschild cohomology of any 2-periodic Morava K-theory varies over the moduli space of A∞ structures and show that in the generic case, when a certain matrix describing the multiplication is invertible, the result is the corresponding Morava E-theory. If this matrix is not invertible, the result is some extension of Morava E-theory, and exactly which extension we get depends on the A∞ structure. To make sense of our constructions, we first set up a general framework for enriching a subcategory of the category of noncommutative sets over a category C using products of the objects of a non-Σ operad P in C. By viewing the simplicial category as a subcategory of the category of noncommutative sets in two different ways, we obtain two generalizations of simplicial objects.
Ship collision and the OFNP : analysis of possible threats and security measures
The OFNP research group in the Nuclear Science and Engineering Department at MIT is developing a power plant that combines two well-established technologies -- light water reactors and offshore platforms -- into a new design called the Offshore Floating Nuclear Plant (OFNP). Deploying a nuclear reactor aboard a floating platform up to 12 nautical miles into the ocean raises unique security questions and considerations. This investigation presents a framework for analyzing the threat of intentional ship collision, modeling damage and characterizing the effectiveness of potential solutions, as well as integrating or adapting the recommended security strategies into existing regulatory and legal environments. First, a collision risk assessment is completed and a postulated design-basis collision threat (DBT) is determined to be a 150,000 DWT ship. Next, using the DBT characteristics and the finite element modeling software ABAQUS, estimations for damage are provided for a reference case and for cases with variations in collision characteristics. Results indicate increased ship penetration from faster and larger ships, wedge-shaped ship hulls, fixed OFNP moorings, direct broadside collisions, and OFNP designs with less internal structural support. Additionally, in order to minimize risk of unacceptable damage, the results indicate that vessels larger than 70,000 DWT should be restricted from entering within an eight-nautical mile exclusion zone. The results from the previous assessments are then used to present technical, operational, and regulatory recommendations for damage mitigation. The analysis concludes with an assessment of the existing regulatory and legal environments in which the regulatory solutions would have to be implemented, provides an analysis of the degree to which the ideal regulations comply with existing laws, and then culminates with the presentation of further recommendations and a regulatory strategy framework for meeting security goals while achieving legal compliance. In summary, this investigation considers the threat of intentional collision with an Offshore Floating Nuclear Plant and utilizes risk assessment techniques, numerical modeling, and legal research to contextualize the threat, model possible damage, and present technical, operational, and regulatory solutions for avoiding or mitigating damage.
Interdiffusivity in titanium-tantalum alloys processed at 1473 K
Titanium-tantalum (Ti-Ta) alloys are likely to have a high biocompatibility and corrosion resistance that renders them novel materials of interest for biomedical applications [7, 14, 2]. With high strength and a low elastic modulus, Ti-Ta alloys have attracted attention as candidates for such uses as hip replacements [2]. A current challenge impeding use of these alloys is that, with a melting temperature of 3269 K for Ta, homogeneous alloys involving Ta are difficult to produce by conventional melting practice [3]. Because most structural changes occur via diffusion, the objective of this work was to gain insight into this matter through determination of the interdiffusivity in Ti-Ta alloys. A scanning electron microscope was utilized to perform energy dispersive x-ray analysis on Ti-Ta alloy samples in the range of 20 to 60 weight percent (wt %) Ta. A computational model that employed Fick's second law was used to extract interdiffusivity values from the data. Interdiffusivity values, which ranged from 4.0 × 10^-13 m^2/s for 20 wt % Ta to 3.0 × 10^-14 m^2/s for 60 wt % Ta, exhibited a systematic variation with composition. The interdiffusion coefficient was seen to decrease with increasing weight fraction Ta.
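To make the extraction step concrete, the sketch below fits the constant-D error-function solution of Fick's second law to a synthetic diffusion-couple profile. The anneal time, compositions, and noise level are invented for illustration, and a real analysis of composition-dependent interdiffusivity would use a Boltzmann-Matano-type treatment rather than this constant-D assumption.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

ANNEAL_TIME = 3600.0  # seconds; assumed for illustration

def couple_profile(x, D, c1, c2, x0):
    """Constant-D solution of Fick's second law for an infinite diffusion
    couple: an error-function composition profile centered at x0."""
    return c1 + 0.5 * (c2 - c1) * (1 + erf((x - x0) / (2 * np.sqrt(D * ANNEAL_TIME))))

# synthetic EDX line scan: position (m) vs. Ta weight fraction, with noise
x = np.linspace(-50e-6, 50e-6, 41)
c = couple_profile(x, 2e-13, 0.20, 0.60, 0.0)
c += np.random.default_rng(0).normal(0.0, 0.005, x.size)

popt, _ = curve_fit(couple_profile, x, c, p0=[1e-13, 0.2, 0.6, 0.0])
print(f"fitted interdiffusivity D = {popt[0]:.2e} m^2/s")
```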
High throughput three-dimensional tissue cytometry
This thesis presents the ongoing technological development of high throughput 3-D tissue cytometry and its applications in biomedicine. 3-D tissue cytometry has been developed in our laboratory based on two-photon microscopy (TPM) that is capable of in situ 3-D imaging of tissue up to a volume of several mm3 with subcellular resolution. This high throughput tissue cytometry achieves an imaging rate of 2 mm3/hour. I optimize the performance of this instrument by developing two new techniques. First, the image signal-to-noise ratio and the microscope penetration depth can be improved by reducing tissue scattering. Optical clearing agents can significantly lower tissue scattering by index matching of different tissue constituents. While the application of optical clearing agents has been extensively studied in fresh tissues, it has not been in paraffin-fixed and frozen tissues. Frozen tissues are particularly important as tissue sections can be retained for biochemical and genetic analysis after fluorescence imaging. We investigate the effects of optical clearing agents at sub-zero temperatures in terms of TPM image contrast and penetration depth. Second, tissue cytometry is often used in the detection of rare cells.
Multiple-part-type production scheduling for high volume manufacturing (time-based approach)
An effective production scheduling strategy would lead to efficient production line performance as well as increased profit. However, there is no fixed or generalized solution. In this thesis, nonlinear programming and the time-based Control Point Policy were applied in sequence to solve the production scheduling problems in a high-volume industry. The strategy provided the company a systematic way to tackle production problems. A distinct tradeoff between average inventory and frequency of changeover is observed. A recommended selection is made based on minimizing total cost (inventory holding cost and changeover cost). Compared with current line behavior, the recommended selection will reduce the total cost by more than half.
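The inventory-changeover tradeoff can be sketched in a few lines: sweeping the changeover frequency trades cycle-stock holding cost against changeover cost, and the recommended operating point minimizes their sum. All numbers below are illustrative placeholders, not the thesis's data.

```python
HOLDING_COST = 0.50       # $ per unit per day (placeholder)
CHANGEOVER_COST = 800.0   # $ per changeover (placeholder)
DAILY_DEMAND = 1200       # units per day across part types (placeholder)

def total_daily_cost(changeovers_per_day):
    # Fewer changeovers -> longer runs -> more average cycle stock.
    avg_inventory = DAILY_DEMAND / (2.0 * changeovers_per_day)
    return HOLDING_COST * avg_inventory + CHANGEOVER_COST * changeovers_per_day

candidates = [0.5, 1, 2, 3, 4, 6, 8]
cost, rate = min((total_daily_cost(n), n) for n in candidates)
print(f"minimum total cost {cost:.0f} $/day at {rate} changeovers/day")
```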
Somatic retrotransposition in the cancer genome
Cancer is a complex disease of the genome exhibiting myriad somatic mutations, from single nucleotide changes to various chromosomal rearrangements. The technological advances of next-generation sequencing enable high-throughput identification and characterization of these events genome-wide using computational algorithms. Retrotransposons comprise 42% of the human genome and have the capacity to "jump" across the genome in a copy-and-paste manner. Recent studies have identified families of retrotransposable elements that are currently active. In fact, retrotransposons constitute a major source of human genetic variation, and somatic retrotransposon insertions have been implicated in several cancers, including an insertion into the APC tumor suppressor in a colorectal tumor. Because of the highly repetitive nature of these elements, however, the full extent of somatic retrotransposon movement across cancer remains largely unexplored. To this end, we developed TranspoSeq, a computational framework that identifies retrotransposon insertions from paired-end whole genome sequencing data, and TranspoSeq-Exome, a tool that localizes these insertions from whole-exome data. TranspoSeq identifies novel somatic retrotransposon insertions with high sensitivity and specificity in simulated data and with a 94% validation rate via site-specific PCR. Next, we applied these methods to whole genomes from 200 tumor/normal pairs and whole-exomes from 767 tumor/normal pairs across 11 tumor types as part of The Cancer Genome Atlas (TCGA) Pan-Cancer Project. We discover more than 800 somatic retrotransposon insertions primarily in lung squamous, head and neck, colorectal and endometrial carcinomas, while glioblastoma multiforme and acute myeloid leukemia show no evidence of somatic retrotransposition. Moreover, many somatic retrotransposon insertions occur in known cancer genes. TranspoSeq-Exome uncovers 35 additional somatic retrotransposon insertions into exonic regions, including an insertion into an exon of the PTEN tumor suppressor in endometrial cancer. Finally, we integrate orthogonal genomic and clinical data to characterize features of retrotransposon insertion and samples that exhibit extensive somatic retrotransposition. We present a large-scale, comprehensive analysis of retrotransposon movement across tumor types using next-generation sequencing data. Our results suggest that somatic retrotransposon insertions may represent an important class of tumor-specific structural variation in cancer and future studies should incorporate this form of somatic genome aberration.
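The core signal such tools exploit can be sketched simply: read pairs in which one mate maps uniquely to the reference while the other maps to a retrotransposon consensus are clustered by position, and well-supported clusters become candidate insertions. This is a conceptual sketch with hypothetical inputs, not TranspoSeq itself, which additionally performs tumor-versus-normal comparison, breakpoint refinement, and filtering.

```python
from collections import defaultdict

def cluster_discordant_pairs(pairs, window=500, min_support=3):
    """pairs: iterable of (chrom, pos, repeat_family) for uniquely mapped
    reads whose mates align to a repeat consensus (e.g. 'L1', 'Alu', 'SVA').
    Returns candidate insertion clusters keyed by (chrom, repeat_family)."""
    clusters = defaultdict(list)
    for chrom, pos, family in sorted(pairs):
        key = (chrom, family)
        if clusters[key] and pos - clusters[key][-1][-1] <= window:
            clusters[key][-1].append(pos)      # extend the open cluster
        else:
            clusters[key].append([pos])        # start a new cluster
    return {k: [c for c in v if len(c) >= min_support]
            for k, v in clusters.items()}
```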
Understanding the effect of protonation on the self-assembly of a model polyelectrolyte-neutral block copolymer
Charge-containing polymers are used in a wide variety of commercial products including fuel cell membranes, heat sealing packages, and golf ball covers. Traditionally made as random copolymers of charged and uncharged monomers, morphological understanding and control is limited due to the lack of long range order and small length scale of the structural inhomogeneities. Moreover, the charge functionality is typically introduced in a permanent way that is not modifiable after synthesis, locking in a chemistry and structure that may not be optimal for the ultimate application. This thesis develops and studies the morphologies of a model block copolymer which is controllably charged in a novel way: by protonating a weak base. This polymer is composed of one hydrophilic but uncharged block, poly(oligoethylene glycol methyl ether methacrylate) (POEGMA), and one weak polyelectrolyte, poly(2-vinylpyridine) (P2VP), which can be controllably charged by varying the amount of acid to which it is exposed. This thesis presents the synthesis and morphological characterization of this polymer using scanning probe microscopy and small angle X-ray scattering. First, the ability of P2VP protonation to change the morphology of the diblock is demonstrated; while miscible in the absence of charge, the diblock undergoes a disorder to order transition upon protonation by a variety of acids. Thin films with varying levels of polyelectrolyte protonation are created and the efficacy of several polar aqueous and organic annealing solvents is presented. The introduction of acid in either the vapor or liquid phase is also shown to induce microphase separation. This is followed by a thorough treatment of the bulk morphologies of POEGMA-P2VP as a function of acid content, temperature, and minority block volume fraction. For all diblocks, protonation is found to increase the segregation strength between the two blocks and disorder to order transitions are observed with increasing protonation and temperature. Polymers with minority block volume fractions closest to 0.5 are the most immiscible, while those richer in majority block require more acid and higher temperatures to demix. Finally, the effect of acid type is investigated in detail by the comparison of two monoprotic acids with one diprotic acid. The diprotic acid is found to be more efficient at inducing microphase separation than either monoprotic acid for two diblocks of differing composition.
Pricing bundles of products and services in the high-tech industry
The High-Tech industry faces tremendous complexity in product design because of the large number of different products that can be offered and the mix of products and services that exists. Information Technology (IT) products and services available in the market are increasing exponentially. Bundling appears in this industry as a natural mechanism to reduce complexity for sellers and buyers and to reduce variability in the customer's valuation of individual products. The first chapter of this dissertation discusses these issues. Chapter Two addresses the real-world problem of pricing bundles of IT services and products contracts when there is a high setup cost. Customers pay a fixed monthly fee. The company finances the hardware (HW) and software (SW) while the services and support are paid on a monthly basis out of the fee. The solution approach computes the monthly fee to be charged for every offered bundle, taking into account that customers may defect before the end of the contract. The dynamics of the system account for defection of current customers and arrival of new ones at each period. Optimal pricing policies and equilibrium points of the system are characterized. Chapter Three addresses the determination of the optimal bundle's composition and price while maximizing total expected profits. The setting is a high-tech company in a highly competitive environment that must build a bundle and put it out in the market. Bundles are built from a set of components that meet technical constraints. The customers' choice among competitors' bundles (not under the company's control) and the company's bundle (under its control) is modeled in a random utility framework. A nonlinear mixed integer programming formulation of the company's decision problem is presented and solved.
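The random-utility choice step in Chapter Three can be illustrated with a multinomial-logit sketch: the company's bundle captures a market share determined by its utility relative to competitors' bundles, and expected profit is share times margin. The utilities, costs, and market size below are invented for illustration; the thesis's actual formulation is a nonlinear mixed integer program over bundle composition as well as price.

```python
import numpy as np

def logit_share(u_company, u_competitors):
    """Multinomial logit share of the company's bundle, given deterministic
    utilities (random utility with i.i.d. Gumbel noise)."""
    u = np.array([u_company, *u_competitors])
    expu = np.exp(u - u.max())  # numerically stabilized softmax
    return expu[0] / expu.sum()

def expected_profit(price, unit_cost, base_utility, price_coef, u_rivals, market):
    share = logit_share(base_utility - price_coef * price, u_rivals)
    return market * share * (price - unit_cost)

prices = np.linspace(50, 300, 251)
profits = [expected_profit(p, 40.0, 6.0, 0.02, [3.5, 4.0], 10_000) for p in prices]
print(f"profit-maximizing price ~ {prices[int(np.argmax(profits))]:.0f}")
```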
The impact of the geographic distribution of design engineers on the pace of engineering development
The increasing use of digital design tools and broadband information networks is creating an environment that permits the geographic distribution of design engineers. In order to distribute engineering work successfully, the consequences need to be understood. Through the examination of records of project execution, this thesis investigates whether the decision to geographically distribute engineers has a measurable impact on the pace of engineering development. A task-based Design Structure Matrix (DSM) was developed and showed that the projects studied were developed using a highly integral process. It is hypothesized that the unanticipated consequences of distributing engineers geographically will slow the pace of engineering development to such an extent that costs incurred in protracted engineering development outweigh the benefits.
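A toy version of the task-based DSM makes the "highly integral process" finding concrete: with tasks listed in execution order, marks above the diagonal are feedback dependencies that force iteration, and iteration across distributed sites is where delay would accumulate. The tasks and couplings below are hypothetical, not those of the studied projects.

```python
import numpy as np

tasks = ["requirements", "layout", "analysis", "detail design", "test"]
# dsm[i, j] = 1 means task i needs an output of task j
dsm = np.array([
    [0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0],   # layout also waits on analysis -> feedback
    [1, 1, 0, 0, 1],   # analysis also waits on test   -> feedback
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
])
feedback = [(tasks[i], tasks[j])
            for i in range(len(tasks))
            for j in range(i + 1, len(tasks)) if dsm[i, j]]
print(feedback)  # [('layout', 'analysis'), ('analysis', 'test')]
```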
Analysis of open-loop pointing and closed-loop tracking with noisy platform attitude information
U.S. military assets' increasing need for secure global communications has led to the design and fabrication of airborne satellite communication terminals that operate under a protected security protocol. Protected transmission limits the closed-loop tracking options available to eliminate pointing error in the open-loop pointing solution. In an airborne environment, aircraft disturbances and noisy attitude information affect the open-loop pointing performance. This thesis analyzes the open-loop pointing and closed-loop tracking performance in the presence of open-loop pointing error and uncertainty in the received signal to assess hardware options relative to performance requirements. Results from the open-loop analysis demonstrate unexplained harmonics at integer frequencies while the aircraft is banked, azimuth and elevation errors independent of the inertial pointing vector and aircraft's yaw angle, and uncorrelated azimuth and elevation errors for aircraft pitch and roll angles of +/-10° and +/-30°, respectively. Several conclusions are drawn from the closed-loop tracking analysis. The distribution of the average noise power has a stronger influence than the distribution of the received isotropic power on the signal-to-noise ratio distribution. The defined step-tracking algorithm reduces pointing error in the open-loop pointing solution for a pedestal experiencing aircraft disturbances and random errors from the GPS/INS. The rate of performance improvement as a function of the number of hops is independent of the antenna aperture size and the GPS/INS unit. Pointing performance relative to the half-power beamwidth (HPBW) is independent of the antenna aperture size and GPS/INS unit on-boresight, but not off-boresight. With signal-to-noise ratios averaged over 100 hops and pointing biases less than or equal to 0.5 times the HPBW, the step-tracking algorithm reduces the pointing error to within 0.1 times the HPBW of the boresight for all tested configurations. The overall system performance is bounded by the open-loop pointing solution, which is based on hardware selection. Closed-loop tracking performance is a function of the number of sampled hops and is for the most part independent of the hardware selection.
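Step tracking of this kind reads as a hill climb on hop-averaged SNR: perturb one gimbal axis per hop and keep the move only if the averaged SNR improves. The sketch below assumes a `measure_snr(az, el)` callable returning hop-averaged SNR; it is a minimal illustration, not the thesis's defined algorithm.

```python
def step_track(measure_snr, az, el, step=0.05, hops=100):
    """Hill-climbing step track: alternate axes, keep SNR-improving moves.

    measure_snr(az, el): assumed to return the SNR averaged over one hop
    window at the commanded azimuth/elevation (degrees).
    """
    best = measure_snr(az, el)
    for hop in range(hops):
        axis = hop % 2  # 0: azimuth, 1: elevation
        for delta in (+step, -step):
            trial_az = az + delta if axis == 0 else az
            trial_el = el + delta if axis == 1 else el
            snr = measure_snr(trial_az, trial_el)
            if snr > best:
                az, el, best = trial_az, trial_el, snr
                break  # accept the first improving move this hop
    return az, el
```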
Design and process/measurement for immersed element control in a reconfigurable vertically falling soap film
Reinforcement learning has proven successful at harnessing the passive dynamics of underactuated systems to achieve least-energy solutions. However, coupled fluid-structural models are too computationally intensive for in-the-loop control in viscous flow regimes. My vertically falling soap film will provide a reconfigurable experimental environment for machine learning controllers. The real-time position and velocity data will be collected with a high-speed video system, illuminated by a low-pressure sodium lamp. By relating lines of interference within the soap film to known pressure variations, controllers will shape downstream flow to desired conditions. Though accurate measurement still eludes those without laser Doppler velocimetry, order-of-magnitude Reynolds numbers can be estimated to describe the regime of controller inquiry.
The urban waterfront in flux : accommodating uncertainty in Brooklyn
Urban waterfronts are host to every shade of a city's development. Once pulsating with trade and production, the very reason for the city's existence, the mid-20th century brought jarring macroeconomic shifts and technological change that left this vibrant edge largely abandoned. Nothing remains static at the shore; new value was found in the void amidst the remaining industry. Warehouses, factories, and waterfront infrastructure have often proven adaptable to the post-industrial city. As we continue to redevelop this urban waterfront, are our methods and institutions allowing for flexibility for the next wave of change? I argue that we could improve. As various actors with conflicting interests compete for space at the waterfront, their constructions lend a level of permanence to the built environment. Because the urban form is so enduring, we should seek to maximize flexibility in order to avoid the negative aspects of obsolescence and decline. In this research I investigate the forces that influence our development decisions, the reasons for each claim to the waterfront, and the processes by which one is prioritized over another through the lens of Brooklyn, New York. Brooklyn has a great diversity of land uses, industries, and demographics. Its history is colorful and has led to a present condition replete with challenge and opportunity along the shore. Residential development, industrial retention, maritime industry, green space, and access are some of the themes that need to be reconciled. Through its recent waterfront development we see clear evidence of societal values manifest in the built environment. It is imperative that we recognize the fleeting nature of even these as well as the exogenous variables that can swiftly transform our way of life. As the city experiences growth and decline, the waterfront in flux is host to both sides of the growth curve. Through both market outcomes and tools of government intervention, cities can seek to set the conditions to gracefully accommodate change and give those in the future a voice. Like a distant object looming on the horizon, the uncertain and the unforeseen are not so formidable if we plan for their imminent arrival.
Improving TaleBlazer analytics
TaleBlazer is a platform for creating and playing augmented reality location-based mobile games. TaleBlazer Analytics is an automated system for collecting and analyzing anonymized player data from these games. This thesis presents additions and improvements made to TaleBlazer Analytics to allow for a more in-depth view of data from individual games, as well as aggregated across games. The updated system will ultimately help researchers, game designers, partner organizations, and the TaleBlazer development team in better understanding how users play TaleBlazer games.
Design, fabrication and implementation of a flexure-based micropositioner for Dip Pen Nanolithography
Dip Pen Nanolithography (DPN) takes the concept of a quill-tip pen and shrinks it to the nanometer scale. DPN uses a machine to pick up and deposit proteins and liquids in arrays. A problem with the machine, however, is aligning the pen tip relative to the machine. Currently, it is aligned manually, which is time- and labor-intensive. It would drastically increase productivity and throughput if a machine were developed that could perform this task accurately and repeatedly. This would also allow quick tool changes for experiments involving multiple DPN processes. The impact of this alignment machine is that it solves problems not only for DPN machines, but also for atomic force microscopes and similar instruments. This thesis is about the design and implementation of this alignment machine. The user would arbitrarily place the pen tip on a ball mount. The ball mount would have three holes that are larger than three balls. The balls are held stationary, while the ball mount can move over them. An overhead camera is used to determine the actual and desired position of the ball mount relative to the balls. Once the ball mount reaches its desired position, the balls are glued in place using UV-cured epoxy. This half of a kinematic coupling would then attach to the other half of a kinematic coupling on the DPN machine. The repeatability of the ball mount holder was tested and has an in-plane 1σ repeatability of 15.9 µm in translation and 0.0122 rad in rotation. This can be improved, and further work is suggested.
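The camera-guided alignment step amounts to solving for the in-plane rigid transform that carries the three measured ball centers to their desired positions, a two-dimensional Kabsch/Procrustes solve. The sketch below is a plausible implementation of that step under assumed inputs (3x2 arrays of centers from the overhead camera), not the thesis's actual software.

```python
import numpy as np

def plane_correction(measured, desired):
    """Least-squares in-plane rotation R and translation t such that
    desired ~ R @ measured + t, from 3x2 arrays of (x, y) ball centers."""
    mc, dc = measured.mean(axis=0), desired.mean(axis=0)
    H = (measured - mc).T @ (desired - dc)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dc - R @ mc
    theta = np.arctan2(R[1, 0], R[0, 0])     # stage rotation command
    return R, t, theta
```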
A real-time 256 x 256 pixel parallel image processing system
This thesis explores how a Pixel-Parallel Image Processor (PPIP) chip serves as the basis for a real-time low-level image processing system as used in a machine vision based intelligent vehicle control application. Utilizing a processor-per-pixel scheme, the PPIP integrates 64 x 64 Processing Elements (PEs) on a single chip. Multiple chips can be arrayed to process larger pixel images. Previous work had been done to test and demonstrate the PPIP chip. A data-path and controller board are used in conjunction with a 2 x 2 array (four-chip) PPIP test board to process 128 x 128 pixel images. A revision of the PPIP chip has been tested and characterized. A compact printed-circuit board design utilizes a 4 x 4 array of 16 PPIP chips to process a 256 x 256 pixel image. Logic was designed to govern data transfer to and from the chips and to govern communication with the existing data-path and controller hardware. Small in size and requiring no test equipment, the PE Array board is suitable for demonstrating an intelligent vehicle control system. While supporting the existing test and demonstration system, the PE Array board is flexible enough to be incorporated into future systems. Although legacy communication protocols limit the data-path to one input image, future designs, for example, will be able to utilize multiple input images.
Energy-efficient complementary metal oxide semiconductor interface to CNT sensor arrays
A carbon nanotube (CNT) is considered a candidate for a next-generation chemical sensor. CNT sensors are attractive as they allow room-temperature sensing of chemicals. From the system perspective, this signifies that the sensor system does not require any micro hotplates, which are one of the major sources of power dissipation in other types of sensor systems. Nevertheless, poor control of the CNT resistance poses a constraint on the attainable energy efficiency of the sensor platform. An investigation of the CNT sensors shows that the dynamic range of the interface should be 17 bits, while the resolution at each base resistance should be 7 bits. The proposed CMOS interface extends upon previously published work to optimize the energy performance through both architecture- and circuit-level innovations. The 17-bit dynamic range is attained by distributing the requirement into a 10-bit Analog-to-Digital Converter (ADC) and an 8-bit Digital-to-Analog Converter (DAC). An extra bit leaves room for any unaccounted subblock performance error. Several system-level all-digital calibration schemes are proposed to account for DAC nonlinearity, ADC offset voltage, and a large variation in CNT base resistance. Circuit-level techniques are employed to decrease the leakage current in the sensitive frontend node, to decrease the energy consumption of the ADC, and to efficiently control the DAC. The interface circuit is fabricated in 0.18 µm CMOS technology, and can operate at a 1.83 kS/s sampling rate at 32 pW worst-case power. The resistance measurement error across the whole dynamic range is less than 1.34% after calibration. The functionality of the full chemical sensor system has been demonstrated to validate the concepts introduced in this thesis.
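The bit budget quoted above checks out with back-of-the-envelope arithmetic: a base-resistance spread covering about ten octaves plus 7-bit resolution at each base resistance gives the 17-bit dynamic range, and the 10-bit ADC / 8-bit DAC split supplies 18 bits, leaving the stated one bit of margin. The resistance span below is an assumed round number consistent with that budget, not a figure from the thesis.

```python
import math

base_resistance_span = 1024          # assumed ~2**10 spread in CNT base resistance
resolution_bits = 7                  # per-base-resistance resolution (from the text)
dynamic_range_bits = math.log2(base_resistance_span) + resolution_bits
print(dynamic_range_bits)            # 17.0

adc_bits, dac_bits = 10, 8           # the split chosen in the thesis
print(adc_bits + dac_bits - dynamic_range_bits)  # 1.0 spare bit of margin
```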
Information technology adoption in hospitals : social networking, governance and the clockspeed of change
The healthcare industry is expanding swiftly and total healthcare expenditures are expected to reach 18% of GDP by 2008. However, there exist steep variances in quality of care and high incidences of medical error. This has given impetus to efforts at progressively evolving the healthcare delivery system. The role of information technology (IT) is seen as being central to cost reduction and quality improvement of healthcare delivery. Furthermore, efficiency gains are expected to yield approximately 20% in cost reductions. However, there are significant challenges associated with widespread adoption of IT by healthcare providers. Despite the existence of vendor and technology maturity, implementation rates for clinical patient record systems were only 35% in 2006. This study addresses the problem of low IT adoption in hospitals through a three-pronged analysis methodology. A clockspeed analysis has revealed a dichotomy between the maturity levels of technology and vendors on the one hand and delivery processes on the other. This has resulted in lower business value being realized from IT investments by healthcare providers.
Long-term contracts for new investments in power generation capacity : pain or gain?
In recent years, a debate has ensued regarding the role of long-term power purchase agreements for securing investments in power generation capacity in organized wholesale markets. This thesis illuminates the issues surrounding the debate and provides a framework for understanding the nature and use of long-term contracts. The main questions of interest for the formulation of a policy on long-term contracts are (1) whether parties encounter obstacles to their beneficial use and (2) in what situations policy should encourage or compel market entities to enter into such agreements. The analysis finds that long-term contracts do not appear to be "essential" for securing new investments in generation capacity. Long-term contracts are desirable in cases where the investments are highly specific to the relationship. Relationship-specificity is not a general feature of the industry; power producers or customers that find themselves in relationship-specific situations can identify the gain available through the use of specific assets and choose to use a long-term contract. However, this is a voluntary business decision and does not call for explicit policy guidance. Additionally, contracts are inherently "incomplete" and create the possibility of ex post regret and stranded costs. The electricity industry in the U.S. is familiar with the potential for significant economic distortions in the aftermath of the indiscriminate use of long-term contracts. Policymakers should avoid mandating their use, and be careful to disapprove contracts that present a significant risk of ex post regret. Obstacles appear to exist for the beneficial selective use of long-term contracts. Current rules in restructured markets preclude the consideration of long-term contracts for most types of generators and make it infeasible for the economy to benefit from relationship-specific circumstances. Policies should encourage or require the selective use of long-term contracts only in relationship-specific situations where identified non-zero sum gains are not being realized under dominant practices. In cases where these gains are not observable or are insignificant, the long-term contracts are more likely to cause pain than gain.
Spectral methods for circuit analysis
Harmonic balance (HB) methods are frequency-domain algorithms used for high accuracy computation of the periodic steady-state of circuits. Matrix-implicit Krylov-subspace techniques have made it possible for these methods to simulate large circuits more efficiently. However, the harmonic balance methods are not so efficient in computing steady-state solutions of strongly nonlinear circuits with rapid transitions. While the time-domain shooting-Newton methods can handle these problems, the low-order integration methods typically used with shooting-Newton methods are inefficient when high solution accuracy is required. We first examine possible enhancements to the standard state-of-the-art preconditioned matrix-implicit Krylov-subspace HB method. We formulate the BDF time-domain preconditioners and show that they can be quite effective for strongly nonlinear circuits, speeding up the HB runtimes by several times compared to using the frequency-domain block-diagonal preconditioner. Also, an approximate Galerkin HB formulation is derived, yielding a small improvement in accuracy over the standard pseudospectral HB formulation, and about a factor of 1.5 runtime speedup in runs reaching identical solution error. Next, we introduce and develop the Time-Mapped Harmonic Balance method (TMHB) as a fast Krylov-subspace spectral method that overcomes the inefficiency of standard harmonic balance for circuits with rapid transitions. TMHB features a non-uniform grid and a time-map function to resolve the sharp features in the signals. At the core of the TMHB method is the notion of pseudo Fourier approximations. The rapid transitions in the solution waveforms are well approximated with pseudo Fourier interpolants, whose building blocks are complex exponential basis functions with smoothly varying frequencies. The TMHB features a matrix-implicit Krylov-subspace solution approach of the same complexity as the standard harmonic balance method. As the TMHB solution is computed in a pseudo domain, we give a procedure for computing the real Fourier coefficients of the solution, and we also detail the construction of the time-map function. The convergence properties of TMHB are analyzed and demonstrated on analytic waveforms. The success of TMHB is critically dependent on the selection of a non-uniform grid. Two grid selection strategies, direct and iterative, are introduced and studied. Both strategies are a priori schemes, and are designed to obey accuracy and stability requirements. Practical issues associated with their use are also addressed. Results of applying the TMHB method on several circuit examples demonstrate that the TMHB method achieves up to five orders of magnitude improvement in accuracy compared to the standard harmonic balance method. The solution error in TMHB decays exponentially faster than the standard HB method when the size of the Fourier basis increases linearly. The TMHB method is also up to six times faster than the standard harmonic balance method in reaching identical solution accuracy, and uses up to five times less computer memory. The TMHB runtime speedup factor and storage savings favorably increase for stricter accuracy requirements, making TMHB well suited for high accuracy simulations of large strongly nonlinear circuits with rapid transitions.
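For readers unfamiliar with HB, the method can be stated schematically in standard textbook form (notation assumed, possibly differing from the thesis). For a circuit obeying d/dt q(v(t)) + i(v(t)) + u(t) = 0, HB seeks the truncated Fourier coefficients V of the periodic steady state and drives the frequency-domain residual to zero with Newton's method, each Newton step solved matrix-implicitly by a Krylov-subspace method:

```latex
% Standard harmonic balance residual (schematic; notation assumed)
F(V) \;=\; \Omega\,\Gamma\, q\!\left(\Gamma^{-1}V\right)
       \;+\; \Gamma\, i\!\left(\Gamma^{-1}V\right)
       \;+\; U \;=\; 0
```

Here Γ is the DFT, Γ⁻¹V evaluates the waveform at the time samples, Ω is the diagonal differentiation operator with entries jω_k, and U holds the source spectrum; TMHB replaces the uniform samples with a time-mapped non-uniform grid so the same machinery resolves sharp transitions.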
Sediment-water exchange of polycyclic aromatic hydrocarbons in the lower Hudson Estuary
Polyethylene devices (PEDs), which rely on the partitioning of hydrophobic organic contaminants (HOCs) between water and polyethylene, were shown to be useful for the measurement of dissolved HOCs like polycyclic aromatic hydrocarbons (PAHs) in natural waters. These PEDs allow for the measurement of the fugacity or "fleeing tendency" of such chemicals in water. These dissolved concentrations are of ecotoxicological concern as they reflect the HOC fraction that is driving uptake by the surrounding organisms. Because PEDs require on the order of days to equilibrate in the field, their use provides time-averaged measurements. Laboratory-measured polyethylene-water partition coefficients for two PAHs were: 17,000 ± 1000 (mol/LPE)/(mol/Lw) for phenanthrene and 89,000 ± 6000 (mol/LPE)/(mol/Lw) for pyrene. These organic polymer-water partition coefficients were found to be comparable to other organic solvent-water partitioning coefficients. These large coefficients allowed for the measurement of dissolved concentrations as low as 1 pg/L for benzo(a)pyrene and 400 pg/L for phenanthrene in the lower Hudson Estuary. Sampling performed in the lower Hudson Estuary during neap and spring tides revealed increased concentrations of dissolved pyrene and benzo(a)pyrene, but not phenanthrene, during increased sediment resuspension. These data suggest that resuspension events mostly influence the bed-to-water exchange of PAHs with greater hydrophobicities. PAH water concentrations predicted assuming dissolved and sorbed concentrations related via the product fom·Kom, where fom is the fraction of organic matter in the suspended sediments and Kom is the organic-matter-normalized solid-water partition coefficient for the PAH of concern, were far from observed concentrations. Adding the influence of soot to the partition model via Kd = fom·Kom + fsc·Ksc, where fsc is the weight fraction of soot carbon in the solid phase and Ksc is the soot carbon-water partition coefficient estimated from activated carbon data, yielded predicted concentrations that were much closer to the observed values. This finding suggests that soot plays an important role in controlling the cycling of PAHs in the aquatic environment. However, even when the soot partitioning of PAHs was included in the model, the predicted dissolved values were still larger than the measured values. This suggests that the time of particle resuspension is too short to allow for particle-water sorptive equilibrium. Using ratios of source indicative PAHs, it was estimated that 90% of the dissolved PAH fraction was derived from petrogenic sources. In contrast, the same source ratios for the total (dissolved and sorbed) PAH concentrations indicated that only 55% of the total were petrogenically-derived. The observations in this work suggest that efforts to regulate and remediate PAH-contaminated sediments must consider the potential impacts of soot associations of the PAHs.
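A worked instance of the two partition models in the text shows why including soot matters; all parameter values below are placeholders of plausible magnitude, not measurements from the thesis.

```python
# Kd = fom*Kom (organic matter only) vs. Kd = fom*Kom + fsc*Ksc (with soot)
fom, fsc = 0.04, 0.005            # organic-matter / soot-carbon weight fractions
Kom, Ksc = 10**4.8, 10**6.5       # pyrene-like partition coefficients, L/kg
c_sorbed = 5e-5                   # sorbed PAH on particles, g per kg solid

for label, Kd in [("OM only  ", fom * Kom),
                  ("OM + soot", fom * Kom + fsc * Ksc)]:
    print(f"{label}: Kd = {Kd:9.0f} L/kg, "
          f"predicted dissolved = {c_sorbed / Kd:.2e} g/L")
# the soot term raises Kd and lowers the predicted dissolved concentration,
# moving the prediction toward the observed values
```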
Creation of "place" by Mexicans and Mexican Americans in East Los Angeles
In this thesis I will examine how residents of East Los Angeles use their front yards and streets to create a sense of "place." The environment created in this way I call "enacted." People are both users and creators of a place and thus become "texture" in the urban landscape. People activate environments merely by their presence. People enact place because of use and social interactions, contrary to the belief of architects that people's lives neatly revolve around functions of physical form. There are many different approaches to understanding physical environments and social characteristics of people, but very few deal with "enacted environments." The sociologist examines people's behavior. The urban planner analyzes numbers. The anthropologist examines artifacts while movie directors and writers recreate the "feeling" of a place by combining people's lives with the physical form. All are excellent in understanding a specific dimension of a place. However, in comprehending the complexities of the enacted environment one needs to rely on all these disciplines. Those who "enacted" in East Los Angeles are Mexican and Mexican American. By the year 2010 it is estimated that the Latino population will be 40% of the total population of Southern California. By understanding the transformations of the physical form and social relations in Latino neighborhoods, I can develop a framework of what is taking place so that this thesis can serve as an aid to better understand the "Mexicanization" of space in the suburbs of Los Angeles, but this methodology can also be used in understanding other "enacted environments" in the urban landscape.
Design of a hydraulic bulge test apparatus
The various equi-biaxial tension tests for sheet metal were studied and compared to determine the most appropriate equipment for the Impact and Crashworthiness Laboratory, MIT, for the testing of Advanced High Strength Steel. The hydraulic bulge test was identified as the most economical solution. The equipment was designed to accommodate material strengths of up to 1000 MPa with plate thicknesses between 1.0 mm and 1.8 mm. The design process is explained in detail with focus on the challenges faced. The closed-form solution for the hydraulic bulge test was also derived. Two methods of deriving the stress-strain relationship in the material were proposed. The first method uses the optical measuring system to determine displacement and surface strain distribution. The second method uses geometrical approximations and dome height measurements. A new experimental technique and step-by-step procedure were also developed. Tests were successfully conducted using galvanized steel to demonstrate the effectiveness of the hydraulic bulge test apparatus in achieving the equi-biaxial stress state in sheet metal.
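For context, closed-form bulge analyses of this kind rest on the thin-walled membrane relations below (textbook forms with assumed notation, not necessarily the thesis's derivation): balanced-biaxial membrane stress from the pressure-curvature balance, and biaxial strain from thickness change via volume constancy.

```latex
% Membrane stress at the dome pole and equi-biaxial strain (standard relations)
\sigma_b \;=\; \frac{p\,\rho}{2\,t},
\qquad
\varepsilon_b \;=\; -\tfrac{1}{2}\,\varepsilon_t
            \;=\; -\tfrac{1}{2}\,\ln\!\frac{t}{t_0}
```

Here p is the hydraulic pressure, ρ the dome radius of curvature at the pole, and t, t₀ the current and initial sheet thicknesses; the dome-height method estimates ρ and t geometrically, while the optical method measures the strains directly.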
Case studies in protein folding and adaptation to time-varying fields
In this thesis, we use the methods of statistical physics to provide quantitative insights into the behavior of biological systems. In the first half of the thesis, we use equilibrium statistical physics to develop a phenomenological model of how the hydrophobic effect impacts the structure of proteins, and in the second half, we study the phenomenon of adaptation and Darwinian selection from the standpoint of nonequilibrium statistical physics. It has been known for a long time that the hydrophobic effect plays a major role in driving protein folding. However, it has been challenging to translate this understanding into a predictive, quantitative theory of how the full pattern of sequence hydrophobicity in a protein helps to determine its structure. Here, we develop and apply a phenomenological theory of the sequence-structure relationship in globular protein domains. In an effort to optimize parameters for the model, we first analyze the patterns of backbone burial found in single-domain crystal structures and discover that classic hydrophobicity scales derived from bulk physicochemical properties of amino acids are already nearly optimal for prediction of burial using the model. Subsequently, we apply the model to studying structural fluctuations in proteins and establish a means of identifying ligand-binding and protein-protein interaction sites using this approach. In the second half of the thesis, we undertake to address the question of adaptation from the standpoint of physics. Building on past fundamental results in nonequilibrium statistical mechanics, we demonstrate a generalization of the Helmholtz free energy for the finite-time stochastic evolution of driven Newtonian matter. By analyzing this expression, we show a general tendency in a broad class of driven many-particle systems toward self-organization into states formed through reliable absorption and dissipation of work energy from the surrounding environment. We demonstrate how this tendency plays out in the familiar example of Darwinian competition between two exponentially growing self-replicators. Subsequently, we illustrate the more general mechanism by which extra dissipation drives adaptation by analyzing the process of random hopping in driven energy landscapes.