990 F.2d 623 U.S. v. Hunter No. 92-2271 United States Court of Appeals, Second Circuit. Jan 21, 1993 1 Appeal From: W.D.N.Y. 2 AFFIRMED.
1. Introduction {#sec1-nutrients-11-02898} =============== Advancing age is characterized by a progressive decline in multiple physiological functions, leading to an increased vulnerability to stressors and an augmented risk of adverse outcomes \[[@B1-nutrients-11-02898],[@B2-nutrients-11-02898],[@B3-nutrients-11-02898]\]. During the aging process, several factors may affect body shape from both clinical and functional perspectives. Reduced senses of smell and taste, poor appetite (the so-called "anorexia of aging"), and decreased energy expenditure may all contribute to poor nutrition. Moreover, illnesses, medications, as well as poor oral health (for example, due to tooth loss and poorly fitting dentures) can exacerbate anorexia \[[@B4-nutrients-11-02898],[@B5-nutrients-11-02898],[@B6-nutrients-11-02898]\]. Nutritional status among older people may also be influenced by living or eating alone, poor financial status, reduced mobility, and decreased ability to shop or prepare meals \[[@B7-nutrients-11-02898],[@B8-nutrients-11-02898]\]. Psychosocial factors including loneliness, sleep disorders, dementia, and depression are also recognized to have a negative impact on the dietary intake of older subjects \[[@B9-nutrients-11-02898]\]. Furthermore, with aging, there is a progressive loss of muscle mass and strength, whereas fat mass and fat infiltration of muscle increase \[[@B10-nutrients-11-02898],[@B11-nutrients-11-02898]\]. Sarcopenia, a term introduced in 1988 by Irwin Rosenberg, indicates the pathologic reduction in muscle mass and strength leading to poor function \[[@B12-nutrients-11-02898],[@B13-nutrients-11-02898]\]. Interestingly, in recent years, it has been highlighted that sarcopenia is not limited to the lower limbs, but is a whole-body process \[[@B14-nutrients-11-02898],[@B15-nutrients-11-02898],[@B16-nutrients-11-02898]\], also affecting the muscles devoted to chewing and swallowing \[[@B10-nutrients-11-02898],[@B17-nutrients-11-02898]\], with a negative impact on food intake. In fact, atrophy of muscles critical for the respiratory and swallowing functions has been reported \[[@B14-nutrients-11-02898],[@B18-nutrients-11-02898],[@B19-nutrients-11-02898],[@B20-nutrients-11-02898],[@B21-nutrients-11-02898],[@B22-nutrients-11-02898]\]. The variety of dental problems experienced by older people can result in chewing difficulties that determine changes in food selection, thus leading to malnutrition and consequently to frailty \[[@B23-nutrients-11-02898]\] and sarcopenia \[[@B10-nutrients-11-02898],[@B23-nutrients-11-02898]\]. Poor oral status may also predispose one to chronic low-grade systemic inflammation through periodontal disease \[[@B24-nutrients-11-02898],[@B25-nutrients-11-02898]\], which has an increased prevalence in those who are not able to perform daily oral hygiene procedures \[[@B26-nutrients-11-02898]\], and is a well-known risk factor in the pathogenesis of frailty \[[@B27-nutrients-11-02898]\] and sarcopenia \[[@B28-nutrients-11-02898]\]. Furthermore, periodontal disease has been associated with a faster decline in handgrip strength \[[@B29-nutrients-11-02898]\], and recent studies have shown an association between chewing difficulties and frailty \[[@B24-nutrients-11-02898]\]. Therefore, a hypothetical oral status--nutrition--sarcopenia triangle, exposing the older person to the frailty disabling cascade, may be suggested, as seen in [Figure 1](#nutrients-11-02898-f001){ref-type="fig"}.
2. Oral Changes with Aging {#sec2-nutrients-11-02898} ========================== Poor oral health is not an inevitable part of aging, since good care throughout the life course can result in the maintenance of functional teeth later in life \[[@B24-nutrients-11-02898]\]. Throughout a lifetime, the oral cavity experiences a variety of physiological modifications, such as enamel changes, fracture lines, and stains, as well as dentin exposure and darkening of the tooth. At the same time, in the inner part of the tooth, several changes, such as the deposition of secondary dentin reducing the size of the pulp chamber and canals, may also occur \[[@B30-nutrients-11-02898]\]. Furthermore, in older people, tooth wear is frequently observed, affecting more than 85% of all the teeth groups in both the mandible and maxilla \[[@B31-nutrients-11-02898]\]. Additionally, a loss of elastic fibers in the connective tissue has been documented and, consequently, the oral mucosa becomes less resilient \[[@B32-nutrients-11-02898]\]. However, older people, especially those who are institutionalized or have limited financial resources, may experience problems accessing oral care. Furthermore, it has been documented that older people frequently have difficulty expressing complaints and assign low priority to oral health until dental problems become intolerable \[[@B33-nutrients-11-02898]\]. Oral problems among older people have been implicated in a high prevalence of tooth loss, dental caries, periodontal disease, xerostomia, and oral precancer/cancer lesions \[[@B34-nutrients-11-02898]\]. Periodontitis and dental caries are very common diseases, especially in older people, and are considered the main causes of tooth loss \[[@B35-nutrients-11-02898]\]. Around the age of 70, there is also a peak of root/cementum caries, as a result of both tooth retention and greater exposure of these surfaces following periodontal support loss. Moreover, older people are at higher risk of periodontitis since it is a cumulative disease, especially with regard to the multirooted teeth \[[@B36-nutrients-11-02898]\]. 2.1. Edentulism {#sec2dot1-nutrients-11-02898} --------------- Edentulism is a pathological condition characterized by multiple missing teeth; it can be partial or total. The etiology of tooth loss includes factors such as predisposition, diet, hormonal status, coexisting diseases, hygiene habits, and use of dental clinics. Additionally, edentulism may result from unsuccessful periodontal treatment or extensive carious lesions \[[@B37-nutrients-11-02898],[@B38-nutrients-11-02898]\]. Dental disease and tooth loss are not part of normal aging; when they occur, they are probably the result of neglected oral hygiene and/or inadequate treatment \[[@B39-nutrients-11-02898],[@B40-nutrients-11-02898]\]. Edentulism is exacerbated when masticatory function is not restored with dental prostheses \[[@B41-nutrients-11-02898]\]. Tooth loss affects the individual's ability to chew, determining an alteration of food choices \[[@B42-nutrients-11-02898]\]. Indeed, edentulous people are at greater risk of malnutrition than dentate or partially dentate individuals \[[@B43-nutrients-11-02898]\] and, consequently, have an increased susceptibility to sarcopenia and frailty \[[@B25-nutrients-11-02898]\]. Tooth loss is also a risk factor for disability, since it impedes self-sufficiency and worsens quality of life \[[@B42-nutrients-11-02898]\].
2.2. Dry Mouth {#sec2dot2-nutrients-11-02898} -------------- Saliva is pivotal for bolus formation and, consequently, is also related to the sensory and textural experience of eating. Xerostomia is a clinical condition characterized by an excessive sensation of dryness in the mouth, which is not necessarily linked to salivary gland hypofunction \[[@B30-nutrients-11-02898],[@B44-nutrients-11-02898]\]. Xerostomia is estimated to affect 25--50% of older individuals \[[@B45-nutrients-11-02898]\]. Etiologic factors include polypharmacy (especially with antihypertensives, antidepressants, and antipsychotics) \[[@B46-nutrients-11-02898]\], diseases, poor general health, female sex, and older age \[[@B47-nutrients-11-02898],[@B48-nutrients-11-02898]\]. Furthermore, radiation for head and neck cancers can damage salivary glands, leading to permanent xerostomia \[[@B49-nutrients-11-02898]\]. With aging, there is also a reduction in salivary flow that cannot be explained by medications alone \[[@B50-nutrients-11-02898]\]. In fact, salivary hypofunction and xerostomia are two distinct constructs that are frequently, and improperly, used interchangeably \[[@B33-nutrients-11-02898]\]. However, it has been reported that nearly one third of older adults complaining of xerostomia do not present any reduction in salivary flow or saliva secretion. This suggests that a psychological component may be involved when the symptom is reported \[[@B30-nutrients-11-02898]\]. Nonetheless, hyposalivation may seriously compromise chewing function and the early digestive process. A reduced quantity of saliva can, in fact, affect the preparation of the alimentary bolus and swallowing \[[@B51-nutrients-11-02898]\]. 2.3. Periodontal Disease {#sec2dot3-nutrients-11-02898} ------------------------ Periodontitis is described as a chronic inflammatory disease that affects the supporting tissues of the teeth, leading to progressive destruction of the periodontium \[[@B52-nutrients-11-02898]\]. It can also cause mobility and displacement of the remaining teeth and is often linked to difficulty in chewing. The prevalence of periodontal disease, considering a periodontal index score of 4 (deep pockets), ranges from approximately 5% to 70% among older people \[[@B53-nutrients-11-02898]\]. Periodontitis is a cumulative disease; therefore, it becomes increasingly severe as the person ages \[[@B30-nutrients-11-02898]\]. Poor oral hygiene is a critical determinant of periodontitis since it leads to the formation of dental plaque containing microorganisms \[[@B54-nutrients-11-02898]\]. Systemic risk factors for periodontal disease also include behaviors such as smoking, medical conditions (e.g., poorly controlled diabetes, obesity, stress, osteopenia), and inadequate dietary consumption of calcium and vitamin D \[[@B55-nutrients-11-02898]\]. Since periodontitis shares some characteristics with other systemic inflammatory diseases, a relationship between periodontitis and other inflammatory pathologies (e.g., diabetes, cardiovascular diseases, adverse pregnancy outcomes, and rheumatoid arthritis) has been proposed \[[@B56-nutrients-11-02898]\]. In recent years, the role of diet in periodontitis has been highlighted. To date, it has been documented that a diet poor in fruit and vegetables, and therefore in micronutrients, may lead to a greater inflammatory response of the periodontal tissues that support the tooth.
Interestingly, a recent systematic review of the relationship between dietary intake and periodontal health in community-dwelling older adults reported associations between periodontal disease and lower intakes of docosahexaenoic acid, vitamin C, vitamin E, β-carotene, milk, fermented dairy products, dietary fiber, fruits and vegetables, as well as a higher omega-6/omega-3 intake ratio and higher intakes of saturated fatty acids \[[@B57-nutrients-11-02898]\]. Additionally, micronutrient deficiencies can negatively affect healing following periodontal surgery \[[@B58-nutrients-11-02898]\]. At the same time, the loss of dental elements due to periodontitis can negatively affect the nutritional status of the patient, resulting in discomfort during chewing and leading to a selection of soft and easy-to-chew foods. 2.4. Dental Caries {#sec2dot4-nutrients-11-02898} ------------------ Dental caries is a multifactorial infectious disease characterized by the demineralization and destruction of the dental substance: enamel, in fact, is susceptible to acid dissolution over time. The pathological changes of the dental structure may have serious consequences, ultimately leading to the breakdown of the teeth themselves \[[@B59-nutrients-11-02898]\]. The prevalence of dental caries varies between 20% and 60% in community-dwelling older people and between 60% and 80% in care home settings \[[@B60-nutrients-11-02898],[@B61-nutrients-11-02898],[@B62-nutrients-11-02898],[@B63-nutrients-11-02898],[@B64-nutrients-11-02898]\]. Various conditions predisposing to dental caries have been reported, including carbohydrate (especially simple sugar) consumption, diabetes, and poor socioeconomic conditions \[[@B60-nutrients-11-02898],[@B65-nutrients-11-02898],[@B66-nutrients-11-02898],[@B67-nutrients-11-02898],[@B68-nutrients-11-02898]\]. With increasing age, people may experience physical and cognitive decline, which may result in poor oral hygiene, leading to an increased incidence of caries. Over time, small lesions that have already been filled may require a larger dental restoration, which can lead to tooth fracture or endodontic treatment \[[@B30-nutrients-11-02898]\]. Endodontic therapy (also known as root canal treatment) is a necessary procedure in the case of inflamed or infected dental pulp. It consists of removing the pulp from both the coronal and radicular parts of the tooth and replacing it with a permanent gutta-percha filling (a substance of vegetable origin similar to natural rubber). Xerostomia is closely related to a higher risk of developing caries, since loss of saliva may lead to increased acidity of the mouth. This leads to different situations that may contribute to the development of dental caries: the proliferation of bacteria, the loss of minerals from the tooth surfaces, and the loss of lubrication \[[@B69-nutrients-11-02898]\]. 2.5. Impact of Oral Health on Nutritional Status {#sec2dot5-nutrients-11-02898} ------------------------------------------------ Nutrition is a key modulator of health in older persons. Inadequate intake of nutrients is a well-known contributing factor in the progression of many diseases. This also has a significant impact on the complex etiology of sarcopenia and frailty \[[@B70-nutrients-11-02898],[@B71-nutrients-11-02898],[@B72-nutrients-11-02898]\]. Due to a decline in many functions, including poor oral status, dietary intake is often compromised in older people and the risk of malnutrition is increased.
In particular, acute and chronic illnesses, medications, and poor dentition can exacerbate anorexia \[[@B5-nutrients-11-02898],[@B70-nutrients-11-02898],[@B73-nutrients-11-02898]\]. Oral problems in older individuals are associated with modifications in food selection and, therefore, in nutrient intake \[[@B25-nutrients-11-02898]\]. Deterioration of oral health can ultimately lead to the development of chronic conditions such as diabetes \[[@B74-nutrients-11-02898]\] and cardiovascular problems \[[@B75-nutrients-11-02898],[@B76-nutrients-11-02898],[@B77-nutrients-11-02898]\]. Masticatory performance is affected by the number of teeth in functional occlusion \[[@B78-nutrients-11-02898],[@B79-nutrients-11-02898],[@B80-nutrients-11-02898]\], the maximal biting force \[[@B81-nutrients-11-02898],[@B82-nutrients-11-02898]\], denture wearing \[[@B83-nutrients-11-02898]\], and xerostomia \[[@B84-nutrients-11-02898]\]. Functional occlusion during mandibular closure is provided by the even and simultaneous contact of all remaining teeth (at least 20, with 10 contiguous teeth in each arch) \[[@B85-nutrients-11-02898]\]. Tooth loss has been implicated in the reduction of chewing ability and in difficulties in bolus formation \[[@B86-nutrients-11-02898]\]. To date, it has been reported that as the number of remaining teeth decreases, bolus size increases, leading to dysfunctional swallowing \[[@B87-nutrients-11-02898]\]. Edentulous individuals, even when using well-made dentures, may experience more chewing difficulties than dentate people \[[@B88-nutrients-11-02898]\]. Therefore, they may be considered the group most prone to changing their diet \[[@B89-nutrients-11-02898],[@B90-nutrients-11-02898]\]. Older people who experience dental problems frequently avoid harder foods such as meats, fruits, and vegetables, which are typically major sources of proteins, fiber, vitamins, and minerals \[[@B41-nutrients-11-02898],[@B88-nutrients-11-02898],[@B91-nutrients-11-02898]\]. The lack of these key nutrients may expose older individuals to an increased risk of malnutrition, frailty, and sarcopenia \[[@B24-nutrients-11-02898],[@B92-nutrients-11-02898]\]. In addition, it is well established that micronutrient deficiencies, even subtle ones, may lead to oxidative stress and consequently to inflammation. These processes can further exacerbate sarcopenia and frailty and are a clear risk factor for periodontitis. Nutritional deficiencies may also negatively affect the mineralization process, increasing susceptibility to dental caries \[[@B93-nutrients-11-02898]\]. Furthermore, undernutrition can exacerbate the severity of oral infections \[[@B94-nutrients-11-02898]\]. Indeed, with advancing age, people show a tendency to select soft foods due to chewing difficulty and fatigue \[[@B10-nutrients-11-02898],[@B95-nutrients-11-02898]\]. However, these are frequently processed foods that are high in fat and sugar and poor in vitamins and minerals, leading to fat deposition, oxidative stress, inflammation, and, consequently, an increased risk of cardiovascular disease and metabolic syndrome \[[@B88-nutrients-11-02898],[@B95-nutrients-11-02898],[@B96-nutrients-11-02898],[@B97-nutrients-11-02898]\]. In fact, it is well established that obesity leads to chronic low-grade inflammation, increasing the susceptibility to dental caries, periodontal disease, and tooth loss \[[@B98-nutrients-11-02898]\].
Excess energy is stored in adipocytes and leads to both hypertrophy and hyperplasia, resulting in abnormal adipocyte function. This may increase mitochondrial stress and alter endoplasmic reticulum function. Furthermore, adipocyte-associated inflammatory macrophages can also induce oxidative stress \[[@B99-nutrients-11-02898]\]. On the other hand, it is widely recognized that an excessive consumption of simple sugars is a major risk factor for dental caries \[[@B100-nutrients-11-02898],[@B101-nutrients-11-02898]\]. Large epidemiological studies, such as the UK National Diet and Nutrition Survey (NDNS) \[[@B102-nutrients-11-02898]\] and the US National Health and Nutrition Examination Surveys (NHANES) \[[@B103-nutrients-11-02898],[@B104-nutrients-11-02898]\], reported an association between poor dental status and inadequate dietary intake in older people. In particular, they reported that edentulous subjects, with and without prostheses, consumed fewer fruits and vegetables. Moreover, decreased protein and micronutrient intake, together with increased carbohydrate consumption, has been reported in people with fewer than 21 teeth \[[@B104-nutrients-11-02898]\]. 3. Sarcopenia and Oral Status {#sec3-nutrients-11-02898} ============================= Sarcopenia, defined as the progressive and accelerated loss of muscle mass and function, is a major determinant of several adverse outcomes including frailty, disability, and mortality \[[@B13-nutrients-11-02898],[@B105-nutrients-11-02898]\]. Although sarcopenia is a condition commonly observed with the aging process, it can also occur earlier in life \[[@B106-nutrients-11-02898]\]. Since 2016, sarcopenia has been recognized as an independent condition with an International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) diagnosis code \[[@B107-nutrients-11-02898]\]. Recently, the European Working Group on Sarcopenia in Older People (EWGSOP) \[[@B106-nutrients-11-02898]\] updated its consensus on definition and diagnosis (EWGSOP2). In this revised consensus, low muscle strength is considered a key characteristic of sarcopenia, and poor physical performance is identified as indicative of severe sarcopenia. Moreover, EWGSOP2 recommends specific cut-off points to identify and characterize the sarcopenic condition and provides an algorithm that can be used for case-finding. Sarcopenia has a complex multifactorial pathogenesis, which involves lifestyle habits (e.g., malnutrition, physical inactivity), disease triggers, and age-dependent biological changes (e.g., chronic inflammation, mitochondrial abnormalities, loss of neuromuscular junctions, reduced satellite cell numbers, hormonal alterations) \[[@B108-nutrients-11-02898],[@B109-nutrients-11-02898]\]. Sarcopenia is a whole-body process, affecting not only the lower extremities but also the muscles dedicated to breathing, mastication, and swallowing \[[@B14-nutrients-11-02898],[@B18-nutrients-11-02898],[@B19-nutrients-11-02898],[@B20-nutrients-11-02898],[@B21-nutrients-11-02898],[@B22-nutrients-11-02898]\]. In particular, swallowing is a complex mechanism involving several head and neck muscles that work simultaneously and in conjunction to coordinate the entire process \[[@B110-nutrients-11-02898]\].
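Before turning to the swallowing-specific changes discussed below, the EWGSOP2 case-finding logic referred to above can be summarized as a short decision sketch. The Python below is purely illustrative and is not a clinical tool; the numeric cut-offs are the commonly cited EWGSOP2 values for grip strength, appendicular skeletal muscle index, and gait speed, and should be verified against the consensus paper \[[@B106-nutrients-11-02898]\] before any use.

```python
def ewgsop2_classify(sex, grip_kg, asm_index_kg_m2, gait_speed_m_s):
    """Minimal sketch of the EWGSOP2 sequence:
    low strength -> probable sarcopenia;
    + low muscle quantity -> confirmed sarcopenia;
    + low physical performance -> severe sarcopenia.
    Cut-offs are the commonly cited EWGSOP2 values (verify against the consensus)."""
    low_strength = grip_kg < (27 if sex == "M" else 16)
    low_quantity = asm_index_kg_m2 < (7.0 if sex == "M" else 5.5)
    low_performance = gait_speed_m_s <= 0.8

    if not low_strength:
        return "no sarcopenia (by strength criterion)"
    if not low_quantity:
        return "probable sarcopenia (assess causes, confirm muscle quantity/quality)"
    return "severe sarcopenia" if low_performance else "confirmed sarcopenia"

# Example: an older man with grip 24 kg, ASM index 6.5 kg/m^2, gait speed 0.7 m/s
print(ewgsop2_classify("M", grip_kg=24, asm_index_kg_m2=6.5, gait_speed_m_s=0.7))
# -> "severe sarcopenia"
```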
Several age-related changes, such as reduced tissue elasticity, changes in head and neck anatomy, reduced oral and pharyngeal sensitivity, and impaired dental status, may contribute to varying degrees to a subtle swallowing impairment, the so-called "presbyphagia". It is usually an asymptomatic condition in which swallowing function is preserved but tends to slowly worsen as the aging process advances \[[@B16-nutrients-11-02898],[@B111-nutrients-11-02898]\]. Presbyphagia may increase the risk of dysphagia and aspiration in older people, especially during acute illnesses and other stressors \[[@B112-nutrients-11-02898]\]. Moreover, reductions in muscle mass of the geniohyoid, pterygoid, masseter, tongue, and pharyngeal muscles have been documented in older individuals \[[@B20-nutrients-11-02898],[@B113-nutrients-11-02898],[@B114-nutrients-11-02898],[@B115-nutrients-11-02898]\]. Several authors have also reported a decline in the strength of the swallowing muscles with aging or sarcopenia \[[@B116-nutrients-11-02898]\]. Maximal tongue strength decreases with aging \[[@B116-nutrients-11-02898],[@B117-nutrients-11-02898],[@B118-nutrients-11-02898],[@B119-nutrients-11-02898]\], and there is some evidence that aging leads to a decreased jaw-opening force in older men. Several authors have also reported an association between tongue strength and handgrip strength \[[@B120-nutrients-11-02898],[@B121-nutrients-11-02898]\]. A decrease in tongue strength has been associated with a decline in activities of daily living \[[@B122-nutrients-11-02898]\], and a reduced tongue thickness has been noted in people with low body weight \[[@B20-nutrients-11-02898]\]. Lip function is also important for feeding. In fact, poor lip muscle closure may cause leakage through the corners of the mouth \[[@B123-nutrients-11-02898]\]. Additionally, decreased lip strength has been suggested to occur due to sarcopenia and to be related to difficulties in eating and drinking (i.e., dysphagia) \[[@B117-nutrients-11-02898]\]. Lip force has been associated with handgrip strength, and lip pendency has been associated with aging \[[@B117-nutrients-11-02898],[@B124-nutrients-11-02898]\]. Indeed, since it has been shown that the decline in skeletal muscle mass and strength may affect both the swallowing and general muscle groups, a new term, "sarcopenic dysphagia", has been coined to describe this condition \[[@B22-nutrients-11-02898],[@B124-nutrients-11-02898],[@B125-nutrients-11-02898]\]. Swallowing muscles are characterized by a high percentage of type II fibers, which are more easily affected by malnutrition and sarcopenia than type I muscle fibers \[[@B22-nutrients-11-02898]\]. However, some cranial muscles, including the jaw-closers, are very different in fiber-type composition from other skeletal muscle groups (e.g., limbs or abdomen). For instance, the masseter muscle, which originates from the zygomatic arch, contains both type I and type II fibers, but shows a predominance of type I muscle fibers, which are more strongly affected by inactivity than by aging \[[@B126-nutrients-11-02898],[@B127-nutrients-11-02898]\]. Given that the meal texture of older people frequently becomes softer, less power is required from the tongue and masseter muscles, which may result in decreased activity of these muscles.
Interestingly, poor oral health may predispose one to a chronic low-grade inflammatory state through periodontal disease, which is a well-known risk factor for frailty and sarcopenia \[[@B25-nutrients-11-02898],[@B128-nutrients-11-02898],[@B129-nutrients-11-02898]\]. In fact, the detrimental effects of periodontitis are not confined solely to the oral cavity, but extend systemically, leading to metabolic alterations \[[@B130-nutrients-11-02898]\], including insulin resistance \[[@B131-nutrients-11-02898]\], diabetes \[[@B131-nutrients-11-02898],[@B132-nutrients-11-02898]\], arthritis \[[@B133-nutrients-11-02898]\], and heart disease \[[@B134-nutrients-11-02898]\]. Furthermore, alterations in mitochondrial function leading to oxidative stress through the production of reactive oxygen species (ROS) have also been reported to mediate both oral and systemic pathologies (i.e., sarcopenia) \[[@B108-nutrients-11-02898],[@B135-nutrients-11-02898],[@B136-nutrients-11-02898],[@B137-nutrients-11-02898]\]. Given their regulatory role as signaling molecules in autophagy, it has been speculated that elevated ROS production in periodontal disease could lead to autophagic alterations \[[@B138-nutrients-11-02898]\]. Bullon et al. \[[@B139-nutrients-11-02898]\] found high levels of mitochondrial-derived ROS, accompanied by mitochondrial dysfunction in peripheral blood mononuclear cells from patients with periodontitis. Moreover, oral gingiva seems to be highly responsive to the lipopolysaccharides (LPS), which are bacterial endotoxins prevalent in periodontal disease. In fact, gingival fibroblasts, which play an important role in remodeling periodontal soft tissues, may directly interact with LPS. In particular, LPS from *Porphyromonas gingivalis* enhances the production of inflammatory cytokines \[[@B140-nutrients-11-02898]\]. *Porphyromonas gingivalis* has been found to be responsible for high mitochondrial ROS and coenzyme Q10 levels, and for mitochondrial dysfunction, given its influence on the amount of respiratory chain complex I and III \[[@B138-nutrients-11-02898],[@B139-nutrients-11-02898]\]. Indeed, LPS-mediated mitochondrial dysfunction could explain the oxidative stress onset in patients with periodontitis. Furthermore, Hamalainen et al. \[[@B29-nutrients-11-02898]\] reported an association between periodontitis and quicker declines in handgrip strength. On the other hand, as discussed in the previous section, the variety of dental problems experienced by older people can lead to a decline in general health through poor nutrient intake, pain, and low quality of life \[[@B25-nutrients-11-02898]\]. Poor oral status has been reported to affect 71% of patients in rehabilitation settings \[[@B141-nutrients-11-02898]\] and 91% of people in acute-care hospitals \[[@B142-nutrients-11-02898]\], and has been associated with malnutrition, dysphagia, and reduced activities of daily living \[[@B17-nutrients-11-02898]\]. Hence, poor oral status may lead to sarcopenia through poor nutrient intake. Moreover, inflammation further contributes to malnutrition through various mechanisms, such as anorexia, decreased nutrient intake, altered metabolism (i.e., elevation of resting energy expenditure), and increased muscle catabolism \[[@B143-nutrients-11-02898]\]. Chronic inflammation is a common underlying factor, not only in the etiology of sarcopenia, but also for frailty. 
In fact, sarcopenia and frailty are closely related and show a remarkable overlap, especially in the physical function domain \[[@B144-nutrients-11-02898],[@B145-nutrients-11-02898],[@B146-nutrients-11-02898]\]. The presence of oral problems, alone or in combination with sarcopenia, may thus represent the biological substratum of the disabling cascade experienced by many frail individuals. 4. Interventions {#sec4-nutrients-11-02898} ================ The management of older people should be multimodal and multidisciplinary, especially for those with or at risk of malnutrition \[[@B147-nutrients-11-02898]\], in order to improve different conditions (e.g., oral problems and sarcopenia). From a practical point of view, comprehensive geriatric assessment (CGA) is the multidimensional, interdisciplinary diagnostic and therapeutic process aimed at determining the medical, psychological, and functional problems of older people. The CGA's objective is the development of a coordinated and integrated plan for treatment and follow-up in order to maximize overall health with aging \[[@B148-nutrients-11-02898]\]. To date, increasing evidence suggests that prosthodontic treatment in combination with personalized dietary counselling may improve the nutritional status of patients \[[@B51-nutrients-11-02898]\]. Here, we provide an overview on the management of oral problems, malnutrition, and sarcopenia. 4.1. Oral Management {#sec4dot1-nutrients-11-02898} -------------------- The stomatognathic system is very vulnerable over time, but with special care, it can be preserved throughout the lifetime \[[@B30-nutrients-11-02898]\]. Nevertheless, one of the major challenges in providing both restorative and preventive care for older adults is to check dental status on a regular basis \[[@B34-nutrients-11-02898]\]. Prevention is pivotal to detecting oral disease as soon as possible and requires regular patient contact. However, since it has been reported that older people frequently fail to achieve good oral hygiene, both patients and caregivers should be made more aware of the importance of checking dental status as well as oral hygiene. Oral health-care professionals should develop a personalized program in order to prevent the problems related to the aging process. In some cases, it is difficult to provide dental care in the hospital setting in a short time, since in many countries there are long waiting lists (especially in publicly funded hospitals) \[[@B149-nutrients-11-02898]\]. Therefore, private dentists also need better awareness concerning the complexity of older patients. There is, first and foremost, a need to understand the level of dependency, the medical condition, and the physical or cognitive impairment of the patient. Secondly, it is important to establish an oral healthcare plan that includes both professional and self-care elements \[[@B150-nutrients-11-02898]\]. The oral management of older people usually involves several aspects: (1) For teeth affected by carious lesions, prompt treatment should be provided in order to prevent tooth loss.
Endodontic treatment is equally appropriate for teeth with endodontic problems. (2) It is very important to monitor the periodontal status of the older patient and to provide a proper treatment plan, such as modification of general health-risk factors and oral health-specific risk factors; professional hygiene or surgical procedures may also be necessary. (3) Prosthetic rehabilitation of the edentulous patient may help to prevent malnutrition \[[@B151-nutrients-11-02898]\] since it restores chewing function. (4) In order to prevent problems related to xerostomia and reduce exacerbation of carious lesions, treatment with saliva substitutes may be helpful. 4.2. Nutritional Interventions {#sec4dot2-nutrients-11-02898} ------------------------------ As discussed above, nutrition is an important determinant of health in older people. It is therefore pivotal to provide adequate amounts of energy, protein, fluid, and micronutrients in order to prevent or treat excesses or deficiencies, and thus improve several health-related outcomes in terms of morbidity and mortality. A personalized approach is essential in order to respect individual preferences and needs and to increase compliance with the diet. Nutritional status should be assessed before each intervention, and the amounts of energy and protein should be individually adjusted with regard to nutritional status, physical activity level, disease status, and tolerance \[[@B152-nutrients-11-02898]\]. The European Society for Clinical Nutrition and Metabolism (ESPEN) \[[@B152-nutrients-11-02898]\], in its guidelines on clinical nutrition and hydration in geriatrics, recommends a guiding value for energy intake of 30 kcal/kg of body weight/day. However, as stated above, this should be adapted individually. Both ESPEN \[[@B153-nutrients-11-02898]\] and the PROT-AGE study group \[[@B147-nutrients-11-02898]\] recommend a protein intake of at least 1.0 g/kg body weight/day in older people to maintain muscle mass, increasing the intake up to 1.2--1.5 g/kg body weight/day in the presence of acute or chronic illness. Additionally, it seems that the per-meal anabolic threshold of protein intake is higher in older individuals (i.e., 25 to 30 g protein/meal, containing about 2.5 to 2.8 g leucine) than in young adults \[[@B147-nutrients-11-02898]\]. However, since older people may experience difficulty ingesting large amounts of protein in a single meal, supplementation should be considered. Since serum vitamin D levels decline gradually with aging \[[@B154-nutrients-11-02898],[@B155-nutrients-11-02898]\] and low levels have been associated with reduced muscle mass and strength, supplementation should be considered in those who are deficient. Food texture should be adapted to the chewing and swallowing condition in order to avoid choking risk \[[@B10-nutrients-11-02898]\]. Harder foods may be modified to soft consistencies (e.g., bite-sized, minced, pureed) requiring little chewing, and liquids may be thickened to render the swallowing process slower and safer \[[@B10-nutrients-11-02898],[@B156-nutrients-11-02898],[@B157-nutrients-11-02898]\]. Controlling the intake of simple sugars is pivotal to prevent both dental caries \[[@B101-nutrients-11-02898]\] and metabolic complications \[[@B158-nutrients-11-02898]\]. The World Health Organization recommends limiting the intake of free sugars to less than 10% of total energy intake to minimize the risk of dental caries \[[@B159-nutrients-11-02898]\].
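The guiding values cited above lend themselves to a simple worked example. The short Python sketch below is illustrative only: the function name and the 70 kg example patient are assumptions introduced here, not part of the ESPEN/PROT-AGE or WHO documents, and any real plan must be individualized as stated in the text.

```python
def daily_targets(weight_kg, acute_or_chronic_illness=False, energy_intake_kcal=None):
    """Illustrative daily targets for an older adult, based on the guiding values
    cited above: ~30 kcal/kg/day (ESPEN); 1.0-1.2 g protein/kg/day, raised to
    1.2-1.5 g/kg/day with acute or chronic illness (ESPEN/PROT-AGE); free sugars
    below 10% of total energy (WHO). Adjust individually in practice."""
    energy_kcal = 30 * weight_kg                       # ESPEN guiding value
    low, high = (1.2, 1.5) if acute_or_chronic_illness else (1.0, 1.2)
    protein_g_per_day = (low * weight_kg, high * weight_kg)
    per_meal_protein_g = (25, 30)                      # per-meal anabolic threshold cited above
    total_energy = energy_intake_kcal or energy_kcal   # 1 g sugar ~ 4 kcal
    free_sugar_max_g = 0.10 * total_energy / 4
    return {
        "energy_kcal_per_day": energy_kcal,
        "protein_g_per_day": protein_g_per_day,
        "protein_g_per_meal": per_meal_protein_g,
        "free_sugar_max_g_per_day": round(free_sugar_max_g, 1),
    }

# Example: a 70 kg older adult recovering from an acute illness
print(daily_targets(70, acute_or_chronic_illness=True))
# -> ~2100 kcal/day, 84-105 g protein/day (25-30 g per meal), <= ~52.5 g free sugars/day
```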
Fruit and vegetables are major sources of minerals and vitamins with antioxidant properties; therefore, their consumption should be promoted for both oral and general health. It has been documented that excessive antioxidant supplementation may compromise the mechanisms of adaptation to exercise and may even have pro-oxidant effects. Thus, supplementation in people who are not deficient should be approached with caution \[[@B160-nutrients-11-02898]\]. Dietary consumption of fatty fish (i.e., salmon, mackerel, herring, lake trout, sardines, albacore tuna, and their oils), which are a major source of omega-3 fatty acids, has been associated with greater fat-free mass \[[@B161-nutrients-11-02898]\]. Given their antioxidant role, omega-3 fatty acid supplementation has been suggested to improve inflammatory status in both periodontal disease \[[@B162-nutrients-11-02898]\] and sarcopenia \[[@B163-nutrients-11-02898]\]. However, more studies are needed to further elucidate the optimal timing and dosage of supplementation as well as its long-term effects \[[@B164-nutrients-11-02898]\]. Nevertheless, consumption of foods rich in omega-3, such as fatty fish, should be promoted. 4.3. Exercise and Rehabilitative Strategies {#sec4dot3-nutrients-11-02898} ------------------------------------------- Physical inactivity is considered one of the main causes of sarcopenia \[[@B165-nutrients-11-02898]\] because it induces resistance to muscle anabolic stimuli \[[@B166-nutrients-11-02898]\]. Moreover, it has been proposed that physically inactive individuals may have a greater risk of periodontal disease \[[@B167-nutrients-11-02898]\]. In particular, resistance training seems to be the most effective type of exercise to counteract sarcopenia \[[@B168-nutrients-11-02898]\]. Furthermore, since sarcopenia is a systemic process \[[@B15-nutrients-11-02898],[@B21-nutrients-11-02898]\], holistic training involving all muscle groups has been recommended \[[@B15-nutrients-11-02898]\]. In fact, it has been documented that both masticatory and swallowing functions can be improved through muscle-strengthening exercises \[[@B169-nutrients-11-02898],[@B170-nutrients-11-02898]\]. Several studies have reported enhancements in subjective chewing ability, swallowing function, salivation, relief of oral dryness, and oral health-related quality of life. Indeed, the synergistic effect of nutritional interventions coupled with physical exercise may improve both muscle \[[@B164-nutrients-11-02898]\] and oral health \[[@B167-nutrients-11-02898]\]. Recently, Kim et al. \[[@B171-nutrients-11-02898]\] reported an improvement in oral function following an exercise program that included stretching of the lips, tongue, and cheeks, masticatory muscle exercises, and swallowing movements. Several studies have focused on swallowing rehabilitation. To date, a positive effect of expiratory muscle resistance training has been documented in improving suprahyoid muscle activity \[[@B172-nutrients-11-02898],[@B173-nutrients-11-02898]\]. Furthermore, head lift exercises have shown a beneficial impact on swallowing movements \[[@B174-nutrients-11-02898],[@B175-nutrients-11-02898]\], and tongue strengthening exercises have been reported to enhance tongue strength \[[@B176-nutrients-11-02898],[@B177-nutrients-11-02898]\]. Yeates et al.
\[[@B178-nutrients-11-02898]\] demonstrated that isometric tongue strength exercises and tongue pressure accuracy tasks improved isometric tongue strength, tongue pressure generation accuracy, bolus control, and dietary intake by mouth. It has also been reported that tongue exercises prevented general sarcopenia \[[@B178-nutrients-11-02898],[@B179-nutrients-11-02898]\]. Indeed, swallowing muscle training, despite its focus on swallowing function, may exert beneficial effects systemically. 5. Conclusions {#sec5-nutrients-11-02898} ============== Aging is characterized by a progressive loss of physiological integrity, leading to a decline in many functions and increased vulnerability to stressors. Many changes in masticatory and swallowing function are subtle but can amplify disease processes seen with aging. Nevertheless, it is often difficult to clearly distinguish the effects of diseases from the underlying age-related modifications. Several stressors, including oral problems, may therefore negatively impact the increasingly weak homeostatic reserves of older individuals. Just as a healthy diet may have a systemic beneficial effect, oral care also plays an important role in maintaining and improving not only oral health, but also general health and well-being. Overall, severe tooth loss, as well as swallowing and masticatory problems, partly contribute to the restricted dietary choices and poor nutritional status of older adults, leading to frailty and sarcopenia. On the other hand, oral diseases might be influenced by both frailty and sarcopenia, probably through biological and environmental factors that are linked to the common burden of inflammation and oxidative stress. A multidisciplinary intervention involving dental professionals, geriatricians, nutritionists, and dietitians may help to provide better care and preserve the functional status of older people. Increasing evidence also suggests that oral care, when offered with personalized nutritional advice, may improve the nutritional status of patients. A life course approach to prevention starting at a younger age, including diet optimization and oral preventive care, as well as physical activity, may help in preserving both oral and muscle function later in life. D.A. and P.C.P. equally contributed to conceptualizing and writing the manuscript. P.D.A., G.B.P., A.D. and M.C. edited and revised the manuscript. D.A., P.C.P., P.D.A., G.B.P., A.D. and M.C. approved the final version of the manuscript. This research received no external funding. The authors declare no conflict of interest. ![Overview of the interplay between poor oral status, malnutrition, and sarcopenia. GI---gastrointestinal.](nutrients-11-02898-g001){#nutrients-11-02898-f001} [^1]: These authors contributed equally to this work.
Foster City, CA Precalculus Tutors These private tutors in Foster City, CA are brought to you by WyzAnt.com, the best place to find local tutors on the web. When you use WyzAnt, you can search for Foster City, CA tutors, review profiles and qualifications, run background checks, and arrange for home lessons. Click on any of the results below to see the tutor's full profile. Your first hour with any tutor is protected by our Good Fit Guarantee: You don't pay for tutoring unless you find a good fit. Subject ZIP Aaron K. ...Through high school and college I was a math tutor and have been tutoring roughly 20 hours each week as a hobby since August 2013. I have worked with at least 10 different students in all major mathematics classes starting with Algebra through Advanced Calculus. I have worked with many college ... Serina K. ...Most of my experience tutoring math has been in elementary math, algebra, algebra 2, geometry, trigonometry, and pre-calculus. As an engineer, I've studied many math topics and will help where needed. I can also tutor English. Jim L. ...I have experience in developing skills for grade school and high school students. I am raising two boys, one of whom suffers from ADHD and required extensive home coaching on study skills, I have worked with other students as a tutor. I am proficient in math and writing skills, and in helping to develop efficient and lasting skills such as outlining, estimating and mnemonic learning. Elizabeth W. ...Approach: As your tutor, my goals are to help you learn the material in an efficient and memorable way, and to develop strategies for tackling different types of problems. We can work with your materials (textbook, notes, homework) or I can provide you with notes, worksheets, and online resources. Background: I am a credentialed high school math teacher. Julie O. ...I enjoy helping students in general chemistry, biochemistry, AP chemistry, SAT II chemistry and any other school chemistry test preparation. I hold a B.S. in chemistry and M.S. in biochemistry. Using molecular cloning, I have also conducted research in biochemistry and neuroscience laboratories at UC Riverside and San Diego State University.
Tuesday, October 29, 2013 Lush Buche de Noel Facial Cleanser: Review Available for a limited time as a part of the annual Lush Holiday Collection, the beloved Buche de Noel Facial Cleanser ($12.95 CAD for 100g) is a unique solid face wash loaded with fresh ingredients that promise a glowing holiday complexion. To learn more about this product and read my review, keep reading below. Packaged in a recyclable black plastic tub containing 100g of product, the Buche de Noel Facial Cleanser is a solid face wash with a smooshed, oatmeal-like appearance. Although it doesn't look too pretty, the cleanser smells just lovely! Because the "log" is wrapped with a sheet of nori "bark", you get an immediate whiff of seaweed when you open up the jar. However, the actual cleanser has a yummy and sweet nutty fragrance thanks to the ground almonds that gently exfoliate the skin. Lush Buche de Noel Facial Cleanser Also formulated with kaolin, cocoa butter, almond oil, and a dash of cranberry bits for some festivity, Buche de Noel has a moist yet crumbly cookie dough texture. Although the product doesn't foam, it gets nice and creamy once mixed with water. The cleanser is kind of messy as bits of cranberry and almond invariably fall into the sink or onto the counter. However, it rinses off very nicely, leaving behind a clean and fresh feeling. Thanks to the ground almonds and bits of cranberry, Buche de Noel also doubles as a face scrub; it gently exfoliates while leaving the skin hydrated but non-greasy. Overall, the Buche de Noel is undoubtedly one of my favorite Lush holiday products: the formulation smells amazing, and the cleanser does a great job at gently cleansing and exfoliating the skin. If you haven't already given it a try, definitely pick one up when the Lush holiday collection rolls around! Note: This review was updated on September 9th, 2017 for clarity. Availability: Lush is available online and in stores at Lush boutiques. The Buche de Noel is a limited edition product that's only available during the holiday months.
1. Technical Field The present invention relates generally to semiconductor memories and more specifically to the control of wordline signals. 2. Background Art Microprocessors are used in many applications including personal computers and other electronic systems. A goal of any microprocessor is to process information quickly. One problem has been the communication rate between a microprocessor and main memory. The instructions to be executed by the microprocessor and the data on which operations implemented by the instructions are to be performed are stored at addresses within main memory. To access instructions and data, the microprocessor transmits addresses to main memory. The main memory decodes the address and makes the contents at the requested address available for reading and/or writing. The time required for the microprocessor to transmit an address to main memory and receive the respective contents therefrom can significantly constrain system performance. One technique used to increase the speed with which the microprocessor processes information is to provide the microprocessor with an architecture that includes a fast local memory called a cache memory. A cache memory is a small, fast memory that keeps copies of recently used data or instructions. When these items are reused, they can be accessed from the cache memory instead of main memory. Instead of operating at slower main memory access speeds, the microprocessor can operate at faster cache memory access speeds most of the time. In order to further increase performance, microprocessors have come to include more than one cache memory on the same semiconductor substrate as the microprocessor. The most commonly used cache memories use static random access memory (SRAM) circuitry, which provides high densities using wordlines and bitlines to access SRAM memory cells. However, in order to place as much memory on the microprocessor die as possible, SRAM circuitry requires minimal cell and read/write circuit architectures. To support minimal architectures, a memory cell is accessed by enabling a row wordline wire and enabling a selected column-gating transistor to read the value from the memory cell. The use of memory circuits in battery-operated and other low-voltage devices makes it desirable to operate the memory circuits at the lowest voltage possible. Typically, when read or write operations are done in memory arrays, the wordline is set high with the power applied while the information stored in the memory cells is read by being transferred onto bitlines or information on the bitlines is written by being stored in the memory cells. For read operations, bitlines are then read by a sense-amplifier, or sense-amp. Sense-amps are common to all memories whether the memories are dynamic, static, Flash, or other types of memories. For write operations, information on the bitlines changes the held charge in the memory cell. While the wordline is kept on, power is being consumed. The wordline remains on during and after the desired operation, whether it is a read or a write, to ensure the operation is complete; i.e., power is consumed even when no longer required. Reading reliable results from memory circuits operating at a low-power supply voltage is complicated by the large capacitance of the wordlines and the threshold drop produced by the gating transistor. Low-power supply voltages reduce memory speed, and at very low voltages, the reliability of the information drops.
To address the reliability problem, memory circuits, which have a bootstrapped boost voltage applied to the wordlines, have been developed. The row wordline is charged to a voltage that is higher than the power supply line. In addition, the row wordline is charged prior to accessing the memory location by switching on the column-gating transistor. Boost circuits provide reliable memory operation at low voltages. One of the problems with boost circuits is that at high voltages, the access circuitry is over-stressed. This limits the upper end of the power supply operating range of a memory device. Another problem is that boosting increases the power consumption of a memory circuit. At high supply voltages, the power dissipation can exceed tolerable levels and the memory circuitry is subject to failures due to overheating. Power saving has been a persistent need. Because low-power consumption is becoming even more important, it is desirable to provide a method and apparatus for operating a memory device in a manner that saves power. Furthermore, it is desirable to achieve reliable read and write operations at low voltages. With the urgency of increasing speed and saving power, solutions to these problems have been long sought but have long eluded those skilled in the art. The present invention provides a memory system, and method of operation therefor, having memory cells for containing data, bitlines for writing data in and reading data from the memory cells, and wordlines connected to the memory cells for causing the bitlines to write data in the memory cells in response to wordline signals. A decoder is connected to the wordlines for receiving and decoding address information in response to a clock signal and an address signal to select a wordline for a write to a memory cell. Latch circuitry is connected to the decoder and the wordlines. The latch circuitry is responsive to the clock signal for providing the wordline signal to the selected wordline for the write to the memory cell and for removing the wordline signal from the selected wordline when the write to the memory cell is complete. The memory system conserves power while permitting reliable read and write operations at low voltages. Certain embodiments of the invention have other advantages in addition to or in place of those mentioned above. The advantages will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.
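The power-saving idea described above — assert the selected wordline when the clocked decoder picks a row and remove it as soon as the write is known to be complete, rather than holding it high for the remainder of the access — can be illustrated with a rough, cycle-level behavioral sketch. The Python below is not the patented circuit; the class and signal names are invented for the example, and write completion is modeled as a fixed delay rather than a real completion detector.

```python
class WordlinePulseControl:
    """Behavioral sketch of a self-timed wordline: a decoder selects a row on the
    clock edge, latch circuitry drives that wordline high, and a write-complete
    indication (modeled here as a fixed cycle delay) clears it early to save power.
    Illustrative only; names and timing are assumptions, not the patented design."""

    def __init__(self, num_rows, write_complete_delay=2):
        self.num_rows = num_rows
        self.write_complete_delay = write_complete_delay  # cycles until the write is deemed complete
        self.wordlines = [0] * num_rows                   # 1 = asserted
        self._deassert_at = {}                            # row -> cycle at which to clear it

    def clock_edge(self, cycle, address=None, write_enable=False):
        # Latch circuitry removes the wordline once the write-complete condition is met,
        # instead of keeping it high for the rest of the access.
        for row, when in list(self._deassert_at.items()):
            if cycle >= when:
                self.wordlines[row] = 0
                del self._deassert_at[row]
        # Decoder: select and assert a wordline for a new write access.
        if write_enable and address is not None:
            row = address % self.num_rows
            self.wordlines[row] = 1
            self._deassert_at[row] = cycle + self.write_complete_delay
        return list(self.wordlines)

# Example: a write to row 3 at cycle 0; the wordline is dropped again two cycles later.
ctrl = WordlinePulseControl(num_rows=8)
for c in range(4):
    print(c, ctrl.clock_edge(c, address=3 if c == 0 else None, write_enable=(c == 0)))
```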
[NOTE: The following article is a press release issued by the aforementioned network and/or company. Any errors, typos, etc. are attributed to the original author. The release is reproduced solely for the dissemination of the enclosed information.] SATURDAY, JUNE 23 [EDITOR'S NOTE 1: Local programming will air on the West Coast, following FOX's MLB coverage.] WALTER SEARCHES FOR THE SOURCE OF ILLEGALLY RELEASED MUSIC ON AN "THE FINDER" SATURDAY, JUNE 23, ON FOX Curtis "50 Cent" Jackson Guest-Stars David Boreanaz (BONES) Directs Episode When new tracks are leaked from a late rapper, music mogul Big Glade (guest star Curtis "50 Cent" Jackson) and his business savvy lawyer (guest star Salli Richardson-Whitfield) ask Walter to find the source. Walter and Leo locate the DJ streaming the tracks with the help of Willa's online hacking expertise, but as Walter pieces together the events that led to the rapper's death, he uncovers a more complicated history. Meanwhile, Big Glade's lawyer and Leo rekindle their romance in "Life After Death" episode of THE FINDER airing Saturday, June 23 (11:00-Midnight ET/PT) on FOX. (FIN-108) (TV-14 D, L, V)
Des Traynor, Co-Founder of Intercom Everything You Know About Content Marketing is Wrong, with Des Traynor of Intercom In 2011, four lads from Dublin were running a successful business that let programers and engineers know when a user encountered a problem with their program. The problem was that none of them were particularly interested in the world of programming errors. Instead, they found their passions centered on why it was so difficult for online businesses to talk to customers. They didn’t know it at the time, but they were about to reinvent the concept of content marketing. So Des Traynor and his three co-founders sold their successful business, packed their bags, and moved to sunny California. “We were four Irish founders and basically our previous company, we had already done the bootstrapping thing. … When we were going through this change of business and this change of approach, we said, ‘What’s the opposite of running a bootstrapped business off the north side of Dublin?’ Well that’s come to Silicon Valley and raise a million dollars, and that’s what we did,” Traynor says. It turned out to be the right move, as the company that now known as Intercom raised more than $160 million in the past six years, building a customer base of over 17,000 customers, and making over $50 million in revenue. Their mission was simple: to make online businesses feel less like talking to a robot and feel more personal instead. The solution to that was to help businesses talk to their customers through their own websites and apps instead of the usual mish-mash of emails, texts, and phone calls. Intercom built its reputation and customer base through the power of content marketing, but in a way that might surprise you. Instead of following the traditional strategy of hiring a content team, focusing on SEO and backlinks, and churning out at as much content as possible, Intercom went in the completely opposite direction and developed a unique content strategy that led their business to go viral within the startup community, while building a beloved brand. “We’re not one of those people that do all that black hat stuff. I really, really hate that. We had a recommendation recently to go post on discussions.apple.com and write a piece that links back to your site, and it was just so puke-worthy. I could never get excited about gamifying the Google algorithm and building the business on such a messy, fragile house of cards,” Traynor says. Traynor goes in-depth with us in this episode about why the conventional content marketing strategy doesn’t work anymore, and how to really get your message across. Key Takeaways How to move quickly and stay lean while managing an international team Where to find top-tier talent for your startup, no matter where you are A sly way to make your business go viral No to SEO! The biggest mistakes marketers make when using SEO Why you don’t need a content marketing team to get half a million page views per post Full Transcript of the Podcast with Des Traynor Nathan: Hello and welcome to another episode of the Foundr Podcast. My name is Nathan Chan and I’m the CEO and host of this show. Now if you’re a new listener we interview extremely hard to reach founders that are either number one or two in the industry with the company that they’ve started and they’re disrupting a marketplace that they serve. And yeah, we’ve interviewed some of the greatest entrepreneurs of our generation on this show. 
And we also have a magazine and do a ton of other content around entrepreneurship and startups. So if you’re new listener welcome. If you’re an existing listener, I want to say thank you so much for taking the time to listen and as always we have an absolute treat. This interview is with a really, really, smart founder Irish fellow actually, we don’t really interview many people from Islands. There’s really transparent vulnerable guy. I love talking with him and he’s the founder a company called Intercom. We’re big fans of intercom. We’re a customer and we use their tool for all sorts of really, really, cool things around user on-boarding, but then also just speaking to visitors on our website. Okay, a really cool story you might find this interesting. So when we had a sale for one of our new courses that we launched with an instructor Thoth Greta what we did is we put Intercom on the sales page and a lot of people would come on the sales page and they get an automated pop-up it’s because it’s like a chat software and what would happen is we would say, “Hey let us know if you have any questions.” On automation and then like a lot of people are asking us all these questions on automation. And then like is there a guarantee on this product? When does it start? Is it a live class? Is a pre-recorded? How does it work? Can you tell us more about Greta’s businesses? And all of these questions that people were asking that was like gold feedback we’re like okay we need to put this into the sales page to communicate exactly what you get. So that’s just one of many ways we’ve used Intercom to grow our business and its really interesting to hear Des’s thoughts on email messaging because this is a new thing that’s happening, like if we’re recording this early July 2017, who knows what it’s going to be like in a couple of years, but messaging is becoming a massive thing with you know app messenger chat BOTS, and all these other things. So these guys are on the cutting edge and one way they grow their business is through content marketing. And they’re quite masterful at it you’re gonna learn a lot about it. It’s a buzzword that’s thrown around a lot and these guys do a very, very, good job we talk about all sorts of things as well around growing, scaling, hiring,challenges around that, leadership, you name it. So that’s it from me. If you are enjoying these episodes please do make sure you take the time to check out any of our other content just go to foundr.com F-O-U-N-D-R.com. We’ve got a ton of awesome content to help serve you. And if you are enjoying these interviews as well please do take the time to leave this review on Spotify, iTunes, Stitcher, SoundCloud, wherever you’re listening. All right guys that’s it from me. Now let’s jump into the show. All right so the first question that I ask everyone that comes on is, how did you get your job? Des: So I guess this is like a multi-part answer, but my immediate job I got by starting a company called Intercom and but maybe a more useful thing to do is maybe dial it back a little bit. So after college I attempted a PhD which was focused on teaching people how to teach computer science better. And I got bored with academia after I guess two and a half three years. And I dropped it to become a usability analyst out of consultancy. And I got bored of that after a year. So I quit that and started a consultancy with own, who is to CEO of Intercom now. 
One of the things we did while running that consultancy was we built our own side business called exceptional which is an error tracker for developers. And over time we realized it was a much bigger problem we had with our business exceptional that we cared about a lot more and that was we were totally out of touch with our customers and was really really hard to communicate with them. So we sell Exceptional and started building a solution to that problem which went on to become Intercom. Nathan: Gotcha. Now, you guys are everywhere now I heard about Intercom probably about three years ago and yeah you guys have really, really studied it to a massive traction like most startups use Intercom you see that icon somewhere in the corner whether its front end or you know using in a SAS product. It’s everywhere so can you talk to us about the early days you said that exceptional was was sold or you guys acquired. Did you like was with us all to a big company. I don’t come from a development background so yeah I’d love to love to hear a little bit more about that before we move and delve into the Intercom in the background. Des: Yes, certainly so Exceptional it let programmers or engineers know when a user had encountered an error in their product basically. And we had like thousands of customers it was genuinely a successful piece of work for anyone’s standards. It was just sadly the case that neither me nor our own were particularly passionate about programming errors effectively and we were quite passionate about this bigger a problem which was why is it so hard to see who our customers aren’t so hard to talk to them? So we got talking to a lot of different people at a few different events about kind of we had started building a intercom actually inside of Exceptional originally as a way to just send messages to our customers and see who is using our product. But we were just so much more passionate about that challenge so ultimately exceptional went on to become a part of Rackspace which is public company but it wasn’t sold directly then we sold it to a person who packaged it with a few other tools and sold it on. And you know, what that gave us was basically enough freedom to work on this at the time untitled problem which was basically there are thousands of SAS or like software businesses out there with like lots and lots users and it is so hard to see who is using my product today. And I really mean who they’re as in what customers so at the time like Google Analytics was very popular but that just told you only page views he had inside your part which wasn’t really a useful stuff from a business point of view. And we really cared about who was using our product and what they were doing and we look to talking to them about what they were doing went inside the product. So basically that became the origin of Intercom we started off with this idea of like basically being able to push messages inside your product and then we extended it from there to letting users reply to letting users start conversations to showing the the business owner here’s who’s active today here’s who has been active this month. And from there like we kind of came up with this thesis which is that like all businesses will become internet businesses but the experience when you move from like bricks and mortar to online is quite impersonal and it’s you know, it’s very like… that it’s very a ticket based and dear valued customer and all that shit that people hate. 
So we we really wanted to go against that so we sort of said our mission is to make internet business personal, and we wanna create a world where if you use a product a lot or if you frequently cite a lot whether it’s like a Shopify store or order it’s a publications such as founder if you go there a lot your recognizes viable customer you’re understood to be a good person people won’t try and sell you shit you’ve already bought till you know it they won’t call you a ticket number they’ll just engage you in conversation and as for business owners you’ll know who you should engage with in conversations. So our mission really is to try to just personalize a lot of conversations that businesses and customers have and that’s where we started working I think it was like 2011 when we started you’re saying our last three years. It’s been it’s you know being getting traction it’s fun like it’s it took it took us I guess three years to become that sort of overnight success in some sense but you know, it’s being quite a journey like we just released it on the start to better business trees recently we now have like 17,000 customers we’ve 100,000 monthly active users we’re… definitely hit some sort of like you know a tipping point where people finally understand we’re out there we as we said recently we’re about $50 million in revenue at this point and if people care about revenue. It has been quite a journey but it started with a kind of a very a deceptively a simple sounding but deceptively hard challenge which was how can we make talking to online businesses easy,simple,personal for one. Nathan: I see. So it sounds like you have an accent. I can see you guys are based out of San Fran where were you from? Des: Ireland, Dublin. Nathan: Gotcha and same with your co-founder? Des: Yeah, yeah so we Intercom we started it and I like on the mean streets of Dublin if you like it was like we’re four Irish founders and I’m basically we had our previous company we had done the whole bootstrap thing we like kind of ran it in you know in the remote streets of Dublin. I often like to joke although it’s actually just a fact we do is there was a single street in San Francisco where we had more customers than we did in all of Ireland at the time. So you know, we figured when we were going through this sort of change that of business and change of approach, what’s the opposite of running a bootstrap business in the north side of Dublin? Well let’s come to Silicon Valley and raise a million dollars so that’s basically what we did. Nathan: Gotcha. So now you’re fully migrated and you living in San Fran? Des: I live in both locations I have a place in both cities we like… the other four founders our CEO is here in SF I’m here most of time in SF and then we have so we actually have two offices from the very start today we’ve three offices but we started with a San Francisco office and the Dublin office and today we employ maybe I guess like 320 people or so across all three offices. Nathan: Gotcha. And core team where’s mainly core team focused how do you manage that because we’re we’re based out of Melbourne we’re gonna need to set up an office in the US next year and I’m really curious, how do you structure that do you structure core team in San Fran? still Dublin? Like can you talk talk to you about that? 
Des: You know, the big sort of thing we got right I mean it’s a challenge to split and to risk bifurcating your company and bifurcating your culture but like the same we got right was like functional divisions where there are geographical divisions so all of our Oren Diaz in Dublin the product is basically built and maintained in Dublin. And we have a sales presence and a support presence there now as well but like the product is primarily Dublin and then all of sales and go to market in general marketing etc. is primarily in San Francisco and that just kind of reduces the amount of like difficult transatlantic collaboration that order eyes will be necessary so it kind of impairs the leaders to make local decisions and a new fast effectively would happen to wait for another office to wake up before they can progress. Nathan: That’s interesting. So you know for for like you know a startup that is is right it was founded and you’re starting to build the team core team out of you know whatever that city and it’s not in the US what’s not in you know, one of these startup kinda clusters is that what you always recommend to do if you can? Des: So it’s hard like I mean I… I’m tempted to say yes, I think was granite I given that it worked out but like I can’t help but feel it would have been easier like even today I kind of a feel it would be easier if everyone was in the same place and that place up and be the best place to build software. But reality comes out too fast you know, so I think like in our case it wasn’t you know for a lot of reasons it wasn’t gonna be easy for us to entirely up the entire team and move to San Francisco to start with so to some degree it was driven by a necessity more than it was any sort of tactical or strategic sort of focus in general.I would advise people to follow that similar line of logic which is like for example a four-year folks you’ll struggle to move the entire office to San Francisco because visasm will come out to the cost of living will be tricky etc. So I think do what you can do to add that optimizes best sort of strategy for you for us our CEO was always gonna need to be here because we wanted to do like we wanted to raise money want to raise real venture capital not to sort of make EOP stuff that maybe exists in non-startup pubs and that meant being here in amongst it you know, just like actors go to Hollywood and finance people go to Wall Street like startups come to San Francisco. Nathan: Yeah, you know that’s that’s a really good point so I’m really curious as well around talent. You said products built mainly out of Ireland. You guys have access to enough programming and and engineering software engineering talents to to do that because it’s a great product like we’re a customer we’re a very, very big fan you know we do a lot of cool stuff with Intercom. So yeah, that’s that’s something that I was really curious about. Des: Yeah, and like this sort of smartass answer is like you kinda answered your own question I you know purely to product if you like to product how much obviously we find the people for that but what I would say is them… like it’s kind of a multi-part answer what I’d say is like you probably can’t name a single public software company that doesn’t have a significant footprint in Dublin and that means that there are clearly a lot of Engineers for Google Microsoft Facebook Airbnbstripe slack you name it they’re all office all have offices in Dublin. 
So there is talent there like what are things that benefit us specifically like University is a third level education aka University is free in Ireland which means you tend to have a good degree a good amount of like of engineers coming off you know currently leaving universities looking for good places to work. We are definitely one of the more prominent startups in the city so we you know that kind of works in our favor too, but there’s also this big other thing called Europe beside us which which we can also draw on them and does definitely you know a large degree of talent there as long as we can motivate them to move to the beautiful weather and sunshine drenched to Dublin that we have to offer. Nathan: Yeah, gotcha.That’s interesting because here sometimes I I find it same with Melbourne like I think there is good talent here and and you know sometimes people always ask me you know can it be you know can you build a solar team is there talent here and and do you think that you know where if you’re not in San Fran you still can build and and you kind of you know, eventually build something of true worth like you guys are building with Intercom even if you’re not based out of San Fran to start. Des: Yeah, you know I I absolutely believe that and like and you know I think that there are so many examples that are of great software companies that didn’t originate and maybe in some cases still aren’t in San Francisco. Even like looking specifically at Australia you could argue you could argue campaign monitor looking around America you can see like you know Qualtrics, SurveyMonke, MailChimp. None of those are Silicon Valley companies and they’re all multi-billion dollar companies you know. And even like going download I would like to startup scene even somewhere I was like somewhere like if I was to say startups in Melbourne and surely at that point it gets challenging but it actually doesn’t like we have plenty of customers there and some really really good product companies coming out of the place. So l 100% believe like you know, you can start a company anywhere I do think the challenge you run into is when you want to kind of take it take the next step and this is not necessarily product but like what specifically on sales and marketing side we’re like we might struggle in Melbourne is to find somebody who’s an expert at product marketing for a SAS business selling to developers there’s maybe eight of them in Melbourne and they probably already have good jobs. And that’s kind of the challenge that you have whereas there’s maybe like 850 of them in San Francisco. And at least 10% of them are looking you know. So I think like in general I think you can build a product in a lot of places for sure there’s more engineers and designers etc., right here now that’s kind of a double-edged sword and that you like you know it was there’s a genuine question around like employee tenure and the value is different I think than it is everywhere else. I think people might like you know, Europe at the very least you’re trying to cling on to your employees it’s harder when there’s like every other cool startup is next door also looking for them that can be a challenge. But I really feel on the sales and marketing side that’s where the skills haven’t kinda distributed evenly so… you know, again a simple example but I take it come to that campaign monitored rate they never have an office over here you know. 
It’s I think when you want to take sales to the next level take marketing to the next level that’s when you might find your hometown lacking. Nathan: Interesting. So you guys went through 500 stops, right? Des: We were part of that batch but we didn’t go shoot our program in the traditional sense we weren’t based in her office we weren’t attending to series or anything of that. I thought but yeah we did take their money early on. Nathan: Yes, got you. And are you guys Bay able to share if you guys are profitable? Des: We have no comment on that. Nathan: Yep that’s cool. That’s no stress at all. And tell me about how you guys are fueling growth like I can see definitely for sure one of your biggest you I guess natural inhibitors of growth is just when somebody sees that little icon on the bottom right hand corner or wherever it is and you know powered by Intercom that that must be massive. Because I see that everywhere that’s how I found out about you guys, but tell me else so talk to me about like other things that you guys are doing to to build you know that your SAS company and grow it. Des: Sure. So what you just referred to there it’s kind of what we call it powered by Intercom or we run on the Intercom I think is the current text. And that’s kind of like our sort of semi-viral element where basically you will see us everywhere which is awesome. It’s definitely good for like extending and sort of spreading the brand and it is a leaper of growth first but a lot of our growth comes in. Like in the early days I guess you know when we were starting out, a lot of what we tried to do was just we knew we had a product that we could sell to startups so all we wanted to do was produce content that startup folk would want to read and if that content happened to like point them towards, “Hey maybe it’d be good if you talked to your users after they log in.” Then we might just occasionally trow in an occasional screenshot of Intercom or maybe link up a sign-up page. And that probably got us our first like what fair few hundred customers. And to this day like you know we’re now look over 500 posts on our blog and we’ve kind of you know I for sure I wrote the first like 90 out of them but like today we have a whole content team and that’s been a significant leaper of growth for us as well. And the nice thing about like that content marketing which I hate that phrase because it really isn’t what we’re doing but it is genuinely a leaper of growth. The nice thing about it is it pays off longitudinally like there are literally articles I wrote in 2011 that still produce customers for us today. And you get that like a sort of long term you know if we stopped publishing today the wrong momentum of the blog coasts really well first and it is genuinely a significant part of our growth as the businesses matured as we raised you know $116 million of capital and all that sort of stuff. We’ve definitely added in some of the more traditional stuff we do actually advertising places now. And we sponsor the occasional blogger podcaster we particularly liked. And so like today I’d say our gross is a blend of like of like the viral stuff you talked about content marketing the raw quality of the product helps a lot like the the sort of effusive way you spoke about it. That generally tends to be how people speak about Intercom which means be really strong word of mouth which when you factor in things like Twitter word of mouth becomes super powerful when it’s when it’s positively rooted in the strength of the product. 
But yes, I think that that’s what kicked us off and then you know for any of the more traditional stuff like advertising sponsorship re-targeting etc. We do all that as well but like it’s that’s probably our less… our last like you know hot tips that have style stuff Des: No, we’re not really to be honest with you. I don’t like the I’m just to be clear on the inbound piece like we are 100% inbound so we aren’t we only customers only come to us to buy the product we don’t we don’t like cold call people or anything like that. So I think on the SEO question like we we try to not be dumb when we would speak about our own product if we’re talking about how to acquire customers we will link up our product which helps you acquire customers. However, we’re not what these people who like do all the black hat shit I really, really hate that like we had a recommendation recently oh you should go and post on discussions apple.com and write a piece that like links back to your site. And it’s just so cute wordy I could never I could never get excited about gamifying the Google algorithm I’m building a business on such a messy fragile house of cards that just damages the web. I think like you know, I think doing those sort of things it’s like it’s like taking weight-loss pills instead of going to the gym like it’s just not a good idea like you know, I’d rather grow Intercom on the quality of the product the quality of the brand the quality of the marketing. Not like knowing about one little weight it’s a Google SEO algorithm will over a reward a punitive back link or whatever I just don’t care. Nathan: That’s fair enough but do you guys have a strong focus on links and and link building and stuff like that, is that right? Des: It depends on what you mean like we don’t have we have anyone out there trying to create links back to our site on the web. The focus we really have is like if you know, there was a long period of time where we would be talking about a product I’m genuinely forget to link it up and so I like to think we’ve gotta stop that behavior, but now we don’t like I mean are genuinely like we do have a lot of like organic traffic which might otherwise be known as SEO. But like that has come from running a really popular blog it hasn’t come from like any sort of link rams or any strategies like that. Nathan: I know, gotcha. And what’s been the premise besides producing great content over a long period of time. Like if somebody wants you know they’re building a SAS company and they wanna and we won’t say the word content marketing but you know produce you know have a great blog that you know it’s quite iconic. You know, you guys are really well known around your customer support and stuff like I know that blog I’ve read your content. What else piece of advice would you give people besides you know great content long game because yeah, I’m really curious around that Des: Yeah, like I can give you some sort of tactical stuff but I would say like you know 80% of it is actually you know as you said like it’s you know good content relating to the market on a cell and on long game. And long game isn’t for everyone because you kind of need to you know it doesn’t match to all businesses do you obviously be around for the long came to actually for it to show up. 
But like stake you know to be just one degree more specific when we say great content what we mean basically is what products do we sell well we sell a part that appeals to customer support people as you said and we sell a product that appeals to marketing people or occasionally growth marketing or occasionally product people and that’s our engaged product. And so what we try to do is make sure that we frequently hit on content that is interesting and useful like so not not keyword spammy bullshit like 11 top tips or whatever. But just stuff that’s like genuinely like well thought out well written well Illustrated well-diagrammed and practical and tactical and applicable for people who we think could actually one day want to buy Intercom. So you’ll see posts like you know how to scale other support team and that’s because we sell to people who guess what you just can’t ever support team you’ll see but not opposed to that customer humble why because people you know marketers tend to worry about customer on-boarding and you know the posts come very genuinely steeped in our own experience. We’re not like hiring writers who don’t know anything about the topic we’re sitting down with our marketing people with our product people and we’re asking them like how do you think about this and we’re getting them to write pieces. And our content team is like three people today and there are maybe 327 other people in Intercom and we rely heavily on the other 327 for flashing at the content. The one thing I think most startups get wrong and especially most CEOs is like you know someone will be listen to this and think yeah that’s great Dez but where do I get to turn the blog and I would say to you like that’s I know exactly that sentence and I find it so annoying because you never if I was sitting here saying and the other nice thing about Ruby on Rails is you’re gonna create a scalable framework no one would say, “Yeah that’s nice Des wherever I got time to code.” You know, well it’s just as fucking important like it genuinely is and people don’t people don’t see it that way they they really feel like they’re you know oh it needs to be like you know blogging is some sort of optional extra but for some reason the lines of code or design are really important and that’s just not the case . If you’re genuinely serious but like we are going to have a popular blog guess what you need to be writing it needs to be people’s full-time jobs I’m not just your content people but it needs to be something that like you recognize and reward and all that holds like I would say in any given year our best pieces of content on the line I mean best is into the tune of a half a million page views a post. They come from like our VP of Product their VP of engineering a directors of design. Yeah, a CEO you name like they don’t come from the content marketing team I think that’s again that means yes we deliberately have to sacrifice time of otherwise very busy people in the company to like curate and structured our thinking in a way that can be shared and be really useful to other companies. And it’s a trade off but it’s like it’s a deliberate it’s a deliberate choice but it was like to have a good blog and if you want to copy… that’s what you have to copy you don’t you don’t acquire a team of ten content marketers and say you know go here’s your typewriters get busy you know. Nathan: That’s really interesting. So oh, jeez I’m impressed because I thought that you guys would have quite a large content marketing. 
So of those three people is one of them just just working with other people in the team, just interviewing them getting that content transcribed and editing it out then going back to them what do you think of this? Does this make sense? Is this is actionable? Is this is really good? Is this helpful? Is that how you guys are doing it? Because that’s really impressive. Des: Yeah, so we do a little bit of that we do actually we’re look you know for like a good chunk of the company like maybe 10% or so are like very talented capable and motivated writers. So we actually have people who just really, really like I’m a it’s genuine like we you know we talk to people all the time you’re like you know when they start an Intercom one of the things that they really dream to do is one day be publishing the blog and we’re like that’s awesome like that’s what we want. In terms of the activities of the team it’s it’s a mistake you know John who has the content team he would basically contest with me here I’m talking about the Magista just to the blog they actually do our podcast too and we publish a podcast every week and we also produce books so we produce English Don six books today so we have a another one coming soon and basically like what I think you know to make all that work we very genuinely and honestly we reward and request the entire company write pieces so a lot of what the content in do is they actually take inbound if you not I mean they receive submissions from the rest of the company and they work on them they edit them they tweak them they just and they make sure they have beautiful illustrations. And then they scheduled and go live and when they’re not doing that they’re make you’re chasing their guests for a podcast order collating the best of the best and putting them at the books that we’re gonna print and send it to people. Nathan: Yeah, gotcha. Interesting. Look, we have to work towards wrapping up a really honed in on the content piece,but let’s talk… I was speaking I caught up with the founder of BuzzSumo last week. Do you know that SAS? Des: Is that Noah? Nathan: No,no. It’s not it’s…I know what was… Des: I’m sorry. Nathan: Buzzsumo so it’s like it’s a really, really powerful SAS that they they’re constantly analyzing like all of the content like any blog posts they’re constantly analyzing blog posts everything every you know I’m talking about? Des: I think I know the product, yeah. They also have I believe a WordPress plugin right Nathan: No, no. I don’t think so. Anyways they’re like we use them a lot of Foundr because because we can see trending topics we can see topics that get the most shares we can see you know like if we do a little bit of SEO stuff where you know we have a we have a certain you know keywords that we’re obviously wanting to rank for and you know. We have a look at who’s at the top and where those links are coming from for those keywords and you know we do outreach and we do guest post linking back to the certain blog post . Anyways long story short, these guys we use BuzzSumo and they’ve analyzed you know tens hundreds you know billions of articles from their data. 
And when I was speaking to the founder he said to me that you know a lot of people say you know it is a quality game and I agree a 100% but he said, “At the end of the day there’s a fundamentally a volume game as well it’s all about quality content at scale.” So I’m curious, how much content do you guys produce and do you have an interest to up that content you know, per per you know per week per month that you etc. Des: Yeah, so we you know our current target these days is one post today and that’s that’s a challenge to keep up but like a where we’ve been hitting reliably for I guess two and a half three months. And I mostly agree with the with the sentiment that it’s photo quality and quantity game I think if you’re gonna drop by of those you should drop quantity first because I think a lot of noise is still noise. Whereas a small amount of great stuff is still a small amount of great stuff. But yeah I I’m fully agree with the sentiment like I… we shoot for one a day because we think that’s a bit all we’re capable of producing and while keeping the quality bar where it is. And we don’t ever want to be one of these blogs that degrades into publishing kind of shitty link baiter or like or like 11 like intro type of articles that like we kind of leave our user base behind. Nathan: Yeah totally get that. We’re the same we want to produce stuff that goes deep because that’s that’s the way you get cut through because there’s so much noise. Nathan: Yep absolutely. I think that the the quantity game genuinely came from a time when Google’s SEO algorithm wasn’t that smart when people used RSS feeds to check out their blogs and and all these other things like that basically had changed whereas like today like the majority of our traffic for our blog comes from Twitter because people share our stuff if it’s good but only if it’s good. So putting in a bad piece just means nothing happens anymore like we don’t you know for sure we publish a newsletter once a week which directs like you know a significant number of thousands of people or tens of thousands people towards content, but like that’s that doesn’t that’s not how we grow our audience that’s just how we address our audience. To grow our audience we need new qualities still shared by people to what all their followers or with all the people who who are influenced by their like sinking thoughts on the matter so look so yeah I really I fully believe in quality as a thing. We are at the same time considering a second blog which might be a more high velocity blog but it doesn’t mean the quality will drop it just means maybe we will we won’t stretch this far to do what we don’t maybe go for maybe we’ll see less 800 words essays and maybe more like 100 words we’re considering that at the moment but like yeah content has be you know what we’ve been doing has been working but like we’re keen to bring in another tactic or and there’s a strategy on top. Nathan: Yeah and look what we’re talking about this is all evergreen so someone listen to this you know two years from now this stuff one is not gonna change. Des: Yeah I mean I’m very influenced by that quote by Jeff Bezos which was relayed to me by Jason freed which is “Just focus on the things that don’t change.” Like in ten years time people aren’t gonna wish our content was worse they’re not gonna wish like that you know, that our product was slower that ad that our product was harder to use you know like there’s a few fundamental core variables of human desire in content and in product. 
And focusing on those will serve us a lot better than chasing trends. Nathan: Yeah I agree. So look we have to work towards wrapping up there’s been awesome conversation, man. Two last questions the first one is I’m curious around Intercom like we use intercom on the front end you know, people come to our site not our main sign or the foundr.com site but other products that we have of courses etc. and other premium products and with we see it we’ve seen a significant increase in sales just because we just asked you know, “Hey if you’ve got any questions please let us know.” And people actually have questions and only close and then we also use it on the back end around on-boarding and you know getting feedback around certain areas of force will tell the products. But what my question to you is is around that you know I was just amazed you know just just even just having it there on your site people want to talk to you you can automate stuff. Do you guys plan on doing A.I. type stuff where it’s you know, you can and I don’t know if you’ve already work on this I’m not sure but do you plan eventually that you know, when someone comes to the site site and the Intercom’s there and you have a people will have a series of like the the it’ll be a bots you know and it’l have a series of questions and if they are answered, if they’re not answered and and really use Intercom as a way to close sales when you you wouldn’t it would seem like it’s a real person? Des: So there’s a few like rights to that question though maybe I’ll just un-bundle it. The first one is like you know are we thinking about AI and machine learning and BOTS absolutely yes and we’ve dipped our toes in in various bits and pieces for example with our educate product if you ask a business a question and they’re using your knowledge base you will see suggested articles from a bot as part of the conversation and I’d like to think of that as like rather than artificial intelligence I consider like augmented intelligence that is like a bot sits in the conversation silently only speaking one of the things that can help and it’s very clearly a bot it’s very clearly not a human if you like. And then to the more general seem like I believe they you know human connection is still very important and I believe when you know, if it’s profitable for you to do so you should talk to your customers and I often like have to laugh when I see businesses spending like tens of thousands or hundreds of thousands of dollars on the hoping against hope that maybe one of these customers will reply and yet when the reply comes in they’re like holy shit it’s expensive to talk to these customers I don’t like how was it how was it affordable to send out that campaign but not affordable to talk to people who want to buy. So there’s some sort of paradox there all that said like I mean I sorry… I generally don’t believe like that the future is like forcing your customers down on a bot style IV or style like you know phone trees to like to complete a purchase. I think you know if if it’s profitable for you to talk to your customers which it generally should be unless your customers or like worked very, very little. I think having a human there to aid the conversation is useful. 
However, there are like a large chunk of programmatic tasks that actually can be completed such as like searching a knowledge base such as like upgrading an account or like you know giving feedback on a future we’re like I think BOTS can genuinely help and the key variables where a BOTS can help I think is like where there’s no opportunity to form a relationship. Most SAS businesses or most recurring revenue businesses they do want a relationship but there are times in a relationship isn’t the most obvious thing to do like. So an example might be like you know, if somebody just simply wants to like council a teammate after account or they want help getting a t-shirt to put deliver to the head or something like that. I think it’s easy in those cases to use a bot to inject structured automation on a workflow where it’s generally considered not something that was valuable to either party to have a human involved in I think that’s where BOTS can help and we definitely think about it from that perspective we don’t remember anything to announce or not in the short-term but for sure like you know BOTS and messaging or very much gone together hand-in-hand and we’re definitely following that trend Nathan: Yeah awesome yeah it’s oh it’s all moving – in-app yeah and yeah I have to ask one question. Sorry two more questions first one is do you think email will be around forever or do you think it’s going to move all to in-app. And two, where’s the best place people can find more about Intercom and yourself? Des: Sure. Email’s one’s deep in doubt like I think today email has been relegated as being like if you think about the emails you actually get and consume today it’s basically people who are addressing you in a formal business context but they don’t know you well know such that they wouldn’t like I messaged you or Slack you or whatever you know Slack has definitely like relegated email for work significantly emails more like a point of record in communications these days. Data is the means through which all conversations happen I think then that when you look at it from a marketing perspective I think there’s a generation of people growing up today for whom email is simply an identity verification for their Snapchat account or Facebook account or whatever. We’re like you need an email address because they ask you but like no one ever checks it and I think in that regard like the future of like email marketing it’s I’m not like I’m not down under it I think there is a good future for email marketing. I just think it might not be the best way to kind of you know grow top a funnel for the next generation of businesses that I think we’re gonna see like more and more people are spending their time in other products in your product elsewhere basically and I think in that regard you know email marketing will you know it’ll find a more of a nice use case than maybe it had five ten years ago and I think that’s just driven by the rise of messaging and the rise of workplace messaging like core chunks of where email was the best thing to do have been falling by the wayside consistently. But that said like you know I I don’t think it’ll go away I think we’ll all have email accounts in ten years I just think their purpose will be relegated further and further and further and then I guess on your more general question and how to follow up with Intercom or myself and best basis intercom.com. And our blog is at blog.intercom.com. And that’s you know I guess we spoke a little bit I’m Dez Treynor on Twitter. 
And that’s the way you can keep in touch with me. Nathan: Awesome, fantastic, we’ll wrap there, Des. But thank you so much for your time, man, it was a great interview.
Central venous catheter colonization with Staphylococcus aureus is not always an indication for antimicrobial therapy. Whether patients whose catheter tip grows Staphylococcus aureus but who have no concomitant bacteraemia should receive antimicrobials remains an unresolved issue. However, a proportion of patients with catheter tips colonized by S. aureus have no blood cultures taken because of a low suspicion of sepsis, and the meaning of this microbiological finding is unknown. We analysed all catheter tips growing S. aureus during a 6-year period and selected patients without blood cultures taken within 7 days before or after central vascular catheter removal. Each patient's clinical course was classified as a good or poor outcome. A poor outcome was defined as S. aureus infection within 3 months after catheter withdrawal, or death in the same period with no obvious cause. Patients with good and poor outcomes were compared to assess whether antimicrobial therapy influenced outcome. Sixty-seven patients fulfilled our inclusion criteria and five (7.4%) had a poor outcome. The administration of early anti-staphylococcal therapy had no impact on the outcome of this population (p = 0.99). The only factor independently associated with a poor outcome was the presence of clinical signs of sepsis when the catheter was removed (OR 20.8; 95% CI 2.0-206.1; p = 0.009). Our data suggest that patients with central vascular catheter tips colonized with S. aureus should be closely monitored for signs and symptoms of ongoing infection; if these are absent, antimicrobial therapy does not seem justified.
Governments save money with warm weather The Road Commission has $5 million budgeted for winter maintenance this season. Laughlin said they would crunch numbers in April to see where they stand, and would likely put any winter savings back into the county roads. County Road Commissioner Thomas Palarz said work will be spread out around the county, and it will be up to Road Commission officials to come up with a list of projects that could be tackled once it's determined how much money is available. “We haven’t done much on gravel roads, so another thing is to put new gravel down in various townships,” Laughlin said, adding other work includes asphalt and culvert work. Palarz said with a lighter winter, crews that would normally be in plow trucks are able to handle other tasks. This includes trimming trees, working on ditches and repairing guardrails. “We do know that, unless we get three weeks of severe weather, we’ll have some savings,” Laughlin said. The savings are also being noticed in local cities and villages. Grand Haven City Manager Pat McGinnis said the city has seen some cost savings as a result of the milder-than-normal winter weather. The city budgeted more than $150,000 for winter maintenance in 2011-12, and $40,000 was saved as a result of the warmer weather. Some of the savings has already gone into streets and utilities, McGinnis said. McGinnis noted that the warm weather allowed city public works crews to spend time on other tasks. “We did an awful lot of street work because the weather was so nice,” he said. “The ground never froze, so we were able to do underground work that we’ve never done before.” McGinnis said one of the unique projects was an underground valve replacement project that was done in February. Spring Lake Village Manager Ryan Cotton said there’s been a cost savings in the village as well. “It looks like we’re at a 10-percent savings in all of our winter maintenance budgets,” he said. Cotton said there was about $40,000 budgeted for local streets and $41,000 budgeted for major streets. Like other communities, Spring Lake Village is looking ahead to spring projects. “We’re already starting a month ahead of time on some of our other projects,” Cotton said. These projects include park enhancements at Tanglefoot Park and dock replacement at the Jackson Street dock.
// Copyright 2017 Google Inc. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

%module(directors="1") Callback

%{
#include "pxr/base/work/loops.h"
#include <functional>
#include <vector>  // needed for the std::vector typedef in the %inline block below
#include "diagnosticHandler.h"
%}

// Enable cross-language polymorphism so target-language subclasses can
// override the virtual methods of these classes.
%feature("director") TaskCallback;
%feature("director") DiagnosticHandler;

%include "diagnosticHandler.h"

%inline %{
class TaskCallback {
public:
  virtual ~TaskCallback() {}
  virtual void Run(size_t start, size_t end) {}
};

// Invokes the callback once, synchronously; useful for exercising director overrides.
void TestCall(TaskCallback& cb, int start, int end) {
  cb.Run(start, end);
}

typedef std::vector<TaskCallback> TaskCallbackVector;

// Runs the callback over [0, n) in parallel chunks via WorkParallelForN.
void ParallelForN(size_t n, TaskCallback& dispatch) {
  WorkParallelForN(n, [&dispatch](size_t start, size_t end) {
    dispatch.Run(start, end);
  });
}
%}

%include "std_vector.i"

namespace std {
  %template(TaskCallbackVector) vector<TaskCallback>;
}
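Because the module is declared with directors="1" and TaskCallback is marked as a director class, the target language can subclass TaskCallback and have its Run method invoked from the C++ side, for example by ParallelForN, which forwards each chunk of the WorkParallelForN loop to the callback. The interface file does not fix a target language, so the following is only a hypothetical sketch assuming Java bindings generated from this module: the names TaskCallback, TestCall, ParallelForN, and the module class Callback come from the interface, but the mapping of size_t to long, the package layout, and the native-library loading are assumptions.

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical director subclass; Run is called back from C++ for each [start, end) range.
public class RangeSumTask extends TaskCallback {
    private final AtomicLong total = new AtomicLong();

    @Override
    public void Run(long start, long end) {
        long sum = 0;
        for (long i = start; i < end; i++) {
            sum += i;
        }
        total.addAndGet(sum);  // chunks may arrive from several worker threads
    }

    public static void main(String[] args) {
        // Assumes the generated JNI library has been loaded elsewhere.
        RangeSumTask task = new RangeSumTask();
        Callback.TestCall(task, 0, 10);      // single, synchronous invocation
        Callback.ParallelForN(1000, task);   // chunked dispatch via WorkParallelForN
        System.out.println("sum = " + task.total.get());
    }
}

Whether the callback is invoked from multiple native worker threads depends on the work library's scheduling, so the override keeps its state thread-safe rather than assuming serial delivery.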
[DO NOT PUBLISH]

IN THE UNITED STATES COURT OF APPEALS FOR THE ELEVENTH CIRCUIT
________________________

No. 17-11076
Non-Argument Calendar
________________________

D.C. Docket No. 1:15-cr-00400-MHC-AJB-1

UNITED STATES OF AMERICA, Plaintiff - Appellee,

versus

DEAMONTE KENDRICKS, a.k.a. Deamonte J. Kendricks, Defendant - Appellant.
________________________

Appeal from the United States District Court for the Northern District of Georgia
________________________

(February 8, 2018)

Before ROSENBAUM, JULIE CARNES, and HULL, Circuit Judges.

PER CURIAM:

Deamonte Kendricks appeals his sentence of 57 months’ imprisonment imposed after he pled guilty to one count of being a felon in possession of a firearm. His sentence represents a six-month downward variance from the low end of the applicable sentencing range under the Federal Sentencing Guidelines Manual (“U.S.S.G.”). Kendricks argues that his sentence is substantively unreasonable because the district court refused to impose a greater downward variance to avoid unwarranted disparity in sentencing between himself and his state codefendant and to account for the fact that but for Kendricks’s honesty when questioned by police, his guidelines sentence would have been lower. For the reasons that follow, we affirm.

I.

On October 1, 2015, Kendricks and his passenger, Martinez Antwan Arnold, fled a traffic stop and engaged in a short, high-speed chase that ended after Kendricks ran a stop sign, lost control of the vehicle, and hit a guard rail. The police apprehended the men after they attempted to flee on foot. Upon searching the vehicle, the officers discovered a small amount of marijuana as well as a pistol and a semiautomatic rifle, both loaded. Later investigation revealed that both firearms were stolen.

In subsequent interviews with police, Kendricks admitted that he owned the rifle and that he lent it to Arnold, who planned to use it that night to retaliate against a rival gang who beat him up. Kendricks also told police that the pistol was his and described an unrelated incident involving that firearm. Kendricks and Arnold were both charged in state court, but while Kendricks was also charged in the instant case in federal court, Arnold was not charged in federal court. After unsuccessfully moving in the instant case to suppress his statements to police,[1] Kendricks pled guilty to being a felon in possession of a firearm.[2] Both Kendricks’s and Arnold’s state-court cases are still pending.

II.

The guideline for the count of conviction was U.S.S.G. § 2K2.1(a)(4)(B). For the sole count, the base offense level was 20 because the offense involved a firearm capable of accepting a large-capacity magazine (the semi-automatic rifle), and Kendricks was a prohibited person at the time he committed the offense. The court added two points under § 2K2.1(b)(4)(A) to the offense level because the pistol was stolen. And it added another four under § 2K2.1(b)(6)(B) because Kendricks knowingly allowed Arnold to possess the rifle to be used in connection with a planned drive-by shooting, another felony offense. Under § 3C1.2, two more points were added because Kendricks recklessly created a substantial risk of death or serious bodily injury to another person when he fled from arresting officers in a high-speed chase and ran a stop sign before crashing.

________________________
[1] Kendricks does not challenge the denial of this motion on appeal.
[2] Kendricks was previously convicted on May 15, 2015, of felony shoplifting and possession of drug paraphernalia.
As a result, Kendricks’s adjusted offense level was 28, before the subtraction of three points under §§ 3E1.1(a) and 3E1.1(b) because Kendricks accepted responsibility for the offense and assisted authorities by timely notifying them of his intention to enter a guilty plea. Thus, the total adjusted offense level was 25.

Kendricks received one criminal history point for a prior conviction for shoplifting and possession of drug paraphernalia. Because Kendricks was on probation for that crime when he committed the instant offense, his criminal-history score was increased by two under U.S.S.G. § 4A1.1(d), resulting in a total criminal-history score of three and establishing a criminal-history category of II. Based on a total adjusted offense level of 25 and a criminal-history category of II, the probation officer calculated Kendricks’s guideline range to be 63 to 78 months of imprisonment.

The government asked the court to account in Kendricks’s sentence for certain conduct that Kendricks did not plead guilty to. For his part, Kendricks argued for a below-guidelines sentence of 37 months’ imprisonment, arguing that Arnold initiated the instant offense, his prior criminal history was for shoplifting only, and his offense level was higher because of his honesty during his interviews with police. Kendricks further contended that while he understood the court’s reason for denying his motion to suppress the statements, a downward variance was appropriate to account for the circumstances under which the statements were made. The court declined to consider conduct for which Kendricks was not convicted and specifically considered his age, history, association with gang activity, and the seriousness of the crime. It then imposed a 57-month sentence of imprisonment with three years of supervised release.

III.

We review all sentences, whether within or outside the guidelines, for reasonableness under an abuse-of-discretion standard. United States v. Irey, 612 F.3d 1160, 1186 (11th Cir. 2010) (en banc). The challenging party bears the burden of showing that the sentence is unreasonable in light of the record and the 18 U.S.C. § 3553(a) factors. United States v. Victor, 719 F.3d 1288, 1291 (11th Cir. 2013). The district court’s sentence must be “sufficient, but not greater than necessary to comply with the purposes” listed in § 3553(a)(2), including the need for the sentence to reflect the seriousness of the offense and to promote respect for the law, the need for adequate deterrence, the need to protect the public, and the need to provide the defendant with educational or vocational training, medical care, or other correctional treatment. 18 U.S.C. § 3553(a)(2); Victor, 719 F.3d at 1291. The court should also consider the nature and circumstances of the offense and the history and characteristics of the defendant, the kinds of sentences available, the guideline range, any pertinent policy statements of the Sentencing Commission, the need to avoid unwarranted sentencing disparities, and the need to provide restitution to victims. 18 U.S.C. § 3553(a)(1), (3)-(7).

IV.
Section 3553(a)(6) of Title 18 specifically directs courts to consider “the need to avoid unwarranted sentence disparities among defendants with similar records who have been found guilty of similar conduct.” Well-founded claims of unwarranted sentencing disparity “assume[] that apples are being compared to apples.” United States v. Docampo, 573 F.3d 1091, 1101 (11th Cir. 2009) (quotation omitted). Thus, § 3553(a)(6) applies only when comparing sentences among defendants “who have been found guilty of similar conduct.” See United States v. Martin, 455 F.3d 1227, 1241 (11th Cir. 2006) (emphasis in original) (rejecting seven-day sentence imposed by the district court after that court considered that a more culpable individual had been found not guilty). Additionally, the need to avoid unwarranted sentencing disparities arises only in the context of federal sentencing. Docampo, 573 F.3d at 1102 (rejecting argument that federal defendant was entitled to a less severe sentence based on the sentences received by other defendants in state court). Comparing a state sentence with a federal sentence does not establish an unwarranted sentencing disparity. See id.

Here, the district court did not abuse its discretion in sentencing Kendricks to a below-guidelines term of 57 months’ imprisonment. Kendricks argues that pursuant to 18 U.S.C. § 3553(a)(6), his sentence is disparately high in comparison to his state-court codefendant, Arnold. However, Arnold has not been charged or found guilty of similar conduct at the federal-court level. His only charges are pending in state court. So Arnold is not similarly situated to Kendricks for purposes of considering parity in sentencing. See Docampo, 573 F.3d at 1102.

Kendricks also argues that his honesty with police during questioning, in which he knowingly and voluntarily waived his right to an attorney, resulted in an enhancement to his offense level under the sentencing guidelines. But while the district court did not expressly acknowledge its consideration of his acceptance of responsibility, the court reduced his offense level and sentenced him on the lower end of the sentencing range. So to the extent that Kendricks’s honesty may have resulted in an increase in his offense level, it was accounted for under the guidelines via a reduction in offense level. Kendricks did not carry his burden of showing that the district court abused its discretion in considering the 18 U.S.C. § 3553(a) factors.

V.

For these reasons, we affirm Kendricks’s sentence.

AFFIRMED.
Aetiology and treatment of neuroleptic malignant syndrome. The clinical triad of fever, movement disorder, and altered mentation known as NMS represents an infrequent yet highly lethal side effect of neuroleptic therapy. Although awareness and recognition are on the rise, underdiagnosis of the disorder may represent a neglected clinical problem of major proportions, considering the number of patients treated with neuroleptics. The recognition of problems such as NMS and tardive dyskinesia points to the need for investigation of low-dose efficacy and neuroleptic serum levels. The idea that neuroleptics are free of severe side effects has created a clinical fallacy that high doses of high-potency neuroleptics should be administered to acutely psychotic patients and that low doses of neuroleptics may be used for various diagnostic entities. The emphasis on NMS and its 20% mortality rate should underscore that neuroleptics should be used only when clinically indicated to treat psychosis and should be given in the lowest possible dose that achieves antipsychotic effects. Although treatment strategies are still being formulated, aggressive medical care and specific drug therapies exist to reverse the symptoms of this syndrome. With proper education, psychiatrists and other specialists can recognize and treat NMS effectively and thus prevent its malignant outcome.
Secondary malignant lymphoedema after mastectomy in two dogs. This case report describes the diagnosis of secondary malignant lymphoedema in two dogs that had undergone a mastectomy. A remarkably severe oedematous lesion associated with lameness of the right hindlimb was observed in both cases. Diagnostic imaging examinations, including direct pedal lymphangiography (case 1) and lymphoscintigraphy (case 2), showed obstruction of lymph flow in the lymphatics of the right hindlimbs. Although the recommended medical management and physiotherapy were applied to resolve the problem, the oedema did not improve in the affected region in either case. Results of histopathological examinations suggested that the cause of the obstructed lymph flow was neoplastic cells in the lymphatics of the right hindlimb in both dogs.
Q: Hibernate cascade vs manual delete I am using Hibernate and a few times had to implement cascading DELETE operation from parent object to its children. I used the following two options. One option is to expose getChildren() on the parent object, add the child to the returned collection of children and allow Hibernate to cascade DELETE automatically. The disadvantage of this option is that getChildren() collection needs to be exposed even though it's only used to support Hibernate cascading. Another option is to look up and delete children manually in ParentDao.delete(parent). The disadvantage of this option is more custom code. However, this option may perform better if it uses batch delete statements. What approach do you mostly use? Do you see other pros and cons? A: What approach do you mostly use? Do you see other pros and cons? I use cascading when I have a real composition relation (and want to delete a relatively low number of records). However, I wouldn't introduce such a relation just to implement a delete, but use a query (bulk HQL DELETE or native SQL query). To my experience, the benefits outweigh the "cost" of the additional code required (which is small anyway).
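To make the trade-off concrete, here is a minimal, hypothetical sketch in Java assuming JPA-style annotations and an injected EntityManager; the Parent/Child entity names and the ParentDao class mirror the question but are illustrative only, and with classic hbm.xml mappings the first option corresponds to cascade="all-delete-orphan" (or "delete") on the collection element.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.PersistenceContext;

@Entity
class Parent {
    @Id @GeneratedValue
    Long id;

    // Option 1: expose the collection and let Hibernate cascade the delete.
    // Removing the Parent also removes its Children, one entity at a time.
    @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE, orphanRemoval = true)
    List<Child> children = new ArrayList<>();
}

@Entity
class Child {
    @Id @GeneratedValue
    Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    Parent parent;
}

// Option 2: delete the children explicitly in the DAO with a bulk query.
// This avoids loading each child into the session, which usually scales better,
// but it bypasses cascade settings and entity-level lifecycle callbacks.
class ParentDao {
    @PersistenceContext
    EntityManager em;

    public void delete(Parent parent) {
        em.createQuery("delete from Child c where c.parent = :parent")
          .setParameter("parent", parent)
          .executeUpdate();
        em.remove(em.contains(parent) ? parent : em.merge(parent));
    }
}

Which option fits depends on whether the relation is a true composition and on how many child rows a typical delete touches, which is the same line the answer above draws.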
Content
The unit provides the framework for developing talent within organisations with a particular focus on knowledge, diversity and career management. The unit takes a theoretical and practical approach to human resource development in order to improve organisational performance.

Assessment
Written Assignment (Critical essay), 3000 words: 40%
Assignment (Workplace report): 60%

Unit Fee Information
Student Contribution Rate*: $1283
Student Contribution Rate**: $1283
Fee rate - Domestic Students: $2962
Fee rate - International students: $3697

* Rate for all CSP students, except for those who commenced Education and Nursing units pre 2010
** Rate for CSP students who commenced Education and Nursing units pre 2010

Please note: Unit fees listed do not apply to Deakin Prime students.
[Free iron in intracellular membrane structures]. The possibility of the long-term existence of closed model membrane structures loaded with low-molecular-weight compounds of divalent iron was demonstrated. Liposomes prepared from egg-yolk phospholipids were used as a model. Heating and detergents had similar effects on the formation of nitrosyl complexes of non-heme iron (2.03 complexes) in the model system (iron-loaded liposomes) on the one hand and in mitochondria, hepatocytes, and liver on the other. Heating and detergents intensify the formation of 2.03 complexes, which is due to the liberation of iron from the liposomes and from closed intracellular membrane structures as a result of their destruction. It is concluded that free iron in animal liver cells is localized in closed membrane structures.
module Garner
  module Strategies
    module Context
      module Key
        class Base
          # Compute a hash of key-value pairs from a given ruby context,
          # and apply it to a cache identity.
          #
          # @param identity [Garner::Cache::Identity] The cache identity.
          # @param ruby_context [Object] An optional Ruby context.
          # @return [Garner::Cache::Identity] The modified identity.
          def self.apply(identity, _ruby_context = nil)
            identity
          end
        end
      end
    end
  end
end
Dedicated to the conservation and restoration of nature, The Larch Company is a non-membership for-profit organization that represents species that cannot talk and humans not yet born. A deciduous conifer, the western larch has a contrary nature. Faster Than a Speeding Pronghorn—Not! There are no native antelope species in North America. Pronghorn (Antilocapra americana) may remind us of antelope in Africa, but they are taxonomically distinct. Pronghorn currently range in every western U.S. state save Washington, three Canadian provinces, and three Mexican states. Ranking very high on the charismatic-megafauna scale, pronghorn are unforgettable. With coats of tan and brown overlaid with liberal splashes of white (every one looks a little different), they can be seen across much of the Oregon Desert. Actually the size of a domestic sheep, pronghorn look much bigger because of their longer legs. They are named for the distinct prong, or fork, in their headgear, which serves to prevent permanent harm in fighting among males (mostly over females). Pronghorn commonly eat forbs in the spring and summer, but they often rely on sagebrush tips in the winter. They will most likely be found within 5 miles of water. Like deer, the males are called bucks, the females does, and the little ones are called either kids or fawns. They prefer low sagebrush as it allows them to better see and run. As social animals they are quite vocal (their call can often sound like a sneeze). They also communicate by the use of the white splotches on their rear. All these behaviors provide for the common defense. Historically, grizzly bears, cougars, and wolves were major predators of pronghorn, but today most of that work is done by coyotes, who along with golden eagles often prey upon kids. Robert Frost be damned, fences do not make us good neighbors to this species. Pronghorn can't jump! They never needed to. While pronghorn don't jump, they do swim (see Jordan Craters National Monument). Of most concern is competition with domestic livestock—particularly in drought years. Livestock eat forbs that would otherwise be available to pronghorn does. The result is the does go into the winter less healthy and have smaller fawns. Biologists estimate that 30 to 60 million pronghorn existed at the time of the European invasion. Today, even though the population is only around 1 million (about 23,000 in Oregon), the pronghorn is an excellent conservation success story. In 1915, only 10,000 to 15,000 of the animals remained. Market hunting, wanton slaughter, sod busting, and other kinds of habitat destruction brought the species to the edge of extinction. A federal excise tax on guns and ammunition provided money to state wildlife agencies to bring back the species. Pronghorn numbers increased 1,500 percent from 1924 to 1976 through the control of hunting, transplanting of herds on historical ranges, conservation, restoration of habitat, and other actions. Most amazing is the pronghorn's speed. Other mammals may have faster bursts of swiftness, but they can't maintain it for miles. Accurate measurements are rare, but speeds of between 50 and 60 miles per hour have been noted. It is difficult to get the test specimen to precisely parallel the test vehicle for adequate measurement periods, however. Pardon the anthropocentrism, but pronghorn run for fun. Numerous cases exist in the literature of pronghorn racing automobiles. Most memorable to this author were three bucks in Hawks Valley. 
The road was (relatively) straight and smooth, and the trio challenged the truck. The driver accelerated to pace them, all the while trying to stay on the road, avoid rocks, holes, and other animal life, and watch both the pronghorn and the speedometer (52 miles per hour!). After a few miles, the leader burst ahead and across the road in front of the truck and slowed again to the pace of the vehicle. It is obvious that this ruminant racer knew full well of the pitfall ahead—a road washout. Brakes were immediately applied, and later so were new shock absorbers. It was worth it. A once-in-a-lifetime experience (and the new shocks have a lifetime guarantee).
Harry Goodsir Henry Duncan Spens Goodsir (3 November 1819 – ) was a Scottish physician and naturalist who contributed to the pioneering work on cell theory done by his brother John Goodsir. He served as surgeon and naturalist on the ill-fated Franklin expedition. His body was never found, but forensic studies in 2009 on skeletal remains earlier recovered from King William Island in Canada suggest that they may be those of Harry Goodsir. Early life "Harry" Goodsir was born on 3 November 1819 in Anstruther, Fife, the son of Dr. John Goodsir, a medical practitioner. His paternal grandfather, also Dr. John Goodsir, had been a medical practitioner in the nearby town of Lower Largo. Three of Harry's brothers became medical practitioners. John Goodsir, his elder brother, would become Professor of Anatomy at Edinburgh University and a pioneer of the doctrine that cells formed the basis of living organisms. His younger brother Robert qualified as a medical doctor from St Andrews University, and Archibald studied in Edinburgh and Leipzig and qualified with membership of the Royal College of Surgeons of England. Career He studied medicine in Edinburgh and became a member of the Royal Medical Society. Having qualified as a Licentiate of the Royal College of Surgeons of Edinburgh in 1840, he succeeded his brother John as Conservator of the Surgeons' Hall Museum in August 1843. He held this post until March 1845, when he left to join the Franklin expedition, and he was succeeded as conservator by his brother Archibald. Cell theory In 1845, he co-authored, with his brother John, Anatomical and Pathological Observations. This contained John's unpublished 1841–1842 lectures to which his brother had "added some of his own zoological, anatomical, and pathological observations." The three chapters supplied by Harry were seen by his brother as providing important confirmatory evidence for his cellular theory. It was this book that was to win John Goodsir international acclaim and led to Rudolph Virchow dedicating his epoch-making volume to him. Franklin expedition Rear Admiral Sir John Franklin, who had previously served on three expeditions to the Arctic, set off in what would prove to be Franklin's final expedition in 1845, commanding HMS Erebus and HMS Terror. There were four medical officers: surgeon Stephen Stanley and assistant surgeon Harry Goodsir on HMS Erebus, and surgeon John Peddie and assistant surgeon Alexander MacDonald on HMS Terror. Goodsir's final communication was a paper entitled "On the anatomy of Forbesia", which was "... transmitted by the author from Disko Island in June 1845." This was published five years later, and is a comprehensive description of the insect species with 18 detailed illustrations. He is described as "Acting assistant surgeon on HMS Erebus". The expedition was last seen by Europeans one month later, in July 1845. Goodsir's younger brother Robert joined two of the expeditions which attempted to find the Franklin expedition. In 1849, he joined the whaler Advice under the command of Scotsman William Penny, in what was the first of many unsuccessful attempts to find Franklin and his men. Robert Goodsir wrote an account of this voyage: An Arctic voyage to Baffin's Bay and Lancaster Sound: in search of friends with Sir John Franklin. He joined Penny again in 1850 as surgeon on the Admiralty-backed Franklin search expedition with the ships Lady Franklin and Sophia. Robert Goodsir graduated as a medical doctor from St.
Andrews University in 1852 but rarely practiced medicine, travelling to New Zealand as a gold prospector and to Australia as a sheep farmer, before returning to Edinburgh where he died in 1895. He is buried in Dean Cemetery. Remains Between 1859 and 1949, skeletal remains representing at least 30 individuals were discovered on King William Island, and most were buried locally. In 1869, American explorer Charles Francis Hall was taken by local Inuit to a shallow grave on King William Island containing well-preserved skeletal remains and fragments of clothing. These remains were repatriated and interred beneath the Franklin Memorial at Greenwich Old Royal Naval College, London. The remains were thought to be those of an officer due to the remnants of a silk vest in which the body had been clothed and a gold tooth filling. After examination of the remains by the eminent biologist Thomas Henry Huxley, the Admiralty concluded that the remains were those of Henry Le Vesconte, a lieutenant on HMS Erebus. A subsequent examination in 2009 of the "well-preserved and fairly complete skeleton of a young adult male of European ancestry" included a facial reconstruction that showed "excellence of fit" with the face of Harry Goodsir, as portrayed in his 1845 daguerreotype. Strontium and oxygen isotope data from tooth enamel were consistent with an upbringing in eastern Scotland but not with Lt. Le Vesconte's upbringing in southwest England. A further clue suggesting these might be Goodsir's remains was a gold filling in a premolar tooth, unusual at that time. Goodsir's family were friendly with Robert Nasmyth, an Edinburgh dentist with an international reputation for such work. Harry's brother John had served as dental apprentice to Nasmyth. Analysis of the bones suggest that death was caused by an infected tooth. On , it was announced by the Canadian Government that the wreckage of a ship found in Queen Maud Gulf, west of O'Reilly Island had been positively identified as Erebus. In popular culture Harry Goodsir appears as a character in the 2007 novel, The Terror by Dan Simmons, a fictionalized account of Franklin's lost expedition, as well as the 2018 television adaptation, where he is portrayed by Paul Ready. References External links Category:19th-century explorers Category:19th-century Scottish medical doctors Category:1819 births Category:1840s deaths Category:Explorers of the Arctic Category:Franklin's lost expedition Category:Lost explorers Category:People from Anstruther Category:Scottish biologists Category:Scottish curators Category:Scottish polar explorers
Marie writes: As some of you may have heard, a fireball lit up the skies over Russia on February 15, 2013, when a meteoroid entered Earth's atmosphere. Around the same time, I was outside with my spiffy new digital camera - the Canon PowerShot SX260 HS. And albeit small, it's got a built-in 20x zoom lens. I was actually able to photograph the surface of the moon! So says professional killer Jackie Cogan at one point in Killing Them Softly, the third film by New Zealander Andrew Dominik - and considering the filmmaker's efforts to establish a connection between the events in the movie and the economic crisis that started in the late 2000s thanks to the greed and lack of scruples of Wall Street, it is easy to see Cogan as an ordinary employee of any company complaining about the lack of vision of his bosses and, on the other hand, the big bankers as Armani-wearing versions of the violent mobsters who inhabit the crime section of the newspapers. More than that: fearful due to the financial disaster caused by their colleagues on Wall Street, the bad guys presented by Dominik are miles away from those gangsters who used to throw hundred-dollar bills on the ground or distribute tips in exchange for a smile; instead, here they need to haggle over prices with professional killers and negotiate with their superiors before approving a sum of a thousand dollars for framing someone. Today looks to be a day of renegades and gangsters from the start, with "Killing Them Softly" by Andrew Dominik, the second American film to premiere in competition, first thing in the morning. The all-male cast is headlined by Brad Pitt, who also starred in the director's Oscar-nominated "The Assassination of Jesse James by the Coward Robert Ford." This is a talky tough-guy movie that is heavy on long interchanges among thugs with odd accents and/or speech impediments. Talking like a tough guy means modifying every noun with the f-word (and I wonder what the grand total would be for this film). "Killing Them Softly" is set in New Orleans, although pains are taken to avoid any distinctly identifying landmarks. The grey, wet, boarded-up desolation of the landscape could only be the post-Katrina Lower 9th Ward, and I found the film's fleeting glimpses of that more electrifying than the introduction of Frankie (Scoot McNairy) and Russell (Ben Mendelsohn), a pair of lowlifes setting up a robbery with Squirrel (Vincent Curatola). The two bumblers manage, just barely, to pull off the robbery of a high-stakes poker game, which makes it only a matter of time before they're marked men. It also makes Markie (Ray Liotta), the pudgy mid-level gangster who was running the game, a suspect. Whatever higher authority these thugs answer to calls in its enforcer, Jackie Cogan (Brad Pitt), to sort it out. The first and only woman, who is also the first and only black person in the story, makes her appearance one hour into the film. She's a prostitute who's treated like garbage in her approximately two minutes on the screen. This is not only a man's world, it's a white man's world. While revisiting David Michôd's "Animal Kingdom" (2010), I wondered what it was like for its passive teenage hero to live with his heroin-addict mother in their small home. We can only assume that she definitely could not get the Mother of the Year award, considering the mundane but eerie opening sequence. It's around afternoon, her son is watching some TV show, and she seems to be asleep next to him on the couch - but we soon learn she died from an overdose.
It's a wrap for the 2010 Muriel Awards, but although the winners have been announced, there's still plenty of great stuff to read about the many winners and runners-up. ('Cause, as we all know, there's so much more to life than "winning.") I was pleased to be asked to write the mini-essay about "The Social Network" because, no, I'm not done with it. (Coming soon: a piece about the Winklevii in the Henley Regatta sequence -- which came in 11th among Muriel voters for the year's Best Cinematic Moment.) You might recall that last summer I compared the editorial, directorial and storytelling challenges of a modest character-based comedy ("The Kids Are All Right") to a large-scale science-fiction spectacular based on the concept of shifting between various levels of reality/unreality -- whether in actual time and space or in consciousness and imagination. (The latter came in at No. 13 in the Muriels balloting; the former in a tie for No. 22.) My point was that, as far as narrative filmmaking is concerned, there isn't much difference. To illustrate a similar comparison this time, I've used a one-minute segment out of "The Social Network" (Multiple levels of storytelling in The Social Network). You might like one picture better than the other for any number of reasons, but I find their similarities more illuminating than their differences: Is the universe deterministic, or random? Not the first question you'd expect to hear in a thriller, even a great one. But to hear this question posed soon after the opening sequence of "Knowing" gave me a particular thrill. Nicolas Cage plays Koestler, a professor of astrophysics at MIT, and as he toys with a model of the solar system, he asks that question of his students. Deterministic means that if you have a complete understanding of the laws of physics, you can predict with certainty everything that will happen after (for example) the universe is created in the Big Bang.
diff --git a/tools/_copy.sh b/tools/_copy.sh index 7f71eba..c925a3a 100644 --- a/tools/_copy.sh +++ b/tools/_copy.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash # Copyright 2018 The Bazel Authors. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License");
In May, a newly formed group led by Okinawan university professors held a symposium on independence that drew 250 people. A tiny political party that advocates separation from Japan through peaceful means has been revived after decades of dormancy, though its candidates have fared poorly in recent elections. And on his blog, a member of Parliament from Okinawa recently went so far as to post an entry titled “Okinawa, It’s Finally Time for Independence From Yamato,” using the Okinawan word for the rest of Japan. “Before, independence was just something we philosophized about over drinks,” said Masahide Ota, a former governor of Okinawa, who is not a member of the movement. “Now, it is being taken much more seriously.” The independence movement remains nascent, with a few hundred active adherents at most. But Mr. Ota and others say it still has the potential to complicate Japan’s unfolding contest with China for influence in the region. That struggle expanded recently to include what appears to be a semiofficial campaign in China to question Japanese rule of Okinawa. Some analysts see the campaign as a ploy to strengthen China’s hand in a dispute over a smaller group of islands that has captured international headlines in recent months. Some Chinese scholars have called for exploiting the independence movement to say there are splits even in Japan over the legitimate ownership of islands annexed during Japan’s imperial expansion in the late 19th century, as Okinawa and the smaller island group were. Okinawa has long looked and felt different from the rest of Japan, with the islands’ tropical climate, vibrant musical culture and lower average incomes setting it apart. Strategically situated in the center of East Asia, the islands, once known as the Kingdom of the Ryukyus, have had a tortured history with Japan since the takeover, including the forced suicides of Okinawan civilians by Japanese troops during World War II and the imposition of American bases after the war.
Q: Matrix representation of the operator $\hat{S}_x$ in the standard basis I have recently been introduced to the idea of spectral decomposition of spin angular momentum operators in quantum mechanics. Out of curiosity I was wondering if the spin angular momentum operator $\hat S_x$ could be written in matrix form in the standard basis given by $|↑\rangle = \begin{pmatrix}1\\ 0\end{pmatrix}$ and $|↓\rangle = \begin{pmatrix}0\\ 1\end{pmatrix}$. I have the spectral decomposition of the $\hat S_x$ operator: \begin{align} \hat{S}_x=\frac{\hbar}{2}(|\downarrow\rangle\langle\uparrow|+|\uparrow\rangle\langle\downarrow|). \end{align} And the matrix in the given standard basis would be as follows: $$\begin{pmatrix} \langle\uparrow|\hat S_x|\uparrow\rangle & \langle\uparrow|\hat S_x|\downarrow\rangle\\ \langle\downarrow|\hat S_x|\uparrow\rangle & \langle\downarrow|\hat S_x|\downarrow\rangle\end{pmatrix}$$ Is this possible, and if so, is my representation of the matrix correct? A: From the form you've given of the operator $S_x$ and the basis vectors you've given, you can easily calculate its matrix representation. Since $$ S_x = \frac{\hbar}{2}(|\uparrow\rangle\langle\downarrow|+|\downarrow\rangle\langle\uparrow|) $$ and $$|\uparrow\rangle = \left(\begin{matrix}1\\0\end{matrix}\right)\implies\langle\uparrow| = \left(1\;\;\;\;0\right)\\ |\downarrow\rangle = \left(\begin{matrix}0\\1\end{matrix}\right)\implies\langle\downarrow| = \left(0\;\;\;\;1\right) $$ using simple matrix multiplication you get $$ |\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}1\\0\end{matrix}\right) \left(0\;\;\;\;1\right) = \left(\begin{matrix} 0&1\\ 0&0 \end{matrix}\right)\\ |\downarrow\rangle\langle\uparrow| = \left(\begin{matrix}0\\1\end{matrix}\right) \left(1\;\;\;\;0\right) = \left(\begin{matrix} 0&0\\ 1&0 \end{matrix}\right) $$ so that the matrix representation of the operator is just $$ S_x = \frac{\hbar}{2}\left(\begin{matrix} 0&1\\ 1&0 \end{matrix}\right) $$ which is just $\frac{\hbar}{2}$ times one of the Pauli matrices, $\sigma_x$. This also reproduces the elements of the matrix you wrote down, which are indeed correct. Bonus: matrix multiplication. I find that many people don't get how to do matrix multiplication with simple vectors, so I wanted to give a colorful explanation for everybody who finds this answer. I'll evaluate only one of the two matrices in the answer: $$ |\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}1\\0\end{matrix}\right) \left(0\;\;\;\;1\right) = \left(\begin{matrix} 0&1\\ 0&0 \end{matrix}\right) $$ The multiplication is done row by column.
First, we take the element in the first row of the first vector and multiply it by the element in the first column of the second vector: $$|\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}\color{red}{1}\\0\end{matrix}\right) \left(\color{red}{0}\;\;\;\;1\right) = \left(\begin{matrix} \color{red}{0}&1\\ 0&0 \end{matrix}\right) \qquad \text{First row - first column}$$ and so on for the remaining elements: $$|\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}\color{green}{1}\\0\end{matrix}\right) \left(0\;\;\;\;\color{green}{1}\right) = \left(\begin{matrix} 0&\color{green}{1}\\ 0&0 \end{matrix}\right)\qquad\text{First row - second column}\\ |\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}1\\\color{blue}{0}\end{matrix}\right) \left(\color{blue}{0}\;\;\;\;1\right) = \left(\begin{matrix} 0&1\\ \color{blue}{0}&0 \end{matrix}\right)\qquad\text{Second row - first column}\\ |\uparrow\rangle\langle\downarrow| = \left(\begin{matrix}1\\\color{orange}{0}\end{matrix}\right) \left(0\;\;\;\;\color{orange}{1}\right) = \left(\begin{matrix} 0&1\\ 0&\color{orange}{0} \end{matrix}\right)\qquad\text{Second row - second column} $$ Hope it'll be useful to somebody. A: Yes, it is possible, and your representation is correct. It is easy to check using the explicit form of $\hat{S}_x$: $$ \begin{pmatrix} \langle\uparrow|\hat S_x|\uparrow\rangle & \langle\uparrow|\hat S_x|\downarrow\rangle\\ \langle\downarrow|\hat S_x|\uparrow\rangle & \langle\downarrow|\hat S_x|\downarrow\rangle\end{pmatrix} = \frac{\hbar}{2} \begin{pmatrix} 0 & 1\\ 1& 0 \end{pmatrix} = \frac{\hbar}{2} \sigma_x $$
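A small addition that is not part of the original answers: once you have this matrix, you can also diagonalize it and read off the spectral decomposition of $\hat S_x$ in its own eigenbasis, which is the standard textbook result. With $$ |\pm x\rangle = \frac{1}{\sqrt{2}}\bigl(|\uparrow\rangle \pm |\downarrow\rangle\bigr), \qquad \hat S_x|\pm x\rangle = \pm\frac{\hbar}{2}|\pm x\rangle, $$ one gets $$ \hat S_x = \frac{\hbar}{2}\Bigl(|+x\rangle\langle +x| - |-x\rangle\langle -x|\Bigr), $$ which multiplies out to the same $\frac{\hbar}{2}\sigma_x$ matrix obtained above.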
Related literature {#sec1}
==================

The title compound is a dual ErbB-1/ErbB-2 tyrosine kinase inhibitor, see: Petrov *et al.* (2006[@bb6]). For bond-length data, see: Allen *et al.* (1987[@bb1]).

Experimental {#sec2}
============

{#sec2.1}

### Crystal data {#sec2.1.1}

C~13~H~9~ClFNO~3~
*M*~r~ = 281.66
Monoclinic
*a* = 8.3290 (17) Å
*b* = 12.640 (3) Å
*c* = 11.875 (2) Å
β = 96.94 (3)°
*V* = 1241.0 (4) Å^3^
*Z* = 4
Mo *K*α radiation
μ = 0.32 mm^−1^
*T* = 294 K
0.30 × 0.20 × 0.10 mm

### Data collection {#sec2.1.2}

Enraf--Nonius CAD-4 diffractometer
Absorption correction: ψ scan (North *et al.*, 1968[@bb5])
*T*~min~ = 0.909, *T*~max~ = 0.968
2411 measured reflections
2248 independent reflections
1340 reflections with *I* \> 2σ(*I*)
*R*~int~ = 0.028
3 standard reflections (frequency: 120 min, intensity decay: 1%)

### Refinement {#sec2.1.3}

*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.052
*wR*(*F*^2^) = 0.149
*S* = 1.01
2248 reflections
172 parameters
H-atom parameters constrained
Δρ~max~ = 0.16 e Å^−3^
Δρ~min~ = −0.25 e Å^−3^

{#d5e378}

Data collection: *CAD-4 Software* (Enraf--Nonius, 1989[@bb2]); cell refinement: *CAD-4 Software*; data reduction: *XCAD4* (Harms & Wocadlo, 1995[@bb4]); program(s) used to solve structure: *SHELXS97* (Sheldrick, 2008[@bb7]); program(s) used to refine structure: *SHELXL97* (Sheldrick, 2008[@bb7]); molecular graphics: *ORTEP-3 for Windows* (Farrugia, 1997[@bb3]) and *PLATON* (Spek, 2009[@bb8]); software used to prepare material for publication: *SHELXL97* and *PLATON*.

Supplementary Material
======================

Crystal structure: contains datablocks global, I. DOI: [10.1107/S160053680903431X/hk2758sup1.cif](http://dx.doi.org/10.1107/S160053680903431X/hk2758sup1.cif)

Structure factors: contains datablocks I. DOI: [10.1107/S160053680903431X/hk2758Isup2.hkl](http://dx.doi.org/10.1107/S160053680903431X/hk2758Isup2.hkl)

Additional supplementary materials: [crystallographic information](http://scripts.iucr.org/cgi-bin/sendsupfiles?hk2758&file=hk2758sup0.html&mime=text/html); [3D view](http://scripts.iucr.org/cgi-bin/sendcif?hk2758sup1&Qmime=cif); [checkCIF report](http://scripts.iucr.org/cgi-bin/paper?hk2758&checkcif=yes)

Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: [HK2758](http://scripts.iucr.org/cgi-bin/sendsup?hk2758)).

Comment
=======

The title compound is an important pharmaceutical intermediate and a dual ErbB-1/ErbB-2 tyrosine kinase inhibitor (Petrov *et al.*, 2006). We report herein its crystal structure. In the molecule of the title compound (Fig. 1), the bond lengths (Allen *et al.*, 1987) and angles are within normal ranges. Rings A (C1-C6) and B (C8-C13) are, of course, planar, and they are oriented at a dihedral angle of A/B = 41.23 (5)°. Atom C7 is -0.061 (3) Å away from the plane of ring A, while atoms Cl, O1, N and C7 are -0.007 (3), 0.001 (3), 0.018 (3) and 0.029 (3) Å away from the plane of ring B, respectively. In the crystal structure, intermolecular C-H···O interactions link the molecules in a herring-bone arrangement along the *b* axis, and a π--π contact between the benzene rings, Cg1---Cg2^i^ \[symmetry code: (i) x, 1/2 - y, 1/2 + z, where Cg1 and Cg2 are the centroids of rings A (C1-C6) and B (C8-C13), respectively\], may further stabilize the structure, with a centroid-centroid distance of 3.881 (1) Å.
Experimental {#experimental} ============ For the preparation of the title compound, in the presence of sodium carbonate (10 g), 2-chloro-4-nitrophenol (1 mmol) and 1-(bromomethyl)-3-fluorobenzene (1 mmol) in acetonitrile (25 ml) were stirred at 313 K for 8 h. Sodium carbonate was filtered off and the filtrate was washed with brine. The organic phase was dried over anhydrous sodium sulfate, filtered and concentrated to give the crude product, which was crystallized from ethyl acetate to give the title compound. Crystals suitable for X-ray analysis were obtained by dissolving the title compound (0.1 g) in ethyl acetate (10 ml) and evaporating the solvent slowly at room temperature for 3 d. Refinement {#refinement} ========== H atoms were positioned geometrically with C-H = 0.93 and 0.97 Å for aromatic and methylene H atoms, respectively, and constrained to ride on their parent atoms, with U~iso~(H) = 1.2U~eq~(C). Figures ======= ![The molecular structure of the title molecule with the atom-numbering scheme. Displacement ellipsoids are drawn at the 50% probability level.](e-65-o2327-fig1){#Fap1} ![A partial packing diagram. Hydrogen bonds are shown as dashed lines.](e-65-o2327-fig2){#Fap2} Crystal data {#tablewrapcrystaldatalong} ============ ------------------------- ------------------------------------- C~13~H~9~ClFNO~3~ *F*(000) = 576 *M~r~* = 281.66 *D*~x~ = 1.508 Mg m^−3^ Monoclinic, *P*2~1~/*c* Mo *K*α radiation, λ = 0.71073 Å Hall symbol: -P 2ybc Cell parameters from 25 reflections *a* = 8.3290 (17) Å θ = 9--12° *b* = 12.640 (3) Å µ = 0.32 mm^−1^ *c* = 11.875 (2) Å *T* = 294 K β = 96.94 (3)° Block, yellow *V* = 1241.0 (4) Å^3^ 0.30 × 0.20 × 0.10 mm *Z* = 4 ------------------------- ------------------------------------- Data collection {#tablewrapdatacollectionlong} =============== ------------------------------------------------------ -------------------------------------- Enraf--Nonius CAD-4 diffractometer 1340 reflections with *I* \> 2σ(*I*) Radiation source: fine-focus sealed tube *R*~int~ = 0.028 graphite θ~max~ = 25.3°, θ~min~ = 2.4° ω/2θ scans *h* = 0→10 Absorption correction: ψ scan (North *et al.*, 1968) *k* = 0→15 *T*~min~ = 0.909, *T*~max~ = 0.968 *l* = −14→14 2411 measured reflections 3 standard reflections every 120 min 2248 independent reflections intensity decay: 1% ------------------------------------------------------ -------------------------------------- Refinement {#tablewraprefinementdatalong} ========== ------------------------------------- ----------------------------------------------------------------------------------- Refinement on *F*^2^ Primary atom site location: structure-invariant direct methods Least-squares matrix: full Secondary atom site location: difference Fourier map *R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.052 Hydrogen site location: inferred from neighbouring sites *wR*(*F*^2^) = 0.149 H-atom parameters constrained *S* = 1.01 *w* = 1/\[σ^2^(*F*~o~^2^) + (0.07*P*)^2^\] where *P* = (*F*~o~^2^ + 2*F*~c~^2^)/3 2248 reflections (Δ/σ)~max~ \< 0.001 172 parameters Δρ~max~ = 0.16 e Å^−3^ 0 restraints Δρ~min~ = −0.25 e Å^−3^ ------------------------------------- ----------------------------------------------------------------------------------- Special details {#specialdetails} =============== 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. Refinement. Refinement of F^2^ against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F^2^, conventional R-factors R are based on F, with F set to zero for negative F^2^. The threshold expression of F^2^ \> 2sigma(F^2^) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F^2^ are statistically about twice as large as those based on F, and R- factors based on ALL data will be even larger. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords} ================================================================================================== ------ -------------- -------------- --------------- -------------------- -- *x* *y* *z* *U*~iso~\*/*U*~eq~ Cl 0.81771 (13) 0.51262 (7) 0.07511 (8) 0.0802 (4) F 0.4335 (3) 1.00189 (17) −0.29984 (19) 0.0996 (8) O1 0.6878 (3) 0.72025 (15) 0.07000 (17) 0.0582 (6) O2 1.1588 (3) 0.5447 (2) 0.4720 (2) 0.0846 (8) O3 1.0836 (4) 0.6867 (3) 0.5501 (2) 0.1058 (10) N 1.0804 (3) 0.6268 (3) 0.4700 (3) 0.0694 (8) C1 0.3405 (4) 0.8293 (3) −0.2781 (3) 0.0670 (10) H1A 0.2801 0.8325 −0.3492 0.080\* C2 0.4320 (4) 0.9129 (3) −0.2358 (3) 0.0629 (9) C3 0.5233 (4) 0.9117 (2) −0.1320 (3) 0.0550 (8) H3A 0.5858 0.9700 −0.1069 0.066\* C4 0.5210 (4) 0.8224 (2) −0.0653 (3) 0.0513 (8) C5 0.4296 (4) 0.7364 (3) −0.1061 (3) 0.0630 (9) H5A 0.4279 0.6756 −0.0621 0.076\* C6 0.3407 (4) 0.7401 (3) −0.2118 (3) 0.0707 (10) H6A 0.2801 0.6815 −0.2386 0.085\* C7 0.6137 (4) 0.8219 (2) 0.0507 (3) 0.0587 (9) H7A 0.6959 0.8767 0.0564 0.070\* H7B 0.5415 0.8358 0.1073 0.070\* C8 0.7811 (4) 0.7033 (2) 0.1694 (3) 0.0492 (8) C9 0.8086 (4) 0.7766 (2) 0.2567 (3) 0.0564 (8) H9A 0.7609 0.8431 0.2488 0.068\* C10 0.9064 (4) 0.7510 (3) 0.3551 (3) 0.0597 (9) H10A 0.9250 0.8002 0.4135 0.072\* C11 0.9755 (4) 0.6531 (2) 0.3662 (3) 0.0531 (8) C12 0.9495 (4) 0.5787 (3) 0.2808 (3) 0.0575 (8) H12A 0.9970 0.5122 0.2896 0.069\* C13 0.8532 (4) 0.6039 (2) 0.1836 (3) 0.0530 (8) ------ -------------- -------------- --------------- 
-------------------- -- Atomic displacement parameters (Å^2^) {#tablewrapadps} ===================================== ----- ------------- ------------- ------------- -------------- -------------- -------------- *U*^11^ *U*^22^ *U*^33^ *U*^12^ *U*^13^ *U*^23^ Cl 0.1074 (8) 0.0574 (6) 0.0736 (7) 0.0078 (5) 0.0026 (5) −0.0116 (4) F 0.135 (2) 0.0791 (15) 0.0789 (15) 0.0098 (14) −0.0083 (14) 0.0259 (11) O1 0.0683 (14) 0.0475 (12) 0.0568 (14) 0.0075 (11) −0.0005 (11) 0.0012 (10) O2 0.0635 (17) 0.094 (2) 0.0931 (19) 0.0152 (16) −0.0032 (14) 0.0228 (16) O3 0.109 (2) 0.124 (3) 0.0751 (19) 0.014 (2) −0.0258 (17) −0.0135 (19) N 0.0542 (18) 0.083 (2) 0.069 (2) −0.0014 (18) 0.0016 (16) 0.0110 (18) C1 0.061 (2) 0.083 (3) 0.055 (2) 0.011 (2) −0.0022 (17) −0.0035 (19) C2 0.067 (2) 0.060 (2) 0.062 (2) 0.0124 (19) 0.0055 (18) 0.0090 (18) C3 0.0521 (19) 0.0494 (18) 0.063 (2) 0.0055 (15) 0.0063 (16) −0.0019 (15) C4 0.0482 (18) 0.0511 (18) 0.0552 (19) 0.0061 (15) 0.0087 (15) 0.0009 (15) C5 0.067 (2) 0.057 (2) 0.065 (2) −0.0028 (18) 0.0075 (18) 0.0064 (17) C6 0.061 (2) 0.075 (2) 0.074 (3) −0.0052 (19) 0.001 (2) −0.008 (2) C7 0.068 (2) 0.0494 (19) 0.057 (2) 0.0020 (16) 0.0005 (17) 0.0031 (15) C8 0.0461 (18) 0.0499 (18) 0.0515 (18) −0.0016 (15) 0.0054 (15) 0.0030 (15) C9 0.062 (2) 0.0480 (17) 0.060 (2) 0.0025 (16) 0.0068 (17) 0.0020 (16) C10 0.060 (2) 0.063 (2) 0.056 (2) −0.0072 (17) 0.0062 (17) −0.0032 (16) C11 0.0446 (18) 0.0566 (19) 0.058 (2) −0.0012 (16) 0.0055 (15) 0.0078 (16) C12 0.049 (2) 0.057 (2) 0.068 (2) 0.0068 (16) 0.0107 (17) 0.0077 (17) C13 0.0534 (19) 0.0518 (19) 0.0548 (19) −0.0026 (16) 0.0096 (16) 0.0017 (15) ----- ------------- ------------- ------------- -------------- -------------- -------------- Geometric parameters (Å, °) {#tablewrapgeomlong} =========================== -------------------- ------------ ----------------------- ------------ Cl---C13 1.728 (3) C5---C6 1.379 (5) F---C2 1.359 (4) C5---H5A 0.9300 O1---C7 1.432 (3) C6---H6A 0.9300 O1---C8 1.350 (4) C7---H7A 0.9700 N---O2 1.225 (4) C7---H7B 0.9700 N---O3 1.214 (4) C8---C9 1.388 (4) N---C11 1.460 (4) C8---C13 1.394 (4) C1---C2 1.363 (5) C9---C10 1.380 (5) C1---C6 1.375 (4) C9---H9A 0.9300 C1---H1A 0.9300 C10---C11 1.366 (4) C2---C3 1.367 (4) C10---H10A 0.9300 C3---C4 1.381 (4) C11---C12 1.380 (4) C3---H3A 0.9300 C12---C13 1.361 (4) C4---C5 1.382 (4) C12---H12A 0.9300 C4---C7 1.496 (4) C8---O1---C7 118.3 (2) O1---C7---H7A 110.0 O2---N---C11 118.2 (3) C4---C7---H7A 110.0 O3---N---O2 123.5 (3) O1---C7---H7B 110.0 O3---N---C11 118.3 (3) C4---C7---H7B 110.0 C2---C1---C6 117.6 (3) H7A---C7---H7B 108.4 C2---C1---H1A 121.2 O1---C8---C9 124.9 (3) C6---C1---H1A 121.2 O1---C8---C13 116.3 (3) F---C2---C1 118.5 (3) C9---C8---C13 118.8 (3) F---C2---C3 118.1 (3) C10---C9---C8 120.3 (3) C1---C2---C3 123.3 (3) C10---C9---H9A 119.9 C2---C3---C4 118.7 (3) C8---C9---H9A 119.9 C2---C3---H3A 120.6 C11---C10---C9 119.5 (3) C4---C3---H3A 120.6 C11---C10---H10A 120.3 C3---C4---C5 119.2 (3) C9---C10---H10A 120.3 C3---C4---C7 119.4 (3) C10---C11---C12 121.3 (3) C5---C4---C7 121.4 (3) C10---C11---N 119.3 (3) C6---C5---C4 120.3 (3) C12---C11---N 119.4 (3) C6---C5---H5A 119.8 C13---C12---C11 119.2 (3) C4---C5---H5A 119.8 C13---C12---H12A 120.4 C1---C6---C5 120.8 (3) C11---C12---H12A 120.4 C1---C6---H6A 119.6 C12---C13---C8 120.9 (3) C5---C6---H6A 119.6 C12---C13---Cl 120.4 (3) O1---C7---C4 108.4 (2) C8---C13---Cl 118.6 (2) C6---C1---C2---F −180.0 (3) C13---C8---C9---C10 0.3 (5) C6---C1---C2---C3 0.3 (5) C8---C9---C10---C11 −0.2 (5) F---C2---C3---C4 179.1 (3) 
C9---C10---C11---C12 −0.1 (5) C1---C2---C3---C4 −1.2 (5) C9---C10---C11---N 179.3 (3) C2---C3---C4---C5 1.3 (5) O3---N---C11---C10 11.0 (5) C2---C3---C4---C7 −177.0 (3) O2---N---C11---C10 −170.0 (3) C3---C4---C5---C6 −0.5 (5) O3---N---C11---C12 −169.6 (3) C7---C4---C5---C6 177.7 (3) O2---N---C11---C12 9.5 (4) C2---C1---C6---C5 0.5 (5) C10---C11---C12---C13 0.2 (5) C4---C5---C6---C1 −0.4 (5) N---C11---C12---C13 −179.2 (3) C8---O1---C7---C4 178.5 (2) C11---C12---C13---C8 −0.1 (4) C3---C4---C7---O1 −140.2 (3) C11---C12---C13---Cl −179.9 (2) C5---C4---C7---O1 41.6 (4) O1---C8---C13---C12 −179.9 (3) C7---O1---C8---C9 1.5 (4) C9---C8---C13---C12 −0.2 (4) C7---O1---C8---C13 −178.8 (3) O1---C8---C13---Cl −0.1 (4) O1---C8---C9---C10 −180.0 (3) C9---C8---C13---Cl 179.6 (2) -------------------- ------------ ----------------------- ------------ Hydrogen-bond geometry (Å, °) {#tablewraphbondslong} ============================= ------------------ --------- --------- ----------- --------------- *D*---H···*A* *D*---H H···*A* *D*···*A* *D*---H···*A* C7---H7A···O2^i^ 0.97 2.49 3.423 (4) 162 ------------------ --------- --------- ----------- --------------- Symmetry codes: (i) −*x*+2, *y*+1/2, −*z*+1/2. ###### Hydrogen-bond geometry (Å, °) *D*---H⋯*A* *D*---H H⋯*A* *D*⋯*A* *D*---H⋯*A* ------------------ --------- ------- ----------- ------------- C7---H7*A*⋯O2^i^ 0.97 2.49 3.423 (4) 162 Symmetry code: (i) .
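As a quick arithmetic cross-check on the crystal data tabulated above (this check is not part of the original report), the calculated density follows directly from *Z*, *M*~r~ and the cell volume: $$ D_x = \frac{Z\,M_r}{N_A\,V} = \frac{4 \times 281.66\ \mathrm{g\,mol^{-1}}}{6.022\times10^{23}\ \mathrm{mol^{-1}} \times 1241.0\times10^{-24}\ \mathrm{cm^{3}}} \approx 1.508\ \mathrm{g\,cm^{-3}}, $$ in agreement with the tabulated *D*~x~ = 1.508 Mg m^−3^; likewise, *F*(000) = *Z* × 144 electrons per C~13~H~9~ClFNO~3~ molecule = 576.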
1. Field of the Invention This invention relates to the field of network processors, specifically network processors adapted to perform packet processing. 2. Description of the Related Art In the data networking field there exists a long-felt need to provide faster packet processing using fewer system resources and more efficient hardware. Those of ordinary skill in the art have long realized that a programmable processing system can be readily adapted to provide packet processing. However, such systems are typically implemented in custom or semi-custom application-specific integrated circuits (ASICs), which are difficult and costly to develop and produce. Furthermore, such ASICs are not readily changeable in the event that packet configurations, processing requirements, or standards change over time. What is needed is a rapidly adaptable packet processing system that can be easily configured to perform a wide range of packet processing tasks without redesign or reconstruction of the processor system hardware itself.
"[twig snapping]" " I got a bad feeling, mama." ""you be careful, boy." "Don't get yourself hurt none."" "Don't worry your pretty little head, mama." "Ain't nothing gonna happen to the finger." ""don't get cocky." ""you know he's out there." "He's watching you."" "Don't nag me, mama!" "I ain't no little kid no more!" "[bushes rustling]" "[sighs]" " [roars]" " ♪ teenage mutant ninja turtles ♪" "♪ teenage mutant ninja turtles ♪" "♪ teenage mutant ninja turtles ♪" "♪ heroes in a half shell, turtle power ♪" " ♪ here we go, it's a lean, green, ninja team ♪" "♪ on the scene, cool teens doing ninja things ♪" "♪ so extreme, out the sewer like laser beams ♪" "♪ get rocked with the shell-shocked pizza kings ♪" "♪ can't stop these radical dudes ♪" "♪ the secret of the ooze, made the chosen few ♪" "♪ emerge from the shadows to make their move ♪" "♪ the good guys win, and the bad guys lose ♪" "♪ ♪" "♪ leonardo's the leader in blue ♪" "♪ does anything it takes to get his ninjas through ♪" "♪ donatello is the fellow who has a way with machines ♪" "♪ raphael's got the most attitude on the team ♪" "♪ michelangelo, he's one of a kind ♪" "♪ and you know just where to find him when it's party time ♪" "♪ master splinter taught 'em every single skill they need ♪" "♪ to be one lean, mean, green, incredible team ♪" " ♪ teenage mutant ninja turtles ♪" "♪ teenage mutant ninja turtles ♪" "♪ teenage mutant ninja turtles ♪" "♪ heroes in a half shell, turtle power ♪" " There." "[music box plays melody]" " Too much?" " Do you really want my opinion?" " Only if you think it's perfect." " It's perfect!" " Wow, that is just super neat." "Thanks so much, donnie!" "[music box closes] well, got to go!" "Got some, uh, training to do... [chuckles] thanks again, donnie!" "[door opens, closes]" " I can't even imagine how you're feeling right now." " Devastated." " Aw, man!" "I was totally gonna say "devastated"!" "I should've just went for it." "I got to learn to trust my instincts!" "I mean..." "That's rough, bro." " [groans]" " Smooth move, genius." "It's never gonna happen, donnie." "We're mutants." "She's a girl." "You're a giant talking turtle." "The sooner you get used to it, the better." "You know what you need?" "Huh?" "Huh, buddy?" "You know what you need?" " You to leave?" " A little forest ninja training!" "Ha!" "Huh?" "Huh?" "Sounds like fun, right?" "Right?" " [whimpers]" " [grunting]" "Aw, you guys are rustier than the titanic's butt!" " No, we're" " You're not in the city anymore." "You need to get used to this new environment" "No buildings, no subways." " Lots of trees." " Ugh, tell me about it." " Well, what do you think?" " Hmm..." "Turtle hunt." " Turtle hunt." "Both:" "Turtle hunt?" " You gonna help out, leo?" " Yeah, I-- [groans]" "Sorry." "I guess I'm still not up to it." "Enjoy your little hunt, guys." " Dude!" " What do you mean by "turtle hunt"?" " You guys are gonna head into the forest." "I'll give you a five-minute head start," "Then I hunt you down." "If you can't stay hidden for at least an hour," "You'll have to..." "Clean out the chicken coop." " No way!" " Not the chicken coop, man!" " It smells like cheese fossils!" " It's got spiders so big, they play the banjo!" "[banjo music plays]" " We can't clean that thing!" " Then get moving." "[both yell]" " How do we hide in the woods, anyway?" "There's no doors!" " Hmm." "Ah!" "We could climb a tree." " That's the first place he's gonna look." "Could we burrow?" "Do turtles burrow in the wild?" " I don't know." "I've never been in the wild before." 
"Have you?" " Does jersey count?" "[distant roaring]" "What the heck was that?" " It's got to be raph messing with us!" "Right?" " We got to hide, now!" " Aw, we are so cleaning that coop." "Raph's gonna find us up here in, like, five seconds." " I don't think so." " Why not?" " [growling]" " Um..." "Hi?" "[both screaming]" " [roaring]" " What is it?" "Some kind of monkey man?" " It looks like the mythical sasquatch..." "Bigfoot!" "[both grunt]" "He's too big." "Run!" " [screaming]" " Come on, you're not even trying to hide!" "Both:" "Bigfoot!" " Yeah, right." "If you think you're gonna get out of cleaning that coop by-- [all screaming]" "Looks like we're gonna get some real training." " [roaring]" " Booyakasha!" "[yelling]" " Bigfoot got some skills." " Wah!" " [crying]" " Hey, are you" " Stop!" "Guys, he's hurt!" "I don't think he wants to fight us." "I think he's just scared." " Scared." " He can talk?" " Look at that sagittal crest." "It could be a paranthropus robustus," "A hominid long thought extinct!" " [muffled] can't breathe!" " Okay, okay, easy now..." "Easy." "I can fix your arm if you let me." "It's okay." "I want to help." "It's not bad, but I'd like to take you home" "Where we can clean it up and bandage it." "Is that okay?" " Hello?" "Anybody?" " No way!" "We can't bring a giant ape-man home with us!" " Of course we can." "Mikey?" " Injured woodland creature?" "Bring him home." " Two to one, raph." "Sorry." "Hi." "I'm donnie." "This is raph, and this is mikey." " I." " I?" "Okay, your name is I?" " I..." "No name." " Of course you have a name." "You're bigfoot!" " Bigfoot!" "[laughs]" "Bigfoot." " Will you come with us?" " [whimpers]" " It's okay." "You're gonna be safe with us." " Why would he be scared of us?" " Well, mama..." "Been hunting this hairy freak for years now." ""and now he goes and finds hisself some more freaks!"" "Settle down, mama." "Ain't no little green aliens" "Gonna save bigfoot from the finger." " Hello?" "Anybody home?" "We got company!" " What?" "You can't let anyone in." " Why is the doorway full of hair?" " And a giant butt?" "Come on, it's okay." " [gasps] - it's bigfoot!" " You brought bigfoot home?" " Why everyone know bigfoot name?" " Bigfoot, this is leo, casey, and April." " He's hurt!" "I'll get the first-aid kit." " You can't just bring home bigfoot!" " [chuckles awkwardly]" " He needs our help..." "Just until his arm gets better." " Come on, let's get that wound cleaned up." " Sure, but when I wanted to have a dog," "You guys were like like, "no way."" " You guys don't get it." "Bigfoot is the missing genetic link" "Between humans and the ape." "This discovery will change the face of science itself." " So it's bigger than talking turtles?" " [growls]" " There you go." "Try not to use it too much for the next few days." " Mm-hmm." "Bigfoot thank donnie." " No problem." " Bad man after bigfoot." "Name--the finger." " The finger?" " Very bad mans." "But donnie good mans." " Aw, thanks, bigfoot." " Donnie very good mans..." "So good." "Bigfoot..." " Whoa." " Ah!" " Love donnie!" " But, but, but" " Bigfoot am lady!" " That..." "Is..." "Great?" " That..." "Is great!" "[laughter]" " Bigfoot bring meat!" " Where did you" " Good meat." "[laughs]" "Tender, tender." " [screams, retches]" " Bigfoot make waste." " Uh, make waste?" "[foghorn blares]" "Aah!" "Bigfoot!" "Donnie and mikey need to learn to blend in out here" "Forest stealth." "If you're gonna stay with us, at least you can help out." "So show them some stuff, okay?" 
"I have to go clean out the tub." "[breathing deeply through mask]" " Please don't grab my head again." " Blend in." " I can still see him." " Mama, looks like bigfoot got him a couple" "Of little green alien buddies." "The finger is okay with that." "[weapon clicks]" " Huh?" "[gasps] [both screaming]" " What are you" " Ugh, you're squishing me!" " Darn it!" "Lost 'em, but not for long." " Come on, leo, give me your best!" " You'll gain no ground." " Oh, man, bigfoot is amaze-balls" "With the forest-stealth stuff!" " Bigfoot scared." " Aw." " Finger bad!" " Right, fingers are bad." "Hey, donnie, want to play the winner?" " Evening, bigfoot." "What are you making?" " Food!" " Aah!" " Donnie!" " Um, you realize..." "That meat has fur?" " Mm-hmm." "Hmm?" "[cat screeching]" " [screaming]" " [screeching]" " [screaming]" " [screeching]" " [screaming] - [screeching] [meows weakly]" " Bigfoot..." "Need..." "Help." " Sure, what bigfoot wa" "I mean, what do you want?" " Makeover." "Huh?" " Okay, and a little off the top here." "[humming]" "Well?" "Uh..." "[laughs]" "So what do you think?" " Eat!" " Gah!" "Did you..." "Wow--whoa." "Uh, yeah, um, uh..." "Thank you." "Got to go, bye!" " [whimpers]" " Ugh." "This is so uncomfortable." "Bigfoot follows me around everywhere" "Like a love-struck puppy." " Now you know how April feels." " [whimpers] [distant laughter]" " So she's wearing makeup now," "And she keeps making soup for donnie." " Do you think she's his type?" " Maybe after a shave!" "[both laughing]" " [whimpering]" " Bigfoot, wait!" " [crying]" " What is wrong with you guys?" " We didn't mean to hurt her feelings." " Bigfoot!" "Wait up!" " [crying]" " Mama, your boy's got 'em right where he wants 'em." ""iffen you play your cards right," "Maybe you'll get your own alien-hunting reality show!"" "Ooh, just like cousin fatback!" "Ohhhhh, doggie!" " [crying]" "[whimpering]" " How many years the finger been chasing you?" "Well, guess what." "Mama says the chasing's over." "Bigfoot, you gonna make mama and the finger rich!" ""I'm so proud of you, the finger."" "Aw, shucks, mama..." " What the" "Huh?" "[thud]" " Puppies..." "Too many puppies..." "Aah!" " Donnie, I just had the worst dream about" " I'm not sympathetic right now, mikey." "The finger has been debating whether to pickle us" "Or stuff us." " Ooh, pickle!" "Pickles taste so good!" " [sighs]" " Ain't got no room for these two on the cart, mama." "Looks like the finger's gonna have to stuff 'em right here." " I've got an idea." "Follow my lead." "Hey, finger!" "Your mama looks like a raisin!" " "what'd he say about me, the finger?"" " Uh..." "Your mama's so wrinkly, she looks like" "One of those little dogs..." "With all the wrinkles." " A shar-pei?" " Your mama's" "Uh, your mama's a shrunken head!" " Shut up!" ""destroy them."" " Bad move, the finger." " The finger got skills!" " Wah!" " [groans]" "Ha!" "Forest stealth!" " [growling]" "Oh, yeah!" "Oh, yeah!" "Oh, yeah, brother!" "Oh, yeah!" "Ha!" " [gasps] [classical music]" "♪ ♪" "This is gonna hurt!" "[groans]" " You can't beat the finger!" "He's too strong for you!" " Hey, finger!" " How many more explosive bolts you got in that quiver?" " 42." "[groaning]" "If the finger's going down, he's taking bigfoot with him!" "Bigfoot" "Bigfoot's a lady?" "The finger can't shoot no lady!" "The finger's sorry, mama!" "He didn't know!" "He would never hurt no lady!" "[crying]" " There, there..." "It am be okay." "Huh?" "[heart beating rapidly]" " [crying]" " Bigfoot take care sad mans." 
" [grunting]" " Hey, April." " What's up, donnie?" " I, um..." "Well, I just wanted to let you know" "I won't be bothering you with music boxes anymore." "I get it now." "Donnie is to April as bigfoot was to donnie." "I'm just..." "A mutant." " You're not just a mutant, donnie." "You're my mutant." "Mwah." " I understand nothing."
Abstract Many of the studies that explore the fascination audiences have with puppets have focused largely on the relationship between the operator and the object and the illusion engendered through performance. Those that attend to the issue of humour, such as Dina and Joel Sherzer’s Humour and Comedy in Puppetry in 1987, tend to address generic comic components of specific puppet practices, and only minimally engage with the more fundamental concerns about how the object may be viewed humorously by audiences. This article intends to bridge this gap in scholarship by exploring the similarities between spectatorship and humour in relation to puppet practices. Drawing links between the incongruities inherent within puppet forms, particularly those revealed through the juxtaposition of object and human operator, and theories of humour, I argue that there is amusement to be found in seeing the inanimate animated, which is similar to the pleasure found in incongruous humour. While not all puppets are used for comic purposes, my argument suggests that the fundamental collaboration required for an audience to appreciate a puppet performance lends the form a particular comic specialism which may help explain why, historically, puppets appear to thrive in comic contexts.
Every year, 343 has blessed me with a pass to E3, letting me attend one of the biggest gaming conventions in the States. Every time around E3, announcements are made, franchises and new games are added to the Xbox/Windows 10 exclusives, and of course we get to catch up with the rest of our Halo family who attend. Major kudos to Bravo for pulling the strings to get tickets within a week of the expo starting. At E3, we learned some new things about the upcoming Halo Wars 2 DLC and were even fortunate enough to get a great interview with the Executive Producer of Halo Wars 2, Barry Feather, talking about the new Awakening the Nightmare trailer for the upcoming DLC.
One month after devastating earthquake Haitians mourn their dead Thousands of Haitians prayed, wept and danced among tent shelters in the capital's main square on Friday as President Rene Preval asked his people to 'dry their eyes' and rebuild a month after the catastrophic earthquake that killed more than 200,000 people. Haitians joined in a national day of mourning and prayer amid the rubble a month to the day after the magnitude 7 quake wrecked the capital, Port-au-Prince, and surrounding towns and cities, and left 1 million people living in the streets. In his first live, nationally broadcast speech to the impoverished Caribbean nation since the quake, Preval said Haitians' courage had sustained their government as it looks for ways to relieve the suffering of some 300,000 injured and those living in hundreds of spontaneous tent encampments. People gather in Port au Prince to commemorate the one month anniversary of the earthquake that devastated Haiti Earthquake survivors raise their arms as they pray in commemoration of the January 12 earthquake in downtown Port-au-Prince 'Haitians, the pain is too heavy for words to express. Let's dry our eyes to rebuild Haiti," Preval said at a ceremony held on a flower-decked platform at the University of Notre Dame's nursing school in the capital. 'Haitian people who are suffering, the courage and strength you showed in this misfortune are the sign that Haiti cannot perish. It is a sign that Haiti will not perish," said Preval, wearing a black armband of mourning over his white shirt. The ceremony marked a brief pause in the government's recovery effort from Haiti's worst natural disaster. The quake killed about 212,000 people, according to the government, and Haitian officials, along with international aid groups, are struggling to house and care for those living outdoors. Earthquake survivors pray in front of the Government Palace in downtown Port-au-Prince. 1 million people have been left living in the streets Three Haitian women pray in a street after the quake that killed 217,000 people Thousands took part in the prayers and dancing in front of the wreckage of the National Palace and in the Champs de Mars, the main downtown square, which after the quake became a sprawling city of shanties, tents and shelters make of rope and bedsheets. Little girls dressed in their Sunday finest were a stark contrast to the squalor of the camps, where a woman tossed a blanket over her shoulders and bathed from a bucket as people prayed and danced around her. Preval, who has made few public appearances since the quake, joined Prime Minister Jean-Max Bellerive and government ministers for a somber ceremony at the university beginning a six-day period of national mourning for the quake victims. Senate President Kely Bastien, who was pulled from the rubble of the Parliament building and had surgery for a serious foot injury, hobbled in on crutches. Preval recalled his own experience the day of the quake. A survivor holds a portrait of her dead relative as a crowd gathers in front of a destroyed cathedral Another survivor cries as she prays 'When I went out in the streets the night of Jan. 12, in Bel-Air I was stepping over bodies in the streets. In the nursing school, I heard students who were calling for help under the concrete,' he said. 'I went downtown on the main street, throughout the city, all I could see was bodies, people who were under the concrete,' he added. 
'My only answer to all the pain was and is to continue to look for relief, particularly abroad, to help ease the pain of those who are suffering,' he said. The leaders of the country's two main religions, Catholicism and voodoo -- Archbishop Joseph Lafontant, who took over after Archbishop Serge Miot died in the quake, and Max Beauvoir, Haiti's high priest of voodoo -- sat side by side. 'Never has a disaster stricken such a great number of Haitians at the same time,' Lafontant said. 'But as paradoxical as it could appear, today's prayer has turned us toward hope for life.' Preval also asked Haitians to pray for former U.S. President Bill Clinton, who left hospital in the United States on Friday after surgery to insert two stents for a blocked artery in his heart. Haiti's President Rene Preval speaks with U.S. Speaker of the House Nancy Pelosi in Port-au-Prince Thousands of Haitians attended the outdoor Mass Clinton, the United Nations special envoy to Haiti, was appointed along with former President George W. Bush by U.S. President Barack Obama to direct Haitian relief efforts. 'We are with his family in the same way he was with us through our misfortune,' Preval said. In Washington, the White House issued a statement saying the people of Haiti 'will continue to have a friend and partner in the United States of America.' 'Guided by the roadmap for cooperation and coordination developed by the government of Haiti, the United States will support our Haitian partners as they transition from emergency assistance to recovery and long-term reconstruction,' said a statement by Obama's press secretary Robert Gibbs. The Haitian government said the mourning would conclude Feb. 17 in a 'celebration of life' with a party in the Champs de Mars featuring artists and musicians.
/* * ----------------------------------------------------------------------- * Copyright © 2013-2015 Meno Hochschild, <http://www.menodata.de/> * ----------------------------------------------------------------------- * This file (GapResolver.java) is part of project Time4J. * * Time4J is free software: You can redistribute it and/or modify it * under the terms of the GNU Lesser General Public License as published * by the Free Software Foundation, either version 2.1 of the License, or * (at your option) any later version. * * Time4J is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License * along with Time4J. If not, see <http://www.gnu.org/licenses/>. * ----------------------------------------------------------------------- */ package net.time4j.tz; /** * <p>Represents the component of a transition strategy how to handle gaps * on the local timeline. </p> * * @author Meno Hochschild * @since 2.2 * @see OverlapResolver * @see TransitionStrategy */ /*[deutsch] * <p>Repr&auml;sentiert die Komponente einer &Uuml;bergangsstrategie, wie * L&uuml;cken auf dem lokalen Zeitstrahl behandelt werden. </p> * * @author Meno Hochschild * @since 2.2 * @see OverlapResolver * @see TransitionStrategy */ public enum GapResolver { //~ Statische Felder/Initialisierungen -------------------------------- /** * <p>Default strategy which moves an invalid local time by the length * of the gap into the future. </p> * * <p>Example for the switch to summer time in the timezone * &quot;Europe/Berlin&quot;: * {@code 2015-03-29T02:30+01:00 => 2015-03-29T03:30+02:00} </p> */ /*[deutsch] * <p>Standardstrategie, die eine ung&uuml;ltige lokale Zeit um die * L&auml;nge der L&uuml;cke in die Zukunft verschiebt. </p> * * <p>Beispiel f&uuml;r die Umschaltung auf Sommerzeit in der Zeitzone * &quot;Europe/Berlin&quot;: * {@code 2015-03-29T02:30+01:00 => 2015-03-29T03:30+02:00} </p> */ PUSH_FORWARD, /** * <p>Alternative strategy which moves an invalid local time forward * to the first valid time after transition. </p> * * <p>Example for the switch to summer time in the timezone * &quot;Europe/Berlin&quot;: * {@code 2015-03-29T02:30+01:00 => 2015-03-29T03:00+02:00} </p> */ /*[deutsch] * <p>Alternative Strategie, die eine ung&uuml;ltige lokale Zeit auf die * erste g&uuml;ltige lokale Zeit nach dem &Uuml;bergang setzt. </p> * * <p>Beispiel f&uuml;r die Umschaltung auf Sommerzeit in der Zeitzone * &quot;Europe/Berlin&quot;: * {@code 2015-03-29T02:30+01:00 => 2015-03-29T03:00+02:00} </p> */ NEXT_VALID_TIME, /** * <p>Strict strategy which rejects an invalid local time by throwing * an exception. </p> */ /*[deutsch] * <p>Strikte Strategie, die eine ung&uuml;ltige lokale Zeit mit einer * Ausnahme verwirft. </p> */ ABORT; //~ Methoden ---------------------------------------------------------- /** * <p>Yields a transition strategy as combination of given overlap resolver * and this instance. </p> * * @param overlapResolver strategy how to handle overlaps on the * local timeline * @return transition strategy for handling gaps and overlaps * @since 2.2 */ /*[deutsch] * <p>Liefert eine &Uuml;bergangsstrategie als Kombination der angegebenen * &Uuml;berlappungsstrategie und dieser Instanz. 
</p> * * @param overlapResolver strategy how to handle overlaps on the * local timeline * @return transition strategy for handling gaps and overlaps * @since 2.2 */ public TransitionStrategy and(OverlapResolver overlapResolver) { return TransitionResolver.of(this, overlapResolver); } }
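For orientation, here is a minimal usage sketch of the enum above. It is illustrative only: it assumes that the companion enum OverlapResolver exposes a constant named EARLIER_OFFSET (analogous to the constants defined in this file) and that Time4J is on the classpath; neither detail is defined in GapResolver.java itself.

import net.time4j.tz.GapResolver;
import net.time4j.tz.OverlapResolver;
import net.time4j.tz.TransitionStrategy;

public class GapResolverDemo {

    public static void main(String[] args) {
        // Combine a gap strategy with an overlap strategy to obtain a
        // complete TransitionStrategy, as done by and() above.
        TransitionStrategy lenient =
            GapResolver.NEXT_VALID_TIME.and(OverlapResolver.EARLIER_OFFSET);

        // ABORT rejects invalid local times inside a gap by throwing
        // an exception instead of shifting them forward.
        TransitionStrategy strict =
            GapResolver.ABORT.and(OverlapResolver.EARLIER_OFFSET);

        System.out.println(lenient);
        System.out.println(strict);
    }
}

The resulting TransitionStrategy can then be attached to a Timezone instance; how that attachment is done lies outside the scope of this file.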
Jægersborg Dyrehave Dyrehaven (Danish 'The Deer Park'), officially Jægersborg Dyrehave, is a forest park north of Copenhagen. It covers around . Dyrehaven is noted for its mixture of huge, ancient oak trees and large populations of red and fallow deer. In July 2015, it was one of the three forests included in the UNESCO World Heritage Site inscribed as Par force hunting landscape in North Zealand. All entrances to the park have a characteristic red gate; one of the most popular entrances is Klampenborg gate, close to Klampenborg station. All the entrance gates have an identical gate house attached to them, which serve as the residences of the forest wardens. Dyrehaven is maintained as a natural forest, with the emphasis on the natural development of the woods over commercial forestry. Old trees are felled only if they are a danger to the public. It has herds of about 2100 deer in total, with 300 Red Deer, 1700 Fallow Deer and 100 Sika Deer. Dyrehaven is also the venue for the Hermitage road race (Eremitageløbet) and the yearly Hubertus hunt (Hubertusjagten) which is held on the first Sunday in November. In former times it was home to the Fortunløbet race, later known as Ermelundsløbet, but this race was discontinued in 1960. History In 1669 Frederik III decided that the wood of "Boveskov" ("Beech wood") should be fenced in and wild deer from the surrounding areas driven into the newly created park. Boveskov was already well known as the former property of Valdemar the Victorious as it had been recorded in his census (the Liber Census Daniæ) of 1231. The forest lay in the westerly and southerly part of the present Dyrehaven and encompassed land used by the farmers from the village of Stokkerup, which lay to the north. Fencing work consisted of excavating a ditch, the earth from which was built up in a bank on the outside walls of the ditch, on the opposite side to the centre of the park. On the top of the bank posts were driven into the ground and fences installed. This made it more difficult for the deer to leap the fence, as the rise between the ditch and the bank effectively increased the level of the fence. The ditch and bank can still be seen for a long stretch in the south-easterly part of the current park. The work was never finished, as Frederik III died in 1670. The design, however, is still on record, and the area for the scheme worked out at around 3 square kilometres. When Frederik's son, Christian V, became king, he laid out new and more ambitious plans for Dyrehaven. During his education Christian V had spent time at the court of Louis XIV in France. Here he had seen another type of hunting practice, parforce (hunting with dogs), that he wished to adopt. This style of hunting required a greater area of land for its practice, so Christian V increased the boundaries to include the fields up to the village of Stokkerup (the area known today as Eremitagesletten), as well as taking in the land that today is Jægersborg Hegn. The additional enclosure increased the size of the park to . The inhabitants of Stokkerup, whose village pond still lies within Eremitagesletten area, were ordered to tear down their houses and make use of the materials to rebuild the farms in the area that had stood empty since the Northern Wars. They were compensated for this by having a period of three years during which they were exempt from taxation. Areas in Dyrehaven Eremitagesletten Eremitagesletten is an area in the north of the park. 
Originally this area was the fields of the village of Stokkerup, but was enclosed when Christian needed it for hunting with dogs. Evidence of this can be clearly seen from the roads which are laid out in the classic star form that was typical for areas used in this form of hunting. In the middle of Eremitagesletten is the Hermitage, built during the reign of Christian VI. Eremitagesletten is encircled by forest. From Hjortekær to the north and the east there is a row of chestnut trees that make up the boundaries of the plain. This row of trees marked the northern extent of Dyrehaven until 1913, when the boundaries were extended north of Mølleåen. Mølleåen This is an area north of Eremitagesletten near Rådvad along the banks of the Millstream. Fortunens Indelukke Fortunens Indelukke is an area in the west of Dyrehaven. It is fenced in and is the only part of the forest that the deer do not have access to. There are a few other areas also fenced in, but these are smaller and only temporary, to save the young trees. Ulvedalene Ulvedalene has Dyrehaven's hilliest terrain, which was created during the last ice age. Djævlebakken, a popular sledging run, is found in this part of the park. Ulvedalsteateret (Ulvedal theatre) gave performances for 39 years in Ulvedalene. The first time was in the summer of 1910 with Adam Gottlob Oehlenschläger's Hagbart and Signe. The idea originated from the actor Adam Poulsen and producer Henrik Cavling. The architect Jens Ferdinand Willumsen created spaces for about 4,000 sitting and 2,000 standing spectators, which made it possible to lower the ticket price to an accessible level. The theatre survived up to 1949, and after a break of almost 50 years, the tradition was revived by Birgitte Price with an arrangement of Johan Ludvig Heiberg's Elverhøj in 1996 in a production supported by the Royal Danish Theatre, Lyngby-Taarbæk Kommune and Kulturby'96. Since then there have been further performances, the latest, Røde Orm, in 2018. Von Langens Plantage This is the southernmost part of Dyrehaven and the most visited. Dyrehavsbakken Dyrehavsbakken (colloquially Bakken and literally in English "The Deer Park's Hill") is the world's oldest existing amusement park. Kirsten Piils Kilde Kirsten Piils Kilde (Kirsten Piil's Spring) was discovered in 1583 by Kirsten Piil, about whom little is known. Legend states that Kirsten was a pious woman, who, through her devotion, gave the spring curative powers, which made it a place of pilgrimage for the sick who would come to drink the water. Peter Lieps Hus Peter Lieps Hus (Peter Liep's House) is now a well-known restaurant. It is named after Dyrehaven's first sharpshooter, Peter Liep. The house was originally called Kildehuset (Spring house) and is thought to have been built towards the end of the 18th century. In the 1860s a two-storey extension was added that gave the house a clumsy appearance. Peter Liep took over the building in 1888. In September 1915 the house burned to the ground, but it was reconstructed by 1916. In 1928 the house burned down again. It was rebuilt to a different design, basically as it can be seen today. After some years a pavilion and toilets were added. Visitor numbers consistently rose (the house had already achieved a good reputation as a restaurant by the end of the 19th century). In 1952 a fire again broke out in the house.
The fire was extinguished before it did any major damage: a hole was burned in the thatched roof, but later the same day the extensions caught fire and burned down, and only the main farmhouse was able to be saved. The extensions were rebuilt in 1954 and a new pavilion added in 1960. All these buildings are today known under the collective name "Peter Lieps Hus", though the house is very different from the house Peter Liep lived in on the same spot. Fortunen Fortunen (The Fortune) is a former ranger station on the King's hunting road to Dyrehaven, named after the Roman goddess of luck Fortuna. It is now home to a hotel and restaurant. Annual events Eremitage race Day of the Kite Hubertus Hunt The Hubertus Hunt is a cross country horse race which takes place every year on the first Sunday in November, marking the end of the hunting season. First held in 1900, the event attracts about 160 riders and up to 40,000 spectators. The race always begins at Peter Liep's House and involves a break at the Hermitage Lodge. The race route is 13 km long and involves a total of 35 obstacles. The winner receives the Hubertus chain. Open air theatre The Royal Danish Theatre produces an annual theatre production at Ulvedalene in Jægersborg Dyrehave. Sources This article incorporates text translated from the corresponding Danish Wikipedia article as of 29 March 2007. The Parforce Hunting landscape in North Zealand UNESCO In Danish Jægersborg Dyrehave, Skov- og Naturstyrelsen, Vandreture i Statsskovene nr. 22 Dyrehaven, Mattssons Rideklub Lyngby-Bogen 1989 / red. Jeppe Tønsberg. Udgivet af Historisk-Topografisk Selskab for Lyngby-Taarbæk Kommune "Dyrehaven" af Torben Christiansen og Peter Lassen, Politikens Forlag, 2005. Ravnene (skulpturer fra scenen ved Ulvedalene) Meulengracht-Madsen, Jens: "Dyrehavens gamle ege, deres alder og vækst", Naturens Verden, nr. 11-12/1999, vol. 82, side 2-21. External links Category:Parks and open spaces in Gentofte Municipality Category:Parks and open spaces in Lyngby-Taarbæk Municipality Category:Forests of Greater Copenhagen Category:Urban forests in Denmark Category:Tourist attractions in the Capital Region of Denmark Category:Danish Culture Canon Category:World Heritage Sites in Denmark Category:Articles containing video clips
Hopefully Monday we'll have some stuff. Crowe on Leno and whatever bombshell elmayimbe is dropping. __________________ Wonder Woman will save saved the DCEU 2017! "I think that [Wonder Woman and Batman] are very Alpha-type." - Gal Gadot "I think [Wonder Woman] is the True North superhero of this posse." - Patty Jenkins "Wonder Woman is the Best Fighter in the DC Universe" - Geoff Johns
Q: HTML - Script - hide the "/index.html" file when going to domain.com/example For example: on Apple's website, when going to apple.com/support I end up at apple.com/support/, even though the page being served is really apple.com/support/index.html; the index.html is not shown. I want this on my site: www.jonathangurebo.se. For example, when going to jonathangurebo.se/contact I end up at jonathangurebo.se/contact/index.html, and I don't want to show the index.html. How should I solve this? Maybe add some line to my .htaccess file? Because when going to jonathangurebo.se/mlkhgljfehgljfebgfgb it will come to my error 404 page and show the same link in my address bar, and not the real link, in this case jonathangurebo.se/error/404.html A: The only reason your website displays the index.html part is because you added it to your HTML page. Rather than making links like so: <a href="http://www.jonathangurebo.se/apps/index.html">Apps</a> You should make links like: <a href="http://www.jonathangurebo.se/apps/">Apps</a> And everything will work just like you expected! :)
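For the .htaccess route, a minimal sketch could look like the following. It assumes an Apache server with mod_rewrite enabled (not confirmed in the question) and reuses the directory and error-page names mentioned above:

# Serve index.html whenever a bare directory URL such as /contact/ is requested
DirectoryIndex index.html

# If someone requests .../index.html explicitly, redirect to the clean URL
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/(.+/)?index\.html[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]

# Custom 404 page; note that Apache keeps the typed URL in the address bar,
# which matches the behaviour described for jonathangurebo.se
ErrorDocument 404 /error/404.html

With DirectoryIndex in place you only need to link to the folder (as in the answer above) and the server quietly serves index.html behind the scenes.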
Vacuum plasma generators can be an integral component of an alternating voltage gas discharge excitation arrangement that couples to a gas discharge device for treating a workpiece. Vacuum plasma generators can have different power classifications and different output signal forms. In vacuum glass coating, for example, medium frequency (MF) vacuum plasma generators are used that have an MF output signal with power levels of between 30 and 300 kW. The MF output signal is mostly a sinusoidal signal having frequencies of between 10 kHz and 200 kHz. The output voltages may be from several 100 V to above 1000 V. In order to ignite plasma in the gas discharge device, the output voltages of vacuum plasma generators are often much higher than during normal operation. In the plasma, brief and also longer-lasting spark-overs, so-called arcs, may occur, and such arcs are undesirable. An arc is generally identified by a break-up or a drop in the voltage at the vacuum plasma generator and an increase in the current at the vacuum plasma generator, for example, at the output of the vacuum plasma generator or at another location in the vacuum plasma generator. If an arc of this type is identified, it is extinguished or prevented from reaching a maximum level. For example, in DE 41 27 505 C2, the output of the alternating current generator is caused to short circuit when an arc is identified, which is a suitable method for low-power MF generators. The higher the power levels become, the higher the voltages and currents also become and a switch for short circuiting would have to be able to short circuit such higher currents. Components of such a switch are usually larger and more expensive to fabricate. In some cases, components of a short circuiting switch would be connected in series and/or in parallel in order to be able to switch off the high voltages and currents. When arcs occur, the MF generator should supply as little residual energy as possible into the gas discharge device. For example, in MF generators for Flat Panel Display (FPD) production, an arc may lead to pixel errors, and a single pixel error can have a significant influence on the quality of a large surface-area (for example, for 19″ thin film transistor monitors) and consequently can cause a comparatively high level of damage. In some designs, when an arc is identified, control to the vacuum plasma generator or to parts of the vacuum plasma generator is switched off so that no further energy flows into an output oscillating circuit of the generator. This procedure may not be sufficient for FPD production since there can still be too much residual energy in the output oscillating circuit of the generator, in the inductors of an output transformer that may be provided in the generator, and in the supply lines to the generator.
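As a rough illustration of the detection criterion described above (a break-up or drop in the output voltage combined with a rise in the output current), a generator controller might evaluate a check of the following kind. This is a generic sketch, not code from any of the cited patents, and the threshold names and values are invented for the example:

#include <stdbool.h>

/* Illustrative thresholds; a real generator derives them from its operating point. */
#define ARC_VOLTAGE_DROP_V  200.0   /* tolerated drop below the nominal output voltage */
#define ARC_CURRENT_RISE_A   20.0   /* tolerated rise above the nominal output current */

/* Returns true if the measured output looks like an arc: the output voltage
   has broken down while the output current has risen at the same time. */
static bool arc_detected(double v_nominal, double v_measured,
                         double i_nominal, double i_measured)
{
    bool voltage_dropped = (v_nominal - v_measured) > ARC_VOLTAGE_DROP_V;
    bool current_rose    = (i_measured - i_nominal) > ARC_CURRENT_RISE_A;
    return voltage_dropped && current_rose;
}

On detection, the reaction discussed in the text follows: the drive to the output stage is switched off or the output is short-circuited, so that as little residual energy as possible is delivered into the gas discharge device.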
Increased membrane/nuclear translocation and phosphorylation of p90 KD ribosomal S6 kinase in the brain of hypoxic preconditioned mice. Our previous studies have demonstrated that hypoxic precondition (HPC) increased membrane translocation of protein kinase C isoforms and decreased phosphorylation of extracellular signal-regulated kinase 1/2 (ERK1/2) in the brain of mice. The goal of this study was to determine the involvement of p90 KD ribosomal S6 kinase (RSK) in cerebral HPC of mice. Using Western-blot analysis, we found that the levels of membrane/nuclear translocation, but not protein expression of RSK increased significantly in the frontal cortex and hippocampus of HPC mice. In addition, we found that the phosphorylation levels of RSK at the Ser227 site (a PDK1 phosphorylation site), but not at the Thr359/Ser363 sites (ERK1/2 phosphorylated sites) increased significantly in the brain of HPC mice. Similar results were confirmed by an immunostaining study of total RSK and phospho-Ser227 RSK. To further define the cellular populations to express phospho-Ser227 RSK, we found that the expression of phospho-Ser227 RSK co-localized with neurogranin, a neuron-specific marker, in cortex and hippocampus of HPC mice by using double-labeled immunofluorescent staining method. These results suggest that increased RSK membrane/nuclear translocation and PDK1 mediated neuron-specific phosphorylation of RSK at Ser227 might be involved in the development of cerebral HPC of mice.
Documentary theatre is still very much in the ascendant, with recent shows such as Stockwell, The Power of Yes, Katrine and The Girlfriend Experience all drawing on verbatim techniques. But at the Dublin theatre festival over the weekend, I saw Radio Muezzin, an astonishingly effective show from Stefan Kaegi and Rimini Protokoll. As in the work of Manchester-based company Quarantine (which, in shows such as White Trash and Susan and Darren, has produced an extraordinary body of work that allows ordinary people to present themselves on stage as they want to be seen) or the work of the superb Junction 25 theatre, Radio Muezzin puts real people on stage, not actors. In this instance, they are all Egyptian muezzins, the men who daily call Cairo's faithful to prayer. Because of advances in technology, we are told, the Egyptian authorities are planning to centralise the azan and broadcast it live over the radio. Just 30 specially chosen muezzins will take it in turns to make the call to prayer. What will this mean for the muezzins who are not among the chosen? Could a centralised live broadcast be the same as the current situation, where several voices join in one swelling call to prayer, creating a soundscape across the city? Radio Muezzin goes beyond the question of what 'live' means and addresses the question of representation itself. Rimini Protokoll's "experts in daily life" – in this instance, the muezzins who engage with the audience so directly and unaffectedly – are apparently just being themselves. But is it possible to be yourself on stage? Or can you only create a representation of yourself? What is the difference between acting and performing; performing and being? Which parts of myself am I prepared to show on stage? Not only does Radio Muezzin give you a direct conduit into other people's lives and another culture, this low-key piece also grapples with the very form of theatre itself – in particular, the issues of co-authorship, ownership and exploitation that arise in documentary theatre. The use of actors in most verbatim-style work allows directors to shape the material in a way that suits their dramatic purpose. But in the case of Radio Muezzin – just as in one of Quarantine's shows – the authors of the piece are present on stage: they present themselves. There is no intermediary. And yet, are they presenting the truth? Of course they are, although quite possibly only one version of many complex and interweaving truths. One of the interesting things about Radio Muezzin is its recognition that truth is a fabric that's full of holes: at one point we discover that one of the muezzins recently quit the show because of tensions between himself and the other performers. He was one of the chosen 30, the others were not. But was this the reason for his departure, or did being in the show change the people involved and affect their relationships? We will never know. The strange and rather exhilarating thing about Radio Muezzin is that the more it tells you, the more aware you become of how little you know. It raises more questions than it answers. Foremost among them is this: why is it that this piece of theatre feels far more real than any TV documentary?
Formation of cage-like hollow spherical silica via a mesoporous structure by calcination of lysozyme-silica hybrid particles. Calcination of lysozyme-silica hybrid hollow particles gives novel cage-like hollow spherical silicas with differently patterned through-holes on their shell structure.
Anything look different? The old default is still available here, if you’re curious. Text is finally dark on a light background. I was even having trouble reading the white-on-blue text of the last design, and that was one of the main driving forces behind this one. Let this be a word of advice: if you’re designing for large areas of text, always always always use a light background and dark type. Always. You might notice a horizontal scrollbar. I’ve scrapped any semblance of catering to 800x600, so this is now a ‘best viewed at 1024’ site. I think designing for 800x600 is pointless anyway, since most people run their browser windowed rather than full-screen even at that small size. Since I’m not selling anything and I don’t have quotas to hit, I can afford to stop catering to the absolute lowest common denominator and take a step or two up the ladder. I’ve also abandoned CSS positioning for now and gone back to tables. The main reason for it isn’t a lack of faith in the former, more a sense of familiarity with the latter. This was a really complex design to build, and I made the decision from the start to just skip the headache. Mezzoblue now has a logo of sorts. It’s more effective as displayed in the bottom right corner, but I’m still not entirely sure about this particular pattern. I’ve been playing around with a variant which may find its way up here. “evil octopus” is what I’m calling the idea, if that helps any. Regrettably, I’ve had to change from .html to .asp extensions on archive files. Any links to existing archives are now broken, but this shouldn’t be a huge problem because I haven’t seen any referrals to them yet anyway. Anyway, there isn’t really any new content yet, but I think the organization of existing content is going to be far easier to manage. I’m also using Blue Spark for smaller content updates, so it will most likely start improving as I work out the kinks. You are reading “Redesign Soft Launch”, an entry posted on 9 February, 2003, to the Pedestrian Street collection.
---
abstract: 'In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm applied in the procedures of simulating the processes of gamma conversion and bremsstrahlung, update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations such as, for example, encountered in the vicinity of the sun or neutron stars.'
address:
- 'H. Niewodniczański Institute of Nuclear Physics, Polish Academy of Sciences, ul. Radzikowskiego 152, 31-342 Kraków, Poland'
- 'Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany'
author:
- 'P. Homola'
- 'R. Engel'
- 'A. Pysz'
- 'H. Wilczyński'
title: 'Simulation of Ultra-High Energy Photon Propagation with PRESHOWER 2.0'
---

ultra-high energy cosmic rays, extensive air showers, geomagnetic cascading, gamma conversion, PRESHOWER

Program Summary
===============

Program title: PRESHOWER 2.0

Catalog identifier: ADWG\_v2\_0

Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG\_v2\_0.html

Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland

Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html

Programming language: C, FORTRAN 77

Computer(s) for which the program has been designed: Intel-Pentium based PC

Operating system(s) for which the program has been designed: Linux or Unix

RAM required to execute with typical data: $<100$ kB

CPC Library Classification: 1.1.

External routines/libraries used: IGRF [@igrf-paper; @tsygan], DBSKA [@cernbess], ran2 [@numrec]

Catalog identifier of previous version: ADWG\_v1\_0

Journal Reference of previous version: Computer Physics Communications 173 (2005) 71-90

Does the new version supercede the previous version?: yes

Nature of problem: Simulation of a cascade of particles initiated by UHE photon in magnetic field.

Solution method: The primary photon is tracked until its conversion into an $e^+e^-$ pair. If conversion occurs each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons).

Reasons for the new version: 1) Slow and outdated algorithm in the old version (a significant speed up is possible); 2) Extension of the program to allow simulations also for extraterrestrial magnetic field configurations (e.g. neutron stars) and very long path lengths.

Summary of revisions: A veto algorithm was introduced in the gamma conversion and bremsstrahlung tracking procedures. The length of the tracking step is now variable along the track and depends on the probability of the process expected to occur. The new algorithm reduces significantly the number of tracking steps and speeds up the execution of the program. The geomagnetic field model has been updated to IGRF-11, allowing for interpolations up to the year 2015. Numerical Recipes procedures to calculate modified Bessel functions have been replaced with an open source CERN routine DBSKA. One minor bug has been fixed.

Restrictions: Gamma conversion into particles other than an electron pair is not considered. Spatial structure of the cascade is neglected.

Running time: 100 preshower events with primary energy $10^{20}$ eV require a 2.66 GHz CPU time of about 200 sec.; at the energy of $10^{21}$ eV, 600 sec.

Introduction ============ Identifying and understanding the sources of cosmic rays with energies up to $10^{20}$eV is one of the most important questions in astroparticle physics (see, for example, [@Bluemer:2009zf; @LetessierSelvon:2011dy; @Kotera:2011cp]). Knowing the fraction of photons in the flux of ultra-high energy cosmic rays is of particular importance as photons are unique messengers of particular source processes (acceleration vs. decay of super-heavy particles or other objects). They are also produced in interactions of charged cosmic ray nuclei of the highest energies with cosmic microwave background radiation, known as the Greisen-Zatsepin-Kuzmin (GZK) effect. If the energy of the cosmic ray particles exceeds the GZK energy threshold at the source, ultra-high energy photons are produced due to well-understood hadronic interactions with microwave photons and can be detected at Earth as a unique propagation signature.
So far only upper limits to the photon flux at ultra-high energy exist, which have led to severe constraints on models for UHECR sources. However, depending on the primary cosmic ray composition, the sensitivity of the latest generation of cosmic ray detectors, i.e. the Pierre Auger Observatory [@auger] and the Telescope Array [@Tokuno:2010zz], should allow detection of GZK photons for the first time. The simulation of the propagation of photons before they reach the Earth’s atmosphere is important because of the preshower effect [@preshw-mcbreen] that may occur when a photon traverses a region where the geomagnetic field component transverse to the photon trajectory is particularly strong. As described e.g. in Refs. [@cpc1; @phot-rev], high energy photons in the presence of a magnetic field may convert into e$^{+}$e$^{-}$ pairs and the newly created leptons emit bremsstrahlung photons, which again may convert into e$^{+}$e$^{-}$ if their energies are high enough. As a result of these interactions, instead of a single high energy photon, a shower of particles of lower energies, the so-called preshower, reaches the atmosphere. The occurrence of the preshower effect has a large impact on the subsequent extensive air shower development and changes the predicted shower observables. In 2005 the program PRESHOWER [@cpc1] for simulating the showering of photons in the Earth’s magnetic field was released. It was shown that this initial version of the Monte Carlo code is in good general agreement with previous studies [@preshw-mcbreen; @karakula; @stanev; @billoir; @bedn; @vankov1; @vankov2]. In this paper we describe the changes of the PRESHOWER code relative to the initial version 1.0 [@cpc1]. The main feature of the new release is a much faster algorithm for calculating the distance at which a preshower interaction (gamma conversion or bremsstrahlung) occurs. In version 1.0, the calculations were done in constant steps along the particle trajectory and the step size was optimized for all possible trajectories. The constant step practically disabled studying the preshower effect along paths longer than several tens of thousands kilometers. Now, in version 2.0, the distance to the next interaction point is computed with an efficient veto algorithm, decreasing significantly the number of computing steps. The new method, being independent of the trajectory length, allows computations for arbitrarily long photon paths, e.g. simulations of preshower creation in the vicinity of a neutron star or active galactic nucleus. The new algorithm is described in detail in Section \[sec1\] of this article. Other important changes, i.e. the update of the geomagnetic field model and a code correction in version 1.0 are discussed in Section \[sec2\]. The results of testing PRESHOWER 2.0 are presented in Section \[sec3\] and conclusions are given in Section \[sec4\]. Following Ref. [@cpc1], all the results presented in the following are obtained for the magnetic conditions of the Pierre Auger Observatory in Malargüe, Argentina (35.2$^\circ$S, 69.2$^\circ$W). The shower trajectories are given in the local frame where the azimuth increases in the counter-clockwise direction and $\phi=0^\circ$ refers to a shower coming from the geographical North. The new sampling algorithm {#sec1} ========================== In PRESHOWER 1.0 the effect of precascading is simulated following the particle trajectories with a fixed step size. 
In each step the probability of conversion into e$^{+}$e$^{-}$ is calculated for photons and the probability of emitting a bremsstrahlung photon is computed for electrons. The step size has to be optimized for all possible trajectories and magnetic field configurations encountered along the particle trajectories. It has been found that a step size of $10$km works well until the primary photon conversion; the emission of bremsstrahlung photons as well as conversions of secondary photons are then simulated in steps of $1$km. In this algorithm a typical simulation run consists of several thousand steps. The number of simulation steps and hence the computing time can be significantly reduced by using a veto algorithm. The algorithm used in the following is commonly applied in physical situations where the probability of occurrence of a certain process varies within a given spatial or temporal interval (see, e.g. Ref. [@veto]). The location or time of occurrence of the physical process studied is found in a few approximating jumps. An example of a process that can be treated with the algorithm is a radioactive decay considered in a certain interval of time. Photon conversion and bremsstrahlung probabilities to be found along spatial trajectories can also be computed with this veto algorithm. General description {#algorithm} ------------------- The theory behind the veto algorithm is based on a process of discrete events described by $$\frac{dN}{dt} = - f(t)\, N(t).$$ Then the probability $dP_A$ of occurrence of process $A$ in the time window $t \dots t +dt$ is given by $$d P_A = - \frac{1}{N(t)}\, dN = f(t)\, dt\ .$$ Together with the probability of not having an occurrence of process $A$ in the time from $t_0$ to $t$ $$P_{{\rm no-}A} = \frac{N(t)}{N(t_0)}$$ one obtains for the probability $dP$ for having an occurrence of $A$ in the time window $t \dots t +dt$, provided that this process did not occur at an earlier time $t^\prime$ with $t_0 < t'<t$ $$\label{eq1} dP = P_{{\rm no-}A}\, d P_A = f(t)\, \frac{N(t)}{N(t_0)} \,dt = f(t)\, \exp\left\{-\int^t_{t_0} f(t')\,dt'\right\}\, dt\ .$$ If an analytic solution can be found for the integral of $f(t)$ $$F(t) = \int_{t_0}^t f(t^\prime)\, dt^\prime$$ one can sample the time $t$ of the next occurrence of $A$ after the previous occurrence at time $t_0$ using the inversion method: equating the no-occurrence probability with a random number $\xi$ uniformly distributed in $(0,1]$ gives $$\label{eq4} \exp\{-F(t)\} = \xi, \hspace*{2cm} t=F^{-1}(-\ln \xi). \label{eq5}$$ If $F(t)$ cannot be found, or its inverse cannot be computed sufficiently easily, one can use a function $g(t)$ such that $\forall t \geq 0: g(t) \geq f(t)$ and apply the following procedure 1. set the initial conditions: $i=0$, $t_0=0$; 2. $i=i+1$; 3. get a random number $\xi_i\in(0,1)$; 4. compute $t_i=G^{-1}(G(t_{i-1})-\ln \xi_i)$, $t_i>t_{i-1}$; 5. get another random number $\xi_i'\in(0,1)$; 6. if $\xi_i'\leq f(t_i)/g(t_i)$ then $t_i$ is the wanted result, i.e. the moment when process $A$ occurred, otherwise one has to go back to step 2 or, if the end of the interval is reached, end the procedure without occurrence of process $A$. The algorithm described above was mathematically proven to reproduce exactly the expected distributions [@veto-proof]. Implementation in PRESHOWER 2.0 {#veto_presh} ------------------------------- Following the general scheme described above, the application of the veto algorithm to simulations of the preshower effect is straightforward.
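As a concrete illustration of steps 1-6 before the preshower-specific quantities are introduced, consider the special case of a constant majorant $g(t)=g_0$, for which $G(t)=g_0 t$ and step 4 reduces to $t_i=t_{i-1}-\ln\xi_i/g_0$. A minimal stand-alone C sketch of this case is given below; it is illustrative only and not an excerpt from `preshw.c`, and the function `true_rate` is merely a placeholder for the actual process probability per unit interval, $f(t)$:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder for f(t): in PRESHOWER this role is played by the gamma
   conversion or bremsstrahlung probability per unit path length.
   It must not exceed the constant majorant g0 used below. */
static double true_rate(double t)
{
    return 0.10 + 0.05 * sin(t);
}

/* Uniform random number in (0,1); the program itself uses ran2. */
static double uniform01(void)
{
    return (rand() + 1.0) / ((double) RAND_MAX + 2.0);
}

/* Veto (rejection) sampling of the occurrence point of a process with
   rate f(t) <= g0 on [t_start, t_end]; returns t_end if nothing occurs. */
static double veto_sample(double t_start, double t_end, double g0)
{
    double t = t_start;
    for (;;) {
        t -= log(uniform01()) / g0;           /* step 4 with G(t) = g0*t */
        if (t > t_end)
            return t_end;                     /* end of interval reached */
        if (uniform01() <= true_rate(t) / g0)
            return t;                         /* step 6: point accepted  */
    }
}

int main(void)
{
    double t = veto_sample(0.0, 100.0, 0.16);
    printf("sampled occurrence (or end of interval): %f\n", t);
    return 0;
}

The efficiency of the loop is governed entirely by how tight the majorant is, which is why the program determines the bounds $p^{max}_{conv}$ and $p^{max}_{brem}$ separately for each trajectory, as described below.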
Instead of the time variable $t$, the distance $r$ along the preshower trajectory is used. We consider two processes in parallel over an interval starting at $r_{start}$ and ending at $r_{end}$, namely gamma conversion and bremsstrahlung of electrons. Following the physics notation introduced in Ref. [@cpc1] (see also the Appendices A and B for all the required physics formulas and symbols) the probability functions are defined as $$p_{conv}(r) \equiv \alpha(\chi(r)) \label{pconv}$$ (see Eqs. \[npairs\]-\[pconv2\]) for gamma conversion and $$p_{brem}(r) \equiv \int^E_0 I(B_\bot(r),E,h\nu)\frac{d(h\nu)}{h\nu} \label{pbrem}$$ (see Eqs. \[daug\]-\[bremprob\]) for magnetic bremsstrahlung. The function $f(t)$ is then replaced by $p_{conv}(r)$ or $p_{brem}(r)$, depending on the process to be simulated. Since finding the antiderivatives of $p_{conv}(r)$ or $p_{brem}(r)$ is not straightforward, simple functions limiting $p_{conv}(r)$ and $p_{brem}(r)$ are used. We define $$\begin{array}{c} g_{conv}(r) \equiv p^{max}_{conv}=const,\: \forall r \in (r_{start},r_{end}): p^{max}_{conv} \geq p_{conv}(r),\\ g_{brem}(r) \equiv p^{max}_{brem}=const,\: \forall r \in (r_{start},r_{end}): p^{max}_{brem} \geq p_{brem}(r), \end{array}$$ which replace $g(t)$ and for which the antiderivatives are $$\begin{array}{c} G_{conv}(r) = p^{max}_{conv} \cdot r,\\ G_{brem}(r) = p^{max}_{brem} \cdot r. \end{array}$$ With these substitutions the algorithm of Sec. \[algorithm\] is applied in PRESHOWER 2.0 for the two interaction processes. The determination of $p^{max}_{conv/brem}$ is crucial for computing time optimization. Too large $p^{max}_{conv/brem}$ increases the total number of steps to be executed in the procedure. Determination of $p^{max}_{conv/brem}$ -------------------------------------- The functions $p_{conv/brem}(r)$ depend on $B_\bot(r)$, which is computed with a numerical model. Hence finding the absolute maxima $p^{max}_{conv}$ and $p^{max}_{brem}$ is done numerically. Moreover, through the dependence on $B_\bot(r)$, both $p_{conv}(r)$ and $p_{brem}(r)$ depend on the primary arrival direction and the geographical location of the observatory. As can be seen in Figs. \[alpha\_conv\] and \[alpha\_brem\], the values of $p^{max}_{conv}$ and $p^{max}_{brem}$ may be significantly different for various arrival directions. ![ Examples of $p_{conv}$ functions along different trajectories at the location of the Pierre Auger Observatory in Malargüe (Argentina). A minimum value for one of the curves is related to the small value of $B_\bot$ for this specific arrival direction and altitude. See text for further details.[]{data-label="alpha_conv"}](alpha_conv.eps){width="100.00000%"} ![ Examples of $p_{brem}$ functions along different trajectories at the location of the Pierre Auger Observatory in Malargüe (Argentina). A minimum value for one of the curves is related to the small value of $B_\bot$ for this specific arrival direction and altitude. See text for further details.[]{data-label="alpha_brem"}](alpha_brem.eps){width="100.00000%"} This indicates that the computation of $p^{max}_{conv}$ and $p^{max}_{brem}$ should be performed for each trajectory separately, otherwise one would have to apply upper limits of these values which would be universal but excessively large for most directions. Accepting the excessive values of $p^{max}_{conv/brem}$ would increase enormously the number of steps in the veto algorithm and might result in an unacceptable increase of the computing time. 
Typically, the functions $p_{conv}(r)$ and $p_{brem}(r)$ reach their global maximum at the top of the atmosphere, i.e. at the end of preshower simulations, assumed here to be at the altitude of $112$km. This is the point closest to the Earth’s surface and $B_\bot(r)$ typically reaches the maximum value. However for certain classes of trajectories $B_\bot(r)$ might start to decrease with approaching the geomagnetic field source when the trajectory direction approaches a tangent to the local field lines. If this decrease happens to be close to the Earth surface, preshower particles are exposed to the maximum $B_\bot(r)$ somewhere before reaching the atmosphere. Examples of $p^{max}_{conv}$ and $p^{max}_{brem}$ with local extrema are plotted in Figs. \[alpha\_conv\] and \[alpha\_brem\]. The positions of the minima of $p_{conv/brem}(r)$ are closely correlated with the minima of $B_\bot(r)$. In case of $p_{conv}(r)$, it has been checked that its global maximum is well reproduced by computing the function value along the trajectory in a simple loop with steps of 1000 km. This procedure is performed for the primary photon energy and the trajectory of interest. It has been checked that $p_{conv}(r)$ decreases with energy, so $p^{max}_{conv}$ found for the primary photon energy will work also for secondary photons of lower energies. In case of electrons, $p_{brem}(r)$ may increase with decreasing electron energy, so one has to compute $p^{max}_{brem}$ for energies within the entire energy range of the simulated particles. Here $p^{max}_{brem}$ is computed in two loops. The external loop along the trajectory is done in steps of 1000 km and the internal loop over energies decreases the energy by one decade in each step. The steps in both procedures can be adjusted by the user if necessary. There is also an alert in the program that gets triggered when the actual values of $p_{conv/brem}(r)$ happen to exceed $p^{max}_{conv/brem}$. The above method of finding the absolute maxima of $p_{conv/brem}(r)$ is fast and efficient. However, it is optimized only for specific simulation conditions: preshowering in the geomagnetic field. In other environments, involving more irregular shapes of $B(r)$, one has to reconsider the procedure of finding the absolute maxima of $p_{conv/brem}(r)$. Other modifications and corrections {#sec2} =================================== Other modifications and changes applied in the new release of the PRESHOWER program are briefly listed below. 1. The IGRF geomagnetic field model has been updated to the year 2010 and the most recent IGRF-11 coefficients have been applied (Ref. [@igrf11]). In the updated model, the highest order of spherical harmonics has been increased from 10 to 13. The geomagnetic field can be extrapolated up to the year 2015 with the new model. The differences between the field strength and direction in these two models are not larger than 0.001%. 2. Procedures to calculate modified Bessel functions have been replaced with an open source CERN routine DBSKA. 3. A minor problem has been found in the auxiliary function `kappa(x)` used for calculation of bremsstrahlung probability. The interpolation performed in this function failed for the rare case of $x=10.0$. This happened because of a faulty definition of the last interval where the interpolation was done. As a result of this bug the input value $x=10.0$ was excluded from the computations. This bug has been fixed in the new release of the program. 4. 
Since for some cases the number of preshower particles can be very large, the size of the array `part_out[50000][8]`, which stores the output particle data, has been increased from 50000 to 100000 entries. 5. The code of the program was reorganized and more clearly structured. The main change here was moving the auxiliary functions and routines to a separate file. A list of the new and modified files with basic explanations can be found in the Appendix C. Validation of the new version {#sec3} ============================= The new release has been intensively tested. Below we show some examples to illustrate the performance. In Fig. \[maps\] a comparison of conversion probability obtained with PRESHOWER 1.0 and PRESHOWER 2.0 is presented for different arrival directions and a primary energy of $7\times 10^{19}$eV. ![ Total probability of $\gamma$ conversion for the primary energy of $7\times 10^{19}$eV for different arrival directions as computed by PRESHOWER 1.0 (lines) and PRESHOWER 2.0 (points). PRESHOWER 1.0 values were obtained by numerically integrating the conversion probability in the loop over trajectory. The points represent the fractions of events with gamma conversion simulated by PRESHOWER 2.0 with the new veto algorithm. Each fraction is the average for 10000 primary photons. Computations have been done for magnetic conditions at the Pierre Auger Observatory in Argentina. The azimuth $0^\circ$ refers to showers arriving from the geographic North.[]{data-label="maps"}](maps2.eps){width="100.00000%"} The lines represent conversion probabilities obtained by numerical integrations of the expression (\[pconv2\]) along trajectories and with steps as given in PRESHOWER 1.0. The points are plotted to show fractions of events with gamma conversion simulated by PRESHOWER 2.0. Each fraction was calculated after 10,000 simulation runs. Simulations of gamma conversion probabilities for other primary energies have also been checked and in all cases an excellent agreement between the results of the two PRESHOWER versions has been found. A cross-check of the procedures responsible for simulation of bremsstrahlung is shown in Fig. \[20strong\_profiles\], in which the energy distributions of secondary particles for a primary photon of $10^{20}$eV and an arrival direction along which the transverse component of the geomagnetic field is particularly strong ("strong field direction") are compared. ![Energy distribution of photons (top left) and electrons (bottom left) in 500 preshowers initiated by $10^{20}$ eV photons arriving at the Pierre Auger Observatory in Argentina from the strong field direction. The spectra weighted by energy are plotted to the right. The dashed histograms were obtained with PRESHOWER 1.0 and the solid ones with PRESHOWER 2.0.[]{data-label="20strong_profiles"}](fig6-2011.eps){width="100.00000%"} Plotted are the summed distributions of energies of secondary photons and electrons together with the relevant histograms weighted by the energies. The summations are done for 500 simulation runs. The results obtained with the two program versions are in very good agreement. Further tests for the same set of simulations are shown in Figs. \[cor20nfpp\] and \[cor20efpp\]. ![Number of particles in the preshower for different altitudes of the first $\gamma$ conversion simulated with PRESHOWER 1.0 and PRESHOWER 2.0. Plotted are the preshowers initiated by $10^{20}$eV photons arriving from the strong field direction.
The two points somewhat higher than those of the general trend are cases where one of the bremsstrahlung photons again converted in the magnetic field to produce an electron-positron pair which emitted the additional photons. See also Fig. \[cor20efpp\].[]{data-label="cor20nfpp"}](fig8-preold-prenew.eps){width="100.00000%"} ![Energy carried by preshower electrons at the top of the atmosphere vs. the altitude of the first $\gamma$ conversion for a primary photon energy of $10^{20}$ eV in the strong field direction. The two points in excess of the general trend are two rare cases where the first bremsstrahlung photon converted again into an electron-positron pair which increased the total energy carried by leptons. See also Fig. \[cor20nfpp\].[]{data-label="cor20efpp"}](fig9-preold-prenew.eps){width="100.00000%"} These are the number of preshower particles and the total energy carried by the preshower electrons. Both observables are calculated at the top of the atmosphere and both are plotted versus the altitude of primary photon conversion. In both figures a comparison is made between the results obtained with PRESHOWER 1.0 and PRESHOWER 2.0. Again, the agreement between the results of the two PRESHOWER versions is very good. One of the main aims of the new release of PRESHOWER was to reduce the computing time. The results of the CPU time comparison are summarized in Table \[table-times\].

  $E_0$ \[eV\]         direction                             fraction of converted events   time old \[sec.\]   time new \[sec.\]
  -------------------- ------------------------------------- ------------------------------ ------------------- -------------------
  7$\times$10$^{19}$   $\theta=0^\circ$                      0/1000                         79                  7
  7$\times$10$^{19}$   $\theta=70^\circ$, $\phi=0^\circ$     0/1000                         76                  8
  10$^{20}$            $\theta=60^\circ$, $\phi=177^\circ$   92/100                         1195                209

  []{data-label="table-times"}

The computing time is shorter by more than a factor of 5 for simulations with PRESHOWER 2.0 than for PRESHOWER 1.0. This reduction is seen both in the computation of gamma conversion (a speed-up by nearly a factor of 10) and in the more time-consuming bremsstrahlung routines. Summary {#sec4} ======= The program PRESHOWER is a tool designed for simulating magnetically induced particle cascades due to ultra-high energy photons. It can be linked with air shower simulation packages such as CORSIKA [@corsika] to calculate complete photon-induced particle cascades as they are searched for with cosmic ray observatories. A new version of the PRESHOWER program, version 2.0, has been released and its features are presented in this article. An efficient veto algorithm has been introduced to sample the locations of individual interaction processes. Other modifications include the update of the geomagnetic field model, correcting a rare exception, and reorganizing the program code. The results obtained with the new release of PRESHOWER agree very well with those calculated with the previous version. The new algorithm not only speeds up the program by more than a factor of 5, but also allows additional applications due to the increased flexibility of the sampling of interaction points. For example, the preshower effect can now be studied not only in the geomagnetic field, but also close to extended astrophysical objects like neutron stars and active galactic nuclei. An application of PRESHOWER 2.0 in conditions other than the geomagnetic field requires only small changes in the program. The magnetic field model has to be replaced and start and end points of simulations have to be adequately adjusted. Acknowledgements {#acknowledgements .unnumbered} ================ We thank N.A.
Tsyganenko for valuable remarks on the application of the IGRF model. We are also thankful to Carla Bleve whose update of the IGRF coefficients in Tsyganenko’s subroutine has been used.\ This work was partially supported by the Polish Ministry of Science and Higher Education under grant No. N N202 2072 38 and by the DAAD (Germany) under grant No. 50725595. Magnetic pair production: $\gamma \rightarrow e^+e^-$ {#magneticpp} ===================================================== The number of pairs created by a high-energy photon in the presence of a magnetic field per path length $dr$ can be expressed in terms of the attenuation coefficient $\alpha(\chi)$ [@Erber]: $$\label{npairs} n_{pairs}=n_{photons}\{1-\exp[-\alpha(\chi)dr]\},$$ where $$\label{alpha} \alpha(\chi)=0.5(\alpha_{em} m_ec/\hbar)(B_\bot/B_{cr})T(\chi)$$ with $\alpha_{em}$ being the fine structure constant, $\chi\equiv0.5(h\nu/m_ec^2)(B_\bot/B_{cr})$, $B_\bot$ is the magnetic field component transverse to the direction of the photon’s motion, $B_{cr}\equiv m_e^2c^3/e\hbar=4.414\times 10^{13}$ G and $T(\chi)$ is the magnetic pair production function. $T(\chi)$ can be well approximated by: $$\label{tcentral} T(\chi)\cong0.16\chi^{-1}{K^2}_{1/3}(\frac{2}{3\chi}),$$ where $K_{1/3}$ is the modified Bessel function of order $1/3$. For small or large arguments $T(\chi)$ can be approximated by $$\label{tlimits} \begin{array}{c} T(\chi)\cong\left\{ \begin{array}{ll} 0.46\exp(-\frac{4}{3\chi}), & ~~\chi \ll 1;\\ 0.60\chi^{-1/3}, & ~~\chi \gg 1. \end{array} \right. \end{array}$$ We use Eq. (\[npairs\]) to calculate the probability of $\gamma$ conversion over a small path length $dr$: $$\label{pconv2} p_{conv}(r)=1-\exp[-\alpha(\chi(r))dr]\simeq\alpha(\chi(r))dr.$$ Magnetic bremsstrahlung {#a2} ======================= After photon conversion, the electron-positron pair is propagated. The energy distribution in an $e^+e^-$ pair is computed according to Ref. [@ppdaugherty]: $$\frac{d\alpha(\varepsilon,\chi)}{d\varepsilon}\approx\frac{\alpha_{em}m_ec B_\bot}{\hbar B_{cr}} \frac{3^{1/2}}{9\pi\chi}\frac{[2+\varepsilon(1-\varepsilon)]}{\varepsilon(1-\varepsilon)} K_{2/3}\left[\frac{1}{3\chi\varepsilon(1-\varepsilon)}\right], \label{daug}$$ where $\varepsilon$ denotes the fractional energy of an electron and the other symbols were explained in the previous chapter. The probability of asymmetric energy partition grows with the primary photon energy and with the magnetic field. Beginning from $\chi>10$, the asymmetric energy partition is even more favored than the symmetric one. Electrons traveling at relativistic speeds in the presence of a magnetic field emit bremsstrahlung radiation (synchrotron radiation). For electron energies $E \gg m_ec^2$ and for $B_\bot \ll B_{cr}$, the spectral distribution of radiated energy is given in Ref. [@sokolov]: $$f(y)=\frac{9\sqrt{3}}{8\pi}\frac{y}{(1+\xi y)^3}\left\{\int^\infty_yK_{5/3}(z)dz+ \frac{(\xi y)^2}{1+\xi y}K_{2/3}(y)\right\}, \label{fy}$$ where $\xi =(3/2)(B_\bot/B_{cr})(E/m_ec^2)$, $E$ and $m_e$ are electron initial energy and rest mass respectively, $K_{5/3}$ and $K_{2/3}$ are modified Bessel functions, and $y$ is related to the emitted photon energy $h\nu$ by $$y(h\nu)=\frac{h\nu}{\xi (E-h\nu)} \;; \qquad \qquad \frac{dy}{d(h\nu)}=\frac{E}{\xi(E-h\nu)^2}. \label{yhv}$$ The total energy emitted per unit distance is (in CGS units) $$W=\frac{2}{3}r_0^2B_\bot^2\left(\frac{E}{m_ec^2}\right)^2\int^\infty_0f(y)dy \label{W}$$ with $r_0$ being the classical electron radius. 
For our purposes we use the spectral distribution of radiated energy defined as $$I(B_\bot,E,h\nu)\equiv\frac{h\nu dN}{d(h\nu)dx}~~, \label{Idef}$$ where $dN$ is the number of photons with energy between $h\nu$ and $h\nu+d(h\nu)$ emitted over a distance $dx$. From Eqs. (\[fy\]), (\[yhv\]), (\[W\]), and (\[Idef\]) we get[^1] $$I(B_\bot,E,h\nu)=\frac{2}{3}r_0^2B_\bot^2\left(\frac{E}{m_ec^2}\right)^2f(h\nu)\frac{E} {\xi(E-h\nu)^2}~~. \label{brem}$$ Provided $dx$ is small enough, $dN$ can be interpreted as a probability of emitting a photon of energy between $h\nu$ and $h\nu+d(h\nu)$ by an electron of energy $E$ over a distance $dx$. In our simulations we use a small step size of $dx=1$ km. The total probability of emitting a photon in step $dx$ can then be written as $$P_{brem}(B_\bot,E,h\nu,dx)=\int dN=dx\int^E_0 I(B_\bot,E,h\nu)\frac{d(h\nu)}{h\nu}~~. \label{bremprob}$$ The energy of the emitted photon is simulated according to the probability density distribution $dN/d(h\nu)$ obtained from Eq. \[bremprob\]. Details on the new and modified files included in the PRESHOWER 2.0 package =========================================================================== The files included in PRESHOWER 2.0 package but not existing in the previous release are : `IGRF-11.f` : An external routine generating the geomagnetic field components according to the IGRF-11 model [@tsygan]. This file replaces `igrf.f` of the previous release of PRESHOWER. `cernbess.f` : External procedures calculating the sequence of modified Bessel functions [@cernbess]. These open source procedures replace previously used functions from Numerical Recipes. `veto.c` : This file contains functions and procedures called by the veto algorithm. `veto.h` : The header file for `veto.c`. `utils.c` : This file contains auxiliary functions and procedures used within the program. In the previous version of PRESHOWER the auxiliary functions were placed in `preshw.c`, now it is more convenient to have them in a separate file. `utils.h` : The header file for `utils.c`.\ The list of files which existed in the previous release of PRESHOWER but have been modified for PRESHOWER 2.0 include: `preshw.c` : contains the main procedure `preshw_veto` generating preshowers with the veto algorithm, `prog.c` : reads input parameters and calls `preshw_veto`, `Makefile` : modified to account for the new files. [00]{} C. C. Finlay et al., Geophys. J. Int., 183, (2010), 1216, doi: 10.1111/j.1365-246X.2010.04804.x, http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html N. A. Tsyganenko, Institute and Department of Physics, Saint-Petersburg State University, Russia, private communication; http://geo.phys.spbu.ru/$\sim$tsyganenko/Geopack-2008.html http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/c341/top.html Numerical Recipes, http://www.nr.com J. Bl[ü]{}mer, R. Engel, and J. R. H[ö]{}randel, Prog. Part. Nucl. Phys. [**63**]{} (2009) 293 A. Letessier-Selvon and T. Stanev, Rev. Mod. Phys. [**83**]{} (2011) 907 K. Kotera, and A. V. Olinto, Ann. Rev. Astron. Astrophys. 49 (2011) 119, arXiv:1101.4256 J. Abraham [*et al.*]{} (Pierre Auger Collaboration), Nucl. Instrum. Meth. A 523 (2004), 50 H. Tokuno [*et al.*]{}, (TA Collaboration) AIP Conf. Proc. [**1238**]{} (2010) 365 B. McBreen and C. J. Lambert, Phys. Rev. D [**24**]{}, (1981) 2536 P. Homola et al., Comp. Phys. Comm. 173 (2005) 71 M. Risse and P. Homola, Mod. Phys. Lett. A 22 (2007), 749 S. Karakula and W. Bednarek, Proc. $24^{th}$ Int. Cosmic Ray Conf., Rome, (1995) 266 T. Stanev and H. P. Vankov, Phys. Rev. 
D [**55**]{} (1997) 1365

X. Bertou, P. Billoir and S. Dagoret-Campagne, Astropart. Phys. [**14**]{} (2000) 121

W. Bednarek, New Astron. [**7**]{} (2002) 471

H. P. Vankov, N. Inoue, and K. Shinozaki, Phys. Rev. D [**67**]{} (2003) 043002

H. P. Vankov et al., Proc. 28$^{th}$ Int. Cosmic Ray Conf., Tsukuba (2003) 527

T. Sjöstrand et al., arXiv:hep-ph/0308153v1 (2003)

T. Stanev et al., Phys. Rev. D [**62**]{} (2000) 093005

http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html

D. Heck, J. Knapp, J. N. Capdevielle, G. Schatz, and T. Thouw, Report FZKA 6019, Forschungszentrum Karlsruhe, 1998 (available at www-ik.fzk.de/\~heck/corsika/)

T. Erber, Rev. Mod. Phys. [**38**]{} (1966) 626

J. K. Daugherty and A. K. Harding, Astrophys. J. [**273**]{} (1983) 761

A. A. Sokolov and I. M. Ternov, Radiation from Relativistic Electrons, Springer Verlag, Berlin, 1986

[^1]: Expression (\[brem\]), valid for all values of $h\nu$, is equivalent to Eq. (2.5a) in Ref. [@Erber]. A simplified form of distribution (\[brem\]) is given by Eq. (2.10) in Ref. [@Erber]; however, it can be used only for $h\nu \ll E$.
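As a purely illustrative supplement to the two appendices above, the short Python sketch below evaluates the pair-conversion probability of Eqs. (\[alpha\])–(\[pconv2\]) and the bremsstrahlung emission probability of Eqs. (\[fy\])–(\[bremprob\]) numerically. It is not part of the PRESHOWER 2.0 package (whose routines are the C and Fortran files listed above); the constant values, the CGS unit convention, and the integration grid are assumptions made only for this example.

    import numpy as np
    from scipy.special import kv          # modified Bessel function K_nu
    from scipy.integrate import quad

    # CGS-style constants; standard values, but the unit choices are assumptions
    # of this illustration, not of the paper.
    ALPHA_EM = 1.0 / 137.036              # fine-structure constant
    INV_LAMBDA_C = 2.5896e10              # m_e c / hbar  [1/cm]
    M_E_C2 = 8.187e-7                     # electron rest energy [erg]
    B_CR = 4.414e13                       # critical magnetic field [G]
    R0 = 2.818e-13                        # classical electron radius [cm]

    def t_pair(chi):
        """Magnetic pair-production function T(chi), Eq. (tcentral)."""
        return 0.16 / chi * kv(1.0 / 3.0, 2.0 / (3.0 * chi)) ** 2

    def conversion_probability(e_gamma, b_perp, dr):
        """Photon -> e+e- conversion probability over a step dr [cm]; e_gamma in erg, b_perp in G."""
        chi = 0.5 * (e_gamma / M_E_C2) * (b_perp / B_CR)
        alpha = 0.5 * ALPHA_EM * INV_LAMBDA_C * (b_perp / B_CR) * t_pair(chi)
        return 1.0 - np.exp(-alpha * dr)

    def f_sync(y, xi):
        """Synchrotron spectral shape f(y), Eq. (fy)."""
        tail, _ = quad(lambda z: kv(5.0 / 3.0, z), y, np.inf)
        return (9.0 * np.sqrt(3.0) / (8.0 * np.pi) * y / (1.0 + xi * y) ** 3
                * (tail + (xi * y) ** 2 / (1.0 + xi * y) * kv(2.0 / 3.0, y)))

    def spectral_intensity(b_perp, energy, h_nu):
        """I(B_perp, E, h nu) of Eq. (brem): radiated energy per unit photon energy per cm."""
        xi = 1.5 * (b_perp / B_CR) * (energy / M_E_C2)
        y = h_nu / (xi * (energy - h_nu))
        return (2.0 / 3.0 * R0 ** 2 * b_perp ** 2 * (energy / M_E_C2) ** 2
                * f_sync(y, xi) * energy / (xi * (energy - h_nu) ** 2))

    def emission_probability(b_perp, energy, dx, n_grid=200):
        """P_brem of Eq. (bremprob), integrated on a logarithmic photon-energy grid
        (grid size and lower cutoff are arbitrary choices of this sketch)."""
        h_nu = np.logspace(np.log10(energy) - 8.0, np.log10(energy * 0.999), n_grid)
        vals = np.array([spectral_intensity(b_perp, energy, h) / h for h in h_nu])
        return dx * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(h_nu))

    # Example call: a 1e20 eV photon (about 1.6e8 erg) in a 0.3 G transverse field,
    # tracked with the 1 km (1e5 cm) step used in the simulations:
    #   conversion_probability(1.6e8, 0.3, 1e5)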
## Information Layer Map Polygon

The RadMap control provides you with a set of shape objects, which are specifically designed to work with the RadMap. This example demonstrates MapPolygon.
A widely used method for affixing toner materials to a receiver sheet is by the application of high temperature and pressure in the fusing subsystem of a photocopying machine. A common configuration for a fusing subsystem is to place a pair of cylindrical rollers in contact. The roller that contacts the side of the receiver sheet carrying the unfixed or unfused toner is known as the fuser roller. The other roller is known as the pressure roller. The area of contact is known as the nip. A toner receiver sheet containing the unfixed or unfused toner is passed through the nip. A soft coating on one or both of the rollers allows the nip to increase in size relative to the nip which would have been formed between two hard rollers and allows the nip to conform to the receiver sheet, improving the fusing quality. Typically, one or both of the rollers are heated, either through application of heat from the interior of the roller or through external heating. A load is applied to one or both rollers in order to generate the higher pressures that are necessary for good fixing or fusing of the toner to the receiver sheet. The application of high temperature and pressure as the receiver sheet passes through the nip causes the toner material to flow to some degree, increasing its contact area with the receiver sheet. If the cohesive strength of the toner and the adhesion of the toner to the receiver sheet is greater than the adhesion strength of the toner to the fuser roller, complete fusing occurs. However, in certain cases, the cohesive strength of the toner or the adhesion strength of the toner to the receiver is less than that of the toner to the fuser roller. When this occurs, some toner will remain on the roller surface after the receiver sheet has passed through the nip, giving rise to a phenomenon known as offset. Offset can also occur on the pressure roller. Offset is undesirable because it can result in transfer of the toner to non-image areas of succeeding copies and can lead to more rapid contamination of all machine parts in contact with the fusing rollers and to increased machine maintenance requirements. It can also lead to receiver (e.g. paper) jams as the toner-roller adhesion causes the receiver sheet to follow the surface of the roller rather than being released to the post-nip paper path. It is common in some machines to apply release oil externally to the roller in the machine as it is being used. The release oil is typically poly(dialkylsiloxane) (PDMS) oil. PDMS oil does an excellent job in its role as release agent; however, there are associated disadvantages. The release agent's compatibility with PDMS-based roller materials result in swelling of the rollers. This swelling cannot be easily compensated for, since it is generally non-uniform. Paper passing over the rollers can wick away some of the release oil within the paper path, resulting in a differential availability of the release oil to roller areas within and outside the paper path. This causes differential swell of the roller inside and outside the paper path so that a "step pattern" is formed in the roller. This can cause problems when different size papers are used and can lead to increased wear and decreased roller life as described in U.S. Pat. No. 5,753,361. This wear can also lead to an uneven pressure distribution between the two rollers of the fusing assembly resulting in poor print quality as described in U.S. Pat. No. 5,035,950 and as is well known in the art. 
Another associated problem is the tendency of a silicone layer to soften as it swells with the polydimethylsiloxane release fluid and subsequently to debond, as described in U.S. Pat. No. 5,166,031. Here the suggested solution to the problems of the silicone fuser member coating was to develop fluoroelastomer analogs to replace the silicone; however, resistance to toner offset is sacrificed. In applications using a donor roller oiling system, the use of a silicone-based outer layer results in excessive swelling by the polydimethylsiloxane release fluid, leading to failure of the roller to provide a uniform layer of release fluid, as described in U.S. Pat. No. 4,659,621. Here, too, the suggested solution was to develop fluoroelastomer analogs to replace the silicone, again at the cost of resistance to toner offset. There continues to be a need for fuser and pressure rollers with improved fusing performance, e.g., a reduced impact of swell on wear resistance without reduced toner releasability, as well as improved mechanical properties and thermal conductivity.
As Mega Millions & Powerball Lotteries Hit $1 Billion, 12 Serious Things Not to Do If You Win Winning the lottery has become the new version of the American Dream. While the odds of winning are astronomical, it’s easy to see what is drawing so many people each week to buy lottery tickets. The Powerball lottery drawing for Wednesday, October 17, was last seen at $345 million, with a cash value of $199 million. The Mega Millions drawing for Tuesday, October 16, was last seen at $654 million, and its cash option value was $312 million. That’s $999 million in annuity value or $511 million in cash up for grabs during the first half of this week. Either one of these lottery drawings is enough to make many new filthy-rich people. The cash options are lower on the lump-sum cash basis, but it’s still empire-making money and multi-generational wealth. That old boring version of the American Dream requires a lifetime of hard work, planning and advancing in careers without much distraction. Winning the lottery changes all that in an instant and without the need for any skills at all. There’s no hard work required, and no qualifications are needed other than going to buy a ticket. Despite even a fraction of this sum landing the lotto winners in the one-percenters club, there is a dark side to winning the lottery. It turns out that many lottery winners somehow manage to go broke after becoming vastly wealthy. And even worse, some have gone broke in just a few years. 24/7 Wall St. does not want to see anyone go broke. That’s why we have created a self-help lesson of 12 things not to do if you win the lottery. With odds of roughly one in 300 million, lottery players should consider that they have a better chance of being struck by lightning on a sunny day. Still, this new American Dream is just too alluring for millions of Americans to pass up. Now for the hard part. It’s imperative to have a game-plan in place for if you ever become filthy rich out of the blue. And these lessons do not have to be for lottery winners only. The same things can be applied to those who unexpectedly inherit millions of dollars, those who win a big legal judgment, business owners who sell their companies for millions of dollars, and even stock-option millionaires. Keeping a lifetime of wealth requires planning, and it even requires some sacrifices. One lesson should hold true no matter where or how you grew up: No one should ever have to get rich twice. Lottery winners have to act fast, and they need to avoid the endless temptations that can rob unwitting people of their new-found wealth. Purchasing an endless number of belongings, cars and homes can erode your wealth in a hurry. And then having to keep paying for those things, followed by poor decisions, and falling under the influence of friends and family are just some of the pitfalls for those who become vastly wealthy. There are predators and other considerations that have to be avoided at all costs. Imagine what happens if people you know find out that you just became filthy rich. You could become a mark and a target. It’s sad to say, but there have been some unlucky lottery winners who have even lost their lives after winning. Bragging about getting filthy rich could get you killed. Most lottery winners will take a lump sum cash option to have instant and vast wealth rather than to get paid out over a lifetime. It’s actually easy to understand why. It’s an instant empire-making sum without any wait at all. Spending endlessly is a sure way to go broke. 
It’s not just the yachts and the jets and the homes. What can nail anyone is the upkeep costs, the cost of the people needed to maintain them, the insurance and the occasional calamity. All this glitz and glam will require immediate financial planning, budgeting, learning about taxes and investments, and a slew of other actions for anyone who comes into it to keep their newfound wealth. If this seems ludicrous, go ask the dozens and dozens of well-known movie stars and musicians who have risen from nothing into the stratosphere but slid back down to being broke. Do not let this happen to you! It’s hard enough to get rich as is. And imagine the ridicule you would have to endure from your friends and family if you went from being filthy rich in an instant to being broke all over again. Again, no one should ever have to get rich twice. Take this one lesson to heart: If it sounds silly that you need to set up a strict financial plan and if it sounds silly that you need to put up safeguards to protect your new empire-money, then you are already at severe risk of going broke if you ever become filthy rich.
#!/bin/bash
# Compress every JSON file in the current directory.
# -k keeps the original .json files; -f overwrites any existing .gz output.
for f in *.json
do
    gzip -k -f "$f"
done
Head wounds

The back of the skull shows dramatic injuries. One consists of a hole near the spine, where a large piece of bone has been sliced away by a heavy-bladed weapon such as a halberd. This, along with a smaller wound opposite, may well have been a fatal injury. A smaller dent, which cracked the inside of the skull, is thought to have been caused by a dagger. There are a further five wounds on the skull, all inflicted around the time of death.
//// Copyright 2017 Peter Dimov Distributed under the Boost Software License, Version 1.0. See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt //// [#mp11] # Convenience Header, <boost/mp11.hpp> :toc: :toc-title: :idprefix: The convenience header `<boost/mp11.hpp>` includes all of the headers listed previously in this reference.
Acute D2 receptor blockade induces rapid, reversible remodeling in human cortical-striatal circuits. Structural remodeling has been observed in the human brain over periods of weeks to months, but the molecular mechanisms governing this process remain incompletely characterized. Using multimodal pharmaco-neuroimaging, we found that acute D2 receptor blockade induced reversible striatal volume changes and structural-functional decoupling in motor circuits within hours; these alterations predicted acute extrapyramidal motor symptoms with high precision. Our findings suggest a role for D2 receptors in short-term neural plasticity and identify a potential biomarker for neuroleptic side effects in humans.
In the mid-1950’s, the General Secretary of the Communist Party of the Union of Soviet Socialist Republics, Nikita Khrushchev, had just completed his takeover of the levers of power following the death of the Leader of Nations, none other than Joseph Stalin, two years earlier. The vast empire he had fought so hard to lead lay in ruins. Almost nothing had been rebuilt after the devastation of WWII, and Stalin’s scorched-earth policies of forced collectivization by mass starvation and the purges of the late 1930’s, to say nothing of the nearly twenty million killed directly and indirectly by the invading Germans, had left the country in a profound state of post-traumatic disorder. Khrushchev knew that his erstwhile allies and new adversaries in the West would not long wait to test the mettle of the new and poorly known leader. After all, he had won power by keeping himself well in the shadow of the Great Leader and letting all the better-known players be purged or succumb to internal court intrigue. He also knew that his denunciation of Stalin’s “cult of personality” and rehabilitation of many political prisoners could be interpreted as a sign of weakness, both internally and, more importantly, externally by the West and the newly created or recreated Eastern European countries, which had been undergoing a rather ruthless process of de-Nazification and forced conversion to communist dictatorship.

He didn’t have to wait long. In 1956 Hungary attempted a rebellion against the Soviet Union with an eye to establishing neutral independence with a Western-leaning foreign policy. Knowing that this would be his do-or-die test, Khrushchev didn’t hesitate; he sent massive columns of battle-hardened Red Army armor into the center of Budapest, and his suppression of the uprising was as brutal as it was quick. The world was waiting with bated breath for President Eisenhower’s response; would the US allow a democratically elected government supported by students and forward-looking intelligentsia to be crushed under the weight of communist armor? In the Kremlin, nobody was less sure of his gambit than Khrushchev. Fully aware of his military’s capabilities, especially the air force and the nascent strategic missile command, he knew that they would be no match for the American Air Force and that Soviet nuclear retaliation was out of the question. He knew that American air cover for the Hungarian rebel forces could spell either a humiliating defeat for the Soviets or a new world war for which the USSR was exceedingly ill prepared. He also knew what these outcomes would mean for him: a secret trial followed by a quick execution in the Lubyanka Prison basement. But Eisenhower, the highest-decorated and most experienced military commander to occupy the Oval Office since Ulysses S. Grant, hesitated. The swiftness and brutality of the Soviet occupation forces in Hungary played out as Khrushchev had planned; the moment for American action came and went in the blink of an eye. Khrushchev faced his first real external policy test and prevailed.

Image by ANRM, Fototeca, 35684

This triumph was a highly teachable moment for Khrushchev and his successors, all the way down to the present day. The West, they found, led by the United States, was susceptible to puffery and grandstanding. This susceptibility was in many ways the result of the decisive victory the KGB had scored after the War over its Western counterparts, the newly formed CIA and MI5.
Both services had a penchant, one that they still possess, of hiring known communists and placing them in the highest positions. Amazingly, John Brennan, president Obama’s CIA chief was a member of the American communist party before he was hired by the CIA as a junior operative, a fact that he had not bothered hiding. The British MI5 was rife with Oxford and Cambridge educated communists and thus was 100% transparent to the Russians. Conversely, the KGB had excellent counter-intelligence operations that were yet uncompromised by disillusioned officers as they would become in the 1970’s. Eisenhower, betrayed as he was by his own intelligence services, did not have anything remotely approaching an accurate picture of the Soviets’ actual military capabilities, which were in matter of fact much weaker than he believed them to be. As often happens, Khrushchev’s reliance on puffery finally spelled his doom when he pushed the boundaries with his much ridiculed speech to the UN in which he banged his shoe on the podium and promised to “bury” America and finally when he pushed JFK one step too far in the Cuban missile crisis. Judged dangerously unstable by his Politburo, Khrushchev was replaced in a quiet coup d’état shortly thereafter. What followed was a period of stagnation, decline, and finally disintegration, a period that the current Russian president Vladimir Putin well remembers and which formed his entire world view. Today, president Putin finds himself in much the same predicament as Khrushchev over half a century ago, but without the excuse of having just won a world war against the greatest military force the world had ever seen. The Russian Federation has half the population of the Soviet Union and less than half of its industrial and agricultural heartlands. Its economy is based on the export of hydrocarbons, a market which is already saturated, vulnerable to emerging technologies, and reliant on self-restraint by America, self-restraint which president Trump is reluctant to exercise. It is a true testament to the dire straits that Russia finds itself in that Putin has recently had to embark on the extraordinary course of nuclear doomsday puffery, in effect promising to annihilate the entire Planet Earth if Russia’s geopolitical interests are infringed upon much further. Image by Kremlin.ru Putin’s somewhat pathetic insistence that Russia has just developed a qualitatively new doomsday weapon capable of destroying the US has received very little in the way of response from the Pentagon beyond a few bemused smiles by retired generals on the cable news networks. The reasons for this are multiple. Perhaps the principal reason is that nobody in the Pentagon has ever doubted Russia’s ability to annihilate America, just as they do not doubt that they can deliver a similarly devastating attack on Russia, even as second strike. America’s anti-missile capabilities were never intended (and this was made clear to the Russians) to defeat an all-out nuclear strike by Russia. Rather they are intended to intercept a single ballistic missile launched by a rogue technologically inferior regime such as North Korea or Iran. Thus the somewhat ridiculous Russian insistence on having developed an operational hypersonic cruise missile capable of reaching anywhere in America undeterred by American defenses is more of a sign of weakness than of strength. 
America does not need sophisticated humint (human intelligence) as it did in the Cold War days to know with certainty that no hypersonic cruise missiles are currently operational. The reasons for this are very simple; anything that flies anywhere around the globe is tracked by American satellites. A hypersonic cruise missile flying at low altitude, as it would need to do to avoid radar detection and possible interception, would generate a shockwave that would certainly be strong enough to trigger the many seismic sensors located around the globe. Israeli jets often make a point to the Hezbollah in Lebanon by “breaking the sound barrier”, i.e., flying at low supersonic speeds and at low altitudes above those Beirut neighborhoods in which the Hezbollah are concentrated. The shockwaves resulting from supersonic flight at low altitudes easily shatter windows and rattle high-rise buildings. Since the pressure jump across a shockwave increases with the Mach number (the multiple by which the aircraft’s speed exceeds the speed of sound), a large hypersonic cruise missile with multiple nuclear warheads, traveling at over Mach 5.0, would generate a very significant shockwave that would not fail to be picked up by terrestrial and maritime seismic arrays.

In addition, the fallout of radioactive isotopes from a missile that has, as the Russians have boasted, nuclear propulsion, would be considerable. The reason for that is also simple: missiles are propelled by expelling gases from their engines at speeds that exceed their own speed of travel. No other method exists or can exist within known physics to propel objects at anything approaching supersonic, let alone hypersonic, speeds. A nuclear-powered missile would use a nuclear reaction to heat and expand air that is rammed into the missile’s engine from its front intake nozzle, adding energy to it and expelling it from the rear of the missile at much higher velocity. There can be no doubt that significant markers in the form of radioactive isotopes would be mixed in with the missile’s exhaust stream and that such elements would be picked up by the very well developed global network of radiation sensors. It is no wonder, then, that the official response to Putin’s grand presentation, delivered not by Secretaries Tillerson or Mattis, but by Heather Nauert, the State Department spokeswoman, openly ridiculed any assertions of radically new weaponry, referring instead to the inadvisability of reliance on “cheesy videos” as a means of diplomacy. The unintimidated American response had not gone unnoticed in Russia and, it is fair to say, did not help Putin accomplish even the minimal goal of internal propaganda in support of Putin’s reelection bid.

Screenshot RT

There’s an underlying hard truth behind Putin’s puffery; his hand is weaker than Khrushchev’s had ever been. The Russian military has not engaged in a meaningful ground war against an organized enemy since 1945. Their engagements in Afghanistan, Chechnya, and Georgia have been against small forces of insurgents, and even then they did not distinguish themselves. In contrast, the American military fought and won major engagements against a relatively well-equipped and very experienced adversary, Saddam’s Iraqi army, in 1990 and again in 2003.
In its ongoing operations in Syria, Russia’s air force lost at least two aircraft to accidents flying off their now decommissioned sole aircraft carrier, one to a Turkish F-16, and another to ground fire by ill-equipped and ill-trained Syrian rebel forces. In comparison, the F-16 recently lost by Israel was the target of 26 enemy missiles and was the first aircraft Israel had lost to enemy fire in over a quarter century of constant warfare. On the ground, Russian special forces disguised as mercenaries were annihilated just recently by the American forces in Syria with a loss of nearly the entire force, upwards of a hundred men. The recesses of Russian language Twitter are still reverberating with frank accounts of this drubbing by the families and friends of those who managed to survive or witness it. In the 1950’s and the 1960’s when the Russian puffery doctrine was first developed and perfected, Russia had its empire intact, an overwhelming advantage in intelligence, and massive battle-hardened armored columns located only a few kilometers from Vienna and Berlin. Russia’s military industries were self-sufficient, their supply chains never reaching beyond the iron curtain. Today, Russia’s military is largely untested, and where it has been tested, it had often come up short. Intelligence is dominated by sigint (signal intelligence), an area in which the US has unparalleled superiority, though it is no better served by its atrocious personnel choices than it had been in the past. Today, Russia’s military industries rely on electronic components that originate well outside Russia sphere of influence, leaving it vulnerable to wartime shortages. Image by The Russian Presidential Press and Information Office What Russia urgently needs is a Winston Churchill, a leader who can tell the truth to the Russian people just like Sir Winston told the British people in the immediate aftermath of WWII: Russia’s imperial days, its days as a global superpower are well behind her. She now needs to regroup, focus on the plight of its long-suffering population, redirect resources to the development of advanced industry and agriculture, healthcare and education. In Eurasia, a new power has risen, and it is Emperor Xi, not Czar Vladimir that is going to challenge the American hegemony from this day forward.
Q: PHP: Output the watermarked image in tag I have successfully watermarked the image as mentioned in the PHP Manual. It uses header('Content-type: image/png'); to output the image. What I need to do is to output the image in html <img> tags. how can I accomplish that? any idea? A: <img src="/url/to/your/image-making-file-goes-here.php" alt="Image created by php" /> You should probably use some cache for this too.
Wikia is not accessible if you’ve made further modifications. Remove the custom ad blocker rule(s) and the page will load as expected. The Irôqàkîné Myths is a book, in its first volume, which starts with the Çelvre Arbre myth. It was written on 18 July 2009, by Guillaume Unum of Sandus. These myths are used by the Irôqàkîné tribe, which Guillaume Unum of Sandus is part of, to tell other believers and followers of the Gods of the Irôqàkîné. Enjoy, In the middle of Bélkà lives a family of gods. They were born of Tréntoî, the Father and Lord of those gods. Tréntoî was born a magnificent tree, a silver tree. He, one day, found his sister, Tréntaî, a willow. Tréntoî, his soul, became a spirit and went on the search to find his other friends and siblings. On the first day, he found Àtémne, another soul. Tréntoî and Àtémne became close friends. The next day, Tréntoî journeyed farther down the sacred river Trén. He found Àréçe, a warful fellow. Though, Tréntoî was peaceful, they became friends. The 3rd day, Tréntoî found Mérumne. Mérumne loved the river, and she would spend hours by it. They, too, became friends. On the 4th and last day, Tréntoî went upstream. He found the demon Orémnëdô. Orémnëdô, a cannibal, tried to eat Tréntoî. Following Tréntoî’s journeys, he returned to Tréntaî. “Sister, I have found other people”, he stated. She replied, “Tréntoî, father said not to journey on to others lands.” Tréntoî has never heard of his father before in his life. “Father? Who is that?”, questioned the impatient Tréntoî. “Father, the creator of this forest and river, has turned from us. He no longer looks or cares for us.” Tréntoî, knowing outside the lands of Bélkà lied danger and death, devised a plan to take Bélkà from His Father. Tréntoî gathered everyone he had met around, stating his plan to them. They all agreed. The gods of the Southern Lands of Nôréà did not wish to leave. Those gods left the council of the Gods of Bélkà, and so started the feud between the Bélkàns and Nôréàqi today. Lôrum, the Middle lands between Nôréà and Bélkà, stayed neutral, but always wanted to defend their closer friends of Nôréà. Bélkà would wage war against the Creator. And to this day, peace has not come, but all of Bélkà remains blessed by the Gods. Mérumne keeps the enemies from crossing into the mainland and sacred lands of Bélkà from the Trén. Témpus, the god of Weather, controls the sun, stars, moon, and seasons to protect Bélkà. Aréçe openly defends Bélkà’s frontiers. Tréntoî, the Lord God, defends the throne of the Lord of Bélkà. Today, every man and woman of Bélkà openly thanks and welcomes the Gods to their homes, of which they have opened their lands to them. Today, the people of Bélkà are blessed by the gods, and can not die on Bélkàn lands. When the Creator and Father found the gods again, he let them be, but stated that no more of his land shall be taken by the Irôqàkîné. Today, the Hoplite Legions of Bélkà police the River Trén for the Gods. No relationship has been as open and free as the relationship between the gods of the Irôqàkîné and the common Man. Today, God and Man live and sleep as neighbors. By this, the People of Bélkà are blessed higher than any people on Earth, and the Gods protect them from every danger. But on year, the Nôréàn god Gàmnes attacked Bélkà. Gàmnes, the Lord God of Nôréà, was strong enough that men died in Bélkàn flanks. The Gods of Bélkà, amazed that a simple god of the simple land and people of Nôréà could kill and undermine the blessings of them, fought back. 
The Holiest Legions of Bélkà invaded, losing no man. Nôréà was crippled. Lôrum declared war, and the Gods of the Woodfolk of Lôrum were thrown into the mess. Farther north, the Land of Orémnëdô was, too, invading Bélkà. The People of Bélkà were thrown into chaos, seeing their homes and farms being destroyed. The Lowlanders fled up the hill to Aîgu, the capital of Bélkà. The Lord of Bélkà, Gérmné, fought the invaders. The gods of Bélkà, seeing this happen before them, blessed him. After the first battle, Gérmné was invited to the home of Tréntoî. There, the gods made him one of their own; a living god. This, though, did not stop the invaders. Days, and months, passed, and Bélkà, which was at full gear against the invaders, pushed back tremendously. The people of Bélkà spread across the lands of their home; they, too, defended them. They invaded the Northern Land of Orémnëdô. The Demon, seeing this happen, was amazed. He asked for peace. Farther South, Nôréà and Lôrum, too, were losing land. They too asked for peace. Bélkà saw their weakness, and pressed no further. The People of Bélkà, once, too, a simple folk, which had sporadic settlements in their lands, now covered a larger portion of the Lands of the Irôqàkîné."

Àmnëîetîetàmne es l' HérôëBélkà Myth (BM.: Àmnëîetîetàmne is the Hero of Bélkà Myth; a myth from Bélkà)

"Àréçe, rémnemnedés moî e l' pàmnëyurfélçe, Àmnëîetîetàmne. Àmnëîetîetàmne was born in the beginning of the Great War. His father was Àréçe, god of War, and his mother was Éréçe, a woman from humble, yet protective, roots in the West of Bélkà. When he was young, he was blessed by His father, Àréçe. During the middle of the War, he was drafted into the Legions of Gérmné. Àmnëîetîetàmne and Gérmné became very close and respected leaders and friends. Àmnëîetîetàmne soon joined the ranks of Generals. During the Fall of that year, when the trees lose their leaves, He commanded his troops north into the Lands of the Demon, where even Tréntoî was tricked and scared out. He came to the village of Emanations of the Demon Orémnëdô, where he and his men were attacked by the visions of themselves burning. The visions fooled their minds, so they believed they were burning and dying. All his men had given up, and had lain down, thinking they were dying. Only Àmnëîetîetàmne kept moving, leaving the Emanations stunned. Àmnëîetîetàmne came to the house of Orémnëdô and cried, “Demon, even Tréntoî was fooled and scared out, harmed! I will show my people you are Demon, and that I shall even die for, to protect my people!” Orémnëdô, hearing this, stumbled his way out of his House. Àmnëîetîetàmne drew his hàstà, the blessed tool of all Bélkàn men, and defeated Orémnëdô. But the Demon could not pass away, for Orémnëdô cannot die. Àmnëîetîetàmne, overcome by the flames of the trickery of Orémnëdô, gave his life for his men. Tréntoî bestowed an honor upon Àmnëîetîetàmne by letting him become a spirit. And, because of his Selflessness, the men of his Legion returned home, safe, but never forgetting the tale of Àmnëîetîetàmne, the Great Hero of Bélkà."

Jôrëlômneà Myth

Tréntoî was so displeased, upset, and shaken by the death of so many during the Great War that he traveled up to the moon. During his time there, a single tear came from his eye. It was a tear the size of the god himself, shed on the light side of the moon. The tear became infused with the life-bringing qualities of Tréntoî, and so was born Nàcht, god of Night. Nàcht worked on making people happy; during the night his peace would spread over the land.
He would work his magic to make sure the crops did not fail the night, to afflict the day. To this day, he ensures our nights are peaceful, and that we sleep well. We give thanks to He who brings the bountiful peace from the chaos of War."
To enhance autocompleter operation, I thought I should add a few hooks: 'get_line_idx' => int return the position of "cursor" on current line. Alternative names: get_caret_pos, get_line_buf_idx. 'get_current_line_buffer' => string return the current line. This is for the benefit of custom completers in gui's (and emacs). Currently, these are directly provided by readline, and you can monkeypatch your way around this requirement, but that's obviously not something to like... See test/test_completer.py for monkeypatching alternative. -- Ville M. Vainio http://tinyurl.com/vainio
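To make the proposal above concrete, here is a rough Python sketch of what such a hook pair could look like, with readline-based defaults that a GUI or emacs frontend would override. The registry and function names are hypothetical, not an existing IPython API; readline.get_endidx() is used only as a stand-in for the caret position, since it coincides with the cursor while a completion is in progress.

    import readline

    # Hypothetical hook table following the proposal; these names are not an
    # existing IPython API. The defaults fall back to readline, which is what
    # the completer consults today.
    _completer_hooks = {
        # full text of the line currently being edited
        "get_current_line_buffer": readline.get_line_buffer,
        # caret position on that line; readline's end index coincides with the
        # cursor during completion, so it serves as a reasonable default
        "get_line_idx": readline.get_endidx,
    }

    def set_completer_hook(name, func):
        """Let a GUI or emacs frontend plug in its own line/caret providers."""
        if name not in _completer_hooks:
            raise KeyError("unknown completer hook: %r" % name)
        _completer_hooks[name] = func

    def completion_context():
        """What a custom completer would call instead of touching readline directly."""
        line = _completer_hooks["get_current_line_buffer"]()
        return line, _completer_hooks["get_line_idx"]()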
Q: How to select different type of columns dynamically to use in where clause? I have the following table in sqlalchemy:

    class FieldType(enum.Enum):
        INT_FIELD = 0
        FLOAT_FIELD = 1
        STRING_FIELD = 2

    class EAVTable(Base):
        __tablename__ = 'EAVTable'

        field_name = Column(String, primary_key=True)
        field_type = Column(Enum(FieldType))
        int_field = Column(Integer)
        float_field = Column(Float)
        string_field = Column(String)

This models the EAV pattern, which fits my business purpose. To use it easily in the code I have the following hybrid_property:

    @hybrid_property
    def value(self):
        if self.field_type == FieldType.INT_FIELD:
            return self.int_field
        ...

    @value.setter
    def value(self, value):
        if type(value) == int:
            self.field_type = FieldType.INT_FIELD
            self.int_field = value
        ...

This works fine when I get and set the fields in Python code. But I still have a problem:

    session.query(EAVTable).filter(EAVTable.value == 123)

This does not work out of the box, but I had the idea of using hybrid_property.expression with a case statement:

    @value.expression
    def value(cls):
        return case(
            [
                (cls.field_type == FieldType.INT_FIELD, cls.int_field),
                (cls.field_type == FieldType.FLOAT_FIELD, cls.float_field),
                ...
            ]
        )

This works in theory; for example, the SQL generated for session.query(EAVTable).filter(EAVTable.value == 123) looks like:

    select * from EAVTable
    where case
        when field_type = INT_FIELD then int_field
        when field_type = FLOAT_FIELD then float_field
        when field_type = STRING_FIELD then string_field
    end = 123;

which is semantically what I want. But later I find that the case expression requires all branches to have the same type, or to be cast to the same type. I understand this is a requirement of the SQL language and has nothing to do with sqlalchemy, but for a more seasoned sqlalchemy user, is there an easy way to achieve what I want? Is there a way to work around this constraint?

A: You could move the comparison inside the CASE expression using a custom comparator:

    from sqlalchemy.ext.hybrid import Comparator

    class PolymorphicComparator(Comparator):
        def __init__(self, cls):
            self.cls = cls

        def __clause_element__(self):
            # Since SQL doesn't allow polymorphism here, don't bother trying.
            raise NotImplementedError(
                f"{type(self).__name__} cannot be used as a clause")

        def operate(self, op, other):
            cls = self.cls
            return case(
                [
                    (cls.field_type == field_type, op(field, other))
                    for field_type, field in [
                        (FieldType.INT_FIELD, cls.int_field),
                        (FieldType.FLOAT_FIELD, cls.float_field),
                        (FieldType.STRING_FIELD, cls.string_field),
                    ]
                ],
                else_=False
            )

    class EAVTable(Base):
        ...

        # This replaces @value.expression
        @value.comparator
        def value(cls):
            return PolymorphicComparator(cls)

This way the common type is just boolean.
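A minimal end-to-end check of this approach is sketched below. It assumes the Base, FieldType, and EAVTable definitions above are complete (i.e., the elided float/string branches of the hybrid property are filled in) and a 1.x-style SQLAlchemy, since the answer uses the list form of case(); on SQLAlchemy 2.0 the WHEN tuples would be passed positionally instead.

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine("sqlite://")      # in-memory database, just for the demo
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

    row = EAVTable(field_name="age")
    row.value = 123                          # hybrid setter routes this to int_field
    session.add(row)
    session.commit()

    # The comparator compiles the filter into roughly:
    #   CASE WHEN field_type = 'INT_FIELD' THEN int_field = 123 ... ELSE 0 END
    matches = session.query(EAVTable).filter(EAVTable.value == 123).all()
    print([m.field_name for m in matches])   # expected: ['age']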
Q: WordPress edit_user() and Cimy User Extra Fields I created a page to update a user's WordPress profile. The site uses cimy user extra fields, and adds several custom fields to the users profile. The page gets all of the users data and enters into the appropriate field in the form. Users can make whatever changes they need then click submit. When the form is posted is calls edit_user(). It returns errors that all of the extra fields are empty. I can echo out the $_POST variables and they are all there. The names match what they are named in the cimy settings panel. The standard WordPress fields are fine. I am not sure if I need to switch to something other than edit_user(), I have looked at a couple of others methods but nothing looks any better than edit_user(), or is there something else I need to do to be able to update the cimy extra fields. Here is the code: if(!empty($_POST['action'])){ require_once(ABSPATH . 'wp-admin/includes/user.php'); require_once(ABSPATH . WPINC . '/registration.php'); check_admin_referer('update-profile_' . $user_ID); $errors = edit_user($user_ID); echo $_POST['church']; if ( is_wp_error( $errors ) ) { foreach( $errors->get_error_messages() as $message ) $errmsg .= "$message "; } if($errmsg == ''){ do_action('personal_options_update', $user_id); $d_url = $_POST['dashboard_url']; wp_redirect( get_option("siteurl").'?page_id='.$post->ID.'&updated=true' ); } else { $errmsg = '<div class="box-red">** ' . $errmsg . ' **</div>'; $errcolor = 'style="background-color:#FFEBE8;border:1px solid #CC0000;"'; } } As per usual, I am sure that I am overlooking something obvious. Any thoughts would be greatly appreciated.Thanks. A: I figured it out. The plugin adds cimy_uef_ to the front of the field names. Once I added that everything worked correctly.
Q: disable an application or a module in Symfony I have two applications. I want to disable one according to a field stored in a database. Is it possible to disable an application (or, if that's not possible, a module) by code, maybe using a filter? I've found a piece of code that executes project:disable, but I don't think that is nice enough. The alternative I'm considering is to check the value stored in the database inside a custom filter and then redirect to an action that informs 'The site is disabled'.

A: You can create a filter that checks if the current user may access the requested module/action:

    if($this->getRequest()->getParameter('module') == 'yourmodule'
            && !$this->getUser()->mayAccess('yourmodule')){
        //redirect to somewhere else
    }

In the user class:

    function mayAccess($module){
        $key = $module.'_enabled';
        if(!$this->hasAttribute($key)){
            $enabled = ... //Fetch permission from database
            $this->setAttribute($key, $enabled);
        }
        return $this->getAttribute($key);
    }

Something like that. Maybe you can use the module's security.yml file and override the function that checks the user's credentials and permissions, like the hasCredential() method? That actually seems a cleaner way to do it. See: http://www.symfony-project.org/api/1_4/sfBasicSecurityUser
Q: For which Ramsey type results density versions are wrong? I look for examples of Ramsey-type statements, for which the density counterparts do not hold. Example: usual Ramsey theorem. If all edges of a complete graph $K_n$ are colored in $c$ colors, there is a monochromatic, say, triangle if $n>n_0(c)$ is large enough. But if we choose more than $\frac1c \binom{n}2$ edges, it may appear that there is no triangle formed by the chosen edges. Another (related) example (Schur theorem): if we color $\{1,\dots,n\}$ in $c$ colors, there is a monochromatic solution of $x+y=z$. It is not true that if we choose a half of numbers, than there exists a solution of above equation with $x,y,z$ chosen. Say, we could choose only odd numbers. On there other side, there are very important examples, when denisty versions are true (Szemeredi theorem, density Hales-Jewett and many others). My question is to 1) give less trivial examples; 2) give some theorems or conjectures on when density versions hold and when fail. A: Here are a few examples from graph-Ramsey theory. In the first pair of examples, the Ramsey version and density version are essentially as far apart as one can get. In the last two pairs of examples, the two versions coincide. Now I wonder if there is an example from graph-Ramsey theory where the bound from the density version is strictly stronger than the Ramsey version? But in general, it seems to me that your question could be narrowed down by simply asking for results in which the density version implies the Ramsey version, since those seem to be more rare. 1R) In every 2-coloring of $K_n$, there is a monochromatic connected subgraph on $n$ vertices. (Folklore) 1D) Every graph on $n$ vertices with at least $\binom{n-1}{2}+1$ edges is connected. (Folklore) 2R) In every 2-coloring of $K_n$, there is a monochromatic path on at least $2n/3$ vertices. (Gerencsér, Gyárfás) 2D) Every graph one $n$ vertices with at least $\frac{2}{3}\binom{n}{2}$ edges has a path on at least $2n/3$ vertices. (Erdős, Gallai) 3R) In every 2-coloring of $K_n$, there is a monochromatic matching covering at least $2n/3$ vertices. (Cockayne, Lorimer) 3D) Every graph on $n$ vertices with at least $\frac{5}{9}\binom{n}{2}$ edges has a matching covering at least $2n/3$ vertices. (Erdős, Gallai) 4R) In every 2-coloring of $K_n$, there is a monochromatic copy of every tree $T$ with at most $n/2+1$ vertices. Furthermore, there are trees with more than $n/2+1$ vertices for which this is not true. (Burr-Erdős conjecture, solved for large $n$ by Zhao) 4D) Every graph on $n$ vertices with at least $\frac{1}{2}\binom{n}{2}$ edges contains every tree with at most $n/2+1$ vertices. (Erdős, Sós conjecture) 5R) In every $r$-coloring of $K_{n,n}$, there is a monochromatic connected subgraph on at least $2n/r$ vertices and this is essentially best possible. (Gyárfás) 5D) Every balanced bipartite graph on $2n$ vertices with at least $n^2/r$ edges has a connected subgraph on at least $2n/r$ vertices and this is also best possible. (Gyárfás; Mubayi; Liu, Morris, Prince)
extension UIActivity.ActivityType { public static let openInSafari = UIActivity.ActivityType("TUSafariActivity") }
LOS ANGELES (AP) — An Indiana man arrested over the weekend in California with three assault rifles and ammunition in his car was forbidden from leaving his home state as part of probation stemming from a case in which he pointed a gun at neighbors, according to authorities and court records. Investigators on Monday were trying to determine whether James Wesley Howell had any plans to use the weapons. The 20-year-old told police that he was in the area to attend a gay pride event in West Hollywood that draws hundreds of thousands of people. His arrest came just a few hours after 49 people were shot and killed in a gay nightclub in Orlando, Florida. Police said they had found no evidence the incidents were connected. Howell of Jeffersonville, Indiana, was arrested in Santa Monica around 5 a.m. Sunday after residents called police to report suspicious behavior by a man who parked his white Acura sedan facing the wrong way. When officers arrived, they saw an assault rifle on Howell's passenger seat, Santa Monica police Lt. Saul Rodriguez said. They searched the car and found two more assault rifles, high-capacity magazines and ammunition, and a five-gallon bucket with chemicals that could be used to make an explosive device, police said. Santa Monica Police Chief Jacqueline Seabrooks initially tweeted that Howell told officers he wanted to "harm" the gay pride event, but she later corrected her statement to say that the suspect only said he was going to the parade. Howell was accused twice last year of threatening people with a gun, according to court records. Police in Charlestown, Indiana, said the first incident involved Howell's ex-boyfriend in October and the second involved a neighbor four days later. In the first incident, the ex-boyfriend said Howell pointed a rifle at him when he arrived at Howell's home to pick up his belongings. "James told me that if I stepped foot in his yard, he would shoot me," the ex-boyfriend told a responding officer, according to a police report. In the other incident a neighbor called police and said Howell had pointed a handgun at her. When officers arrived, he denied pointing a gun at anyone, saying he only cocked it and held it at his side. Police found a loaded revolver in his waistband. Howell was charged with misdemeanor intimidation in that case and reached a plea deal in April that placed him on probation and prohibited him from having weapons and from leaving the state. A felony charge of pointing a firearm was dropped. James Hayden, chief probation officer in Clark County, Indiana, said Monday that he would seek to revoke Howell's probation. Howell met with a probation officer on May 22 who rated him a low-level offender, Hayden said. Officers hadn't yet conducted a surprise home visit to check that Howell was following a judge's order that he not have weapons during his one-year probation, he said. Howell's parents didn't know he was heading to California and were trying to figure out what happened, said Louisville, Kentucky, attorney Bobby Boyd, who represented Howell in a local case. "They're certainly shocked by learning of the arrest out there in California," Boyd said. "They're dealing with it as best they can and trying to process it. ... There's nothing to indicate any sort of acts that the news has been reporting." Boyd said Howell's family is cooperating with federal agents and they were working to find an attorney in California. Howell was scheduled to appear in Los Angeles court Tuesday on weapons charges. 
A Facebook page that apparently belongs to Howell includes photos of the white Acura he was driving. The postings on the page are unremarkable: There's no enmity toward gays or notable political activism. One post says he's signing a petition to legalize marijuana. The page's most recent public post, from June 3, shows a photo comparing an Adolf Hitler quote to one from Hillary Clinton. An anti-Clinton, pro-Bernie Sanders photo was posted in February. The page says Howell worked as an auditor for a company that makes air filters. A former roommate, Grace Logsdon, told The Associated Press that Howell possessed at least five guns and liked to frequent a shooting range. Logsdon said Howell had a bad temper and had relationships with men and women. She called the California incident "sad, very sad" and said she hopes Howell gets some help. In California, the LA Pride event went on as usual Sunday, albeit with increased security. Los Angeles Mayor Eric Garcetti announced the arrest at the start of the parade and struck a defiant tone.
Found in Sichuan and Hubei China in evergreen broadleaved montane forests at elevations of 1600 to 2100 meters as a small sized, cool to cold growing terrestrial with conical pseudobulbs enveloped by leaf sheaths and carrying 2 to 3, arising after flowering, elliptic to obovate-lanceolate, densely pubescent beneath leaves that blooms in the spring on an axillary, erect, 17.6" [to 44 cm] long, densely pubescent, racemose, several flowered inflorescence
KGKL (AM) KGKL (960 AM) is a radio station broadcasting a talk radio format. Licensed to San Angelo, Texas, United States, the station serves the San Angelo area. The station is currently owned by Townsquare Media and features programming from ESPN Radio. References External links Category:News and talk radio stations in the United States GKL Category:Townsquare Media radio stations
Aval (TV series) Aval (English :She) is a 2011-2013 Indian Tamil-language family soap opera that aired Monday through Friday on Vijay TV from 7 November 2011 to 4 May 2012 at 7:00PM IST and 7 May 2012 to 15 March 2013 at 6:30PM IST for 416 episodes. The concept development and direction is by G. Jayakumar, with U. Vallimuthu being the episode director. It is a remake of the Malayalam serial Kumkumapoovuthat was aired on Asianet. Plot The series focuses on the relationship between a mother and her daughter. Jayanthi (Lakshmy Ramakrishnan) arranges for her daughter Amala (Mahalakshmi) to be married to Mahesh (Sanjeev). All's well with them until Amala finds out about Shalini (Sreekala/Nithya Ram), who lives in Mahesh's house. Jayanthi must come to terms with her feelings about the daughter she believed dead. Cast Lakshmy Ramakrishnan as Jayanthi (Shalini & Amala Mother) Sanjeev as Mahesh (Amala Husband) Mahalakshmi as Amala (Mahesh Wife) Sreekala/Nithya Ram as Shalini Harish Siva as Sheela Husband. Manikandan as Meyyappan Balaji Sridhar as Arun (Jayanthi's son) Airing history The series started airing on Vijay TV on 7 November 2011 Monday through Friday at 7:00PM IST. From Monday 7 May 2012, the series was moved to the 6:30PM IST time slot to make way for a new series, 7C. References External links official website Category:Vijay TV television series Category:Tamil-language television soap operas Category:Tamil romance television series Category:Tamil Nadu drama television series Category:2011 Tamil-language television series debuts Category:2010s Tamil-language television series Category:Tamil-language television programs Category:2013 Tamil-language television series endings
Recommended advice

Advice - toddler

The first milk-tooth evokes a lot of emotions in both the parents and the baby. The first milk-teeth usually grow in the 6th month, but sometimes they can appear as early as the 3rd or as late as the 8th month. Does your baby suddenly become fretful, wake up in the middle of the night and demand your constant attention? This is probably one of the seven growth spurts! It will last for about a week and then... your baby will seem to have learnt new skills almost overnight. In the 7th-8th month the baby’s attachment to mum becomes very obvious and is clearly expressed. At this time the baby can experience separation anxiety - it doesn’t let mum go away even a little and is afraid of strangers.
Uninspired showing should be the last word MIAMI – George Wilson, a font of oratory in years past, has been strangely subdued this season. His role on the defense has been reduced, and he hasn’t made the plays. I suspect it’s because the Senator’s enormous capacity for belief has diminished, too. “I talk to myself all the time,” Wilson said Sunday after the Bills lost to the Dolphins, 24-10. “It’s like different chapter, same story.” Wilson has been in the organization for nine seasons. He has played for Mike Mularkey, Dick Jauron and Chan Gailey, answered to general managers and defensive coordinators too numerous to mention. Yet the story never changes. The Bills keep selling hope, but there’s always another losing season under the Christmas tree. They raise your expectations and claim they’re making progress, but when the smoke clears, they’re back at the bottom of the AFC East. It was supposed to be different in the year of Mario Williams. Maybe they wouldn’t overtake the Patriots for the division title, but they would challenge for a wild card. Clearly, they were better than a Dolphins team breaking in a rookie quarterback. No way they’d finish last. But here they are again, last in the division. Guess what? They’re not better than Miami. The Dolphins won the rematch in convincing fashion, dominating the Bills through three quarters and coasting home on a day that begged for last-minute Christmas shopping. Miami isn’t going to the playoffs, either. But the Dolphins are better, because they have the most important pieces in any rebuilding project: They have their franchise quarterback in rookie Ryan Tannehill; and they have a promising coach in first-year man Joe Philbin. The Bills talk about progress, but they’ll finish behind Miami in what was supposed to be a transitional year for the Fish. Things are supposed to go in cycles in the NFL, but the wheel never stops on “Buffalo.” They paid through the nose for Mario Williams. The schedule was in their favor. They had five games against rookie quarterbacks. It was a perfect setup. They’re 5-10. This makes them 0 for their last 3 against teams starting a rookie quarterback. That should get people fired. Center Eric Wood said he’s certain they’re better. The films show it. But Wood said he knows how the NFL works, and that people are going to lose their jobs because of this. He’s right, and it should start with the coach. Players said during the week that they might be playing for Gailey’s job. If so, this was an uninspiring show of love. The Bills were careless and unfocused on offense. Fitzpatrick said it might have been the worst offensive showing of his time in Buffalo. Stevie Johnson, who was emotional in talking about Gailey last week, had a game to forget. He dropped a pass. He lost a fumble. He took the blame for the loss, which is more than you get from Mario Williams. Johnson admitted he might have been too wired to play his best game. One way or another, the players aren’t responding to their coach. They look like a team that lacks passion and focus – that knows the operation is about to come apart, as Wood suggested. How could they believe in Gailey? He’s 15-32 in Buffalo. He’s 3-14 in the AFC East, 1-4 this season. This makes 10 or more losses in each of his three years in charge. Three kicks at the can, and Gailey couldn’t match Dick Jauron’s signature record of 7-9. You judge coaches (and quarterbacks) by their performance on the road, when the good teams assert themselves in the NFL. 
The Bills are 5-19 in Gailey’s three years. They were outscored in those losses, 636-310. That’s an average score of 33-16. So if you’re desperate for signs of progress, Sunday’s margin was closer than the average road defeat in the Gailey era. OK, I tried. I’m not sure what it will take for Gailey to lose his job. General Manager Buddy Nix swears his belief in the guy. Gailey has at least one, maybe two, years left on his deal. Ralph Wilson doesn’t like searching for coaches, or paying them for not working. But he hates losing to the Dolphins. Don’t start with me about the officiating. I don’t want to hear about injuries, either. The Dolphins were riddled by injuries, especially on offense. But they outplayed the Bills’ defense, which was supposed to be getting better but has lost to rookie quarterbacks the last two weeks. Lamar Miller, a rookie, had a career-high 73 yards rushing on 10 carries. Add him to my list of backs who have their breakout game against the Bills. Rishard Matthews, a rookie receiver, had a career-best 30-yard catch. Tannehill won’t make anyone forget Russell Wilson, but he had a career-long, 31-yard option run. The Dolphins’ coaches were embarrassed by the offense’s showing in Buffalo last month. They were well-prepared for the rematch. The offense played with poise and purpose. It mixed up the plays well. The line did a nice job of blocking the Bills’ front as the Dolphins ran for 182 yards. “They ran the ball better this time,” Wilson said. “They moved the ball up and down the field, changing up personnel and tempo. As a team, we never gained any traction today to get the momentum changing in our direction.” Buffalo’s offense seemed confused and skittish at times. Johnson lost focus in the first half after his near-touchdown was ruled incomplete because he didn’t complete the catch while going to the ground. It was big of him to take the blame, but a strong coach doesn’t allow these mental lapses to persist. It didn’t help to lose tight end Scott Chandler. They were also without Donald Jones, nominally their top speed receiver. T.J. Graham seemed lost. Suddenly, Dorin Dickerson turned into a go-to receiver. But modest personnel doesn’t excuse some of the sloppy execution. The Bills showed a remarkable lack of urgency when they were 21 points behind in the fourth quarter, letting big chunks of time run off the clock between snaps. The players insist they care about Gailey. They say they’re playing hard. You can’t say they quit, the way they did in Toronto. But there’s a flat, defeated quality to this Bills team. Well, there’s always the Jets. Rex Ryan and his tortured team come to Orchard Park next weekend. If the Bills win, they’ll finish ahead of the Jets on conference tiebreaker. Maybe it’ll save Gailey’s job.
Magazine Announces New Marmalade Boy Manga

By Joseph Luster

Posted 2/28/2013

Wataru Yoshizumi's Marmalade Boy originally made its debut in shoujo magazine Ribon in 1992, where it ran until 1995, ultimately filling up eight volumes. Tokyopop released the series in North America, but in Japan, a new chapter is about to begin. Marmalade Boy Little is set 13 years after the end of Marmalade Boy, and focuses on the younger brother and sister of original series leads Miki Koishikawa and "marmalade boy" (sweet on the outside, bitter on the inside) Yuu Matsuura. The sequel is set to kick off in this year's fifth issue of Cocohana, which goes on sale in Japan on March 28.
When confronted with a predator near their nest, both male and female northern cardinals will give an alarm call that is a short, chipping note, and fly toward the predator in an attempt to scare it away. They do not aggressively mob predators. (Halkin and Linville, 1999)
The Way We Live Now: At the bottom of a bucket metaphor. GM is burning up the money we all gave it as fast as humanly possible. Bread lines are lengthening. But something's still selling: I'm no "expert" on economics, but aren't all those billions we gave to GM a little bit like trying to fill up a bucket with a big hole in the bottom? Today the company said its cash reserves are "dwindling." As in "we are just burning that shit, in huge piles." GM's cash stockpiles are disappearing at the rate of $113 million per day. Well I'm sure it's worth it, whatever they're doing with it! Idea for our government: instead of giving billions to money-losing corporations, why not invest that money in something that will appreciate in value, like antiques? There are literally dozens of antiques lying around the White House at this very moment. The rich are still bidding up the prices of fancy old furniture, on the theory that hey, at least you can sit on it if all else fails. And who can blame them? It's either antiques, or medieval torture devices. And GM is determined to get those tongue-removal pincers at any cost.
Activin A causes cancer cell aggressiveness in esophageal squamous cell carcinoma cells. Expression of activin A is associated with lymph node metastasis and clinical stage in esophageal cancer. To clarify the aggressive behavior of tumors with high activin A expression, we used the beta subunit of activin A to establish stable activin betaA (Act-betaA)-transfected carcinoma cells in two human esophageal carcinoma cell lines, KYSE110 and KYSE140. The biological behavior of these cells was compared with that of mock-transfected cells from the same cell lines. We focused our attention on cell growth and tumorigenesis, and on proliferation and apoptosis. Both Act-betaA-transfected carcinoma cell lines showed a higher growth rate than the mock-transfected carcinoma cells. In an in vitro invasion assay and a xenograft analysis, the Act-betaA-transfected carcinoma cells showed far higher proliferation in vitro and a higher potency for tumorigenesis in vivo, respectively. Moreover, in an analysis of apoptosis via Fas stimulation, the Act-betaA-transfected carcinoma cells showed a higher tolerance to apoptosis compared with the mock-transfected carcinoma cells. In addition, treatment of squamous cell cancer cell lines with an anti-activin neutralizing antibody inhibited their migration. Collectively, these data indicate that continuous high expression of activin A in esophageal carcinoma cells is not related to tumor suppression, but rather to tumor progression in vitro and in vivo. Inhibition of activin might therefore be one way to attenuate tumor aggressiveness.
Q: How can I assign inserted output value to a variable in sql server?
Possible Duplicate: SQL Server Output Clause into a scalar variable

DECLARE @id int
INSERT INTO MyTable(name)
OUTPUT @id = Inserted.id   -- not allowed: OUTPUT cannot assign directly to a scalar variable
VALUES('XYZ')

I am trying the above. How is it possible?

A: Use a table variable to capture the id:

DECLARE @id int
DECLARE @table table (id int)

INSERT INTO MyTable(name)
OUTPUT inserted.id INTO @table   -- OUTPUT ... INTO writes to a table (variable), not a scalar
VALUES('XYZ')

SELECT @id = id FROM @table
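For the common case where only one row is inserted and the id column is an IDENTITY column, a simpler alternative is SCOPE_IDENTITY(). A sketch, assuming MyTable.id is defined as IDENTITY:

DECLARE @id int
INSERT INTO MyTable(name) VALUES('XYZ')
SET @id = SCOPE_IDENTITY()   -- last identity value generated in the current scope
SELECT @id

The OUTPUT ... INTO form above remains the better choice when several rows are inserted at once, since it captures every generated id, and it also works for columns that are not identities.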
Ryan says health insurance mandate part of GOP tax talks
Originally published November 5, 2017 at 10:10 am; updated November 5, 2017 at 1:17 pm
House Speaker Paul Ryan, R-Wis., walks through Statuary Hall to his office on Capitol Hill in Washington, Friday, Nov. 3, 2017. Ryan introduced a far-reaching tax overhaul Thursday that will be a priority for the GOP. (AP Photo/J. Scott Applewhite)
WASHINGTON (AP) — House Speaker Paul Ryan said Republicans are discussing whether their tax plan should include a repeal of the Obama health law’s requirement that people have insurance coverage or face a penalty, a step pushed by President Donald Trump but seen by some GOP lawmakers as possibly imperiling a much-needed legislative victory. It would be another shot at further undermining the Affordable Care Act after repeated failures by the GOP-led Congress to repeal and replace the law, as Trump has demanded and Republicans promised would happen after President Barack Obama left office and Republicans ran Washington. The House Ways and Means Committee was set to begin work on the tax rewrite Monday, with the goal of full House consideration next week. The committee chairman, Rep. Kevin Brady, R-Texas, has said including a repeal of the health law’s individual mandate would be politically risky, given that the Senate has failed to pass health legislation in Trump’s first year. Ryan, R-Wis., told “Fox News Sunday” that “a lot of members are suggesting” that the House include the repeal, though he did not weigh in personally on how to deal with the coverage mandate. Equally evasive was the second-ranking House GOP leader: “Well, I know people are talking about it,” Rep. Kevin McCarthy, R-Calif., said on CBS’ “Face the Nation.” He added: “Look, my focus is on tax. As the individual mandate goes, I would not be opposed to that. But I want to see this bill go forward.” The Congressional Budget Office has estimated that repealing the individual mandate would save $416 billion over a decade. The mandate provides a powerful incentive for people to get coverage before health problems arise. But the money represents a tempting revenue source for GOP tax-writers whose tax plan would add an estimated $1.5 trillion over 10 years to the national debt. Rep. Mark Meadows, the chairman of the House Freedom Caucus, told ABC’s “This Week” that revenue counted through the repeal could in turn be used to soften the blow from the expiration of various tax credits and elimination of the deduction for state income and sales taxes. “We’re advocating on behalf of that,” said Meadows, R-N.C. Trump tweeted last week: “Wouldn’t it be great to Repeal the very unfair and unpopular Individual Mandate in ObamaCare and use those savings for further Tax Cuts for the Middle Class.” And Brady said Friday that the president had spoken to him twice by phone and once in person, making the case for scrapping the mandate. “There are pros and cons to this. Importing health care into a tax reform debate does have consequences,” Brady said last week. Rep. Peter King, R-N.Y., said on ABC that “we should confine this to tax cuts and tax reform.” Some Republican lawmakers from New York and New Jersey have come out against the bill in its current form, saying that ending deductions for state and local sales and income taxes would hit their constituents the hardest. Republicans talking up the tax bill on the Sunday shows played down concerns that the bill would add substantially to the nation’s debt.
“Preliminary numbers really look very good in terms of economic growth. So, over a longer period of time, some 10 to 15 years, we believe that the economic growth will outweigh any short-term deficit increase that we see,” Meadows said. “These so-called budget hawks have turned into an extinct, endangered species,” said House Democratic leader Nancy Pelosi of California during an appearance on CNN’s “State of the Union.” In examining the GOP’s tax bill, Congress’s Joint Committee on Taxation projected that the expiration of certain tax breaks would result in tax increases for some income groups in some years. An analysis from the liberal Center on Budget and Policy Priorities said the committee’s projections showed the tax cuts would overwhelmingly benefit the wealthiest households. Meanwhile, the joint committee’s analysis indicated that tax filers with incomes between $20,000 and $40,000 would pay higher individual income taxes in 2023 and each year thereafter, as would filers with incomes between $200,000 and $500,000. McCarthy said Senate rules did not allow the House to make certain tax cuts permanent, “but I will promise you this: As the growth comes in, those will be kept.”
Q: FileInputStream look into root of Jar
I have a FileInputStream in a class in the package com.nishu.ld28.utilities, and I want to access sound files in the folder Sounds, which is not in the com.nishu.ld28 package. I specify the path for loading like so:

"sounds/merry_xmas.wav"

And then try to load it like this:

new BufferedInputStream(new FileInputStream(path))

When I export the jar, the command line prompt that I run it through says it can't find the file. I know how to access the files when I am running the program in Eclipse, but I can't figure out how to point the FileInputStream to the Sounds folder when I export it.
Edit: As requested, here's my code:

public void loadSound(String path) {
    WaveData data = null;
    // Loads the resource from the classpath rather than the file system.
    data = WaveData.create(GameSound.class.getClassLoader().getResourceAsStream(path));
    int buffer = alGenBuffers();
    alBufferData(buffer, data.format, data.data, data.samplerate);
    data.dispose();
    source = alGenSources();
    alSourcei(source, AL_BUFFER, buffer);
}

WaveData accepts an InputStream or other types of IO.

A: I would put the sound folder inside com.nishu.ld28.utilities, the same package as your class; let's call that class MyClass. The resource then lives at com/nishu/ld28/utilities/sound/merry_xmas.wav inside the jar (the screenshot of the package layout is omitted here). Your code:

package com.nishu.ld28.utilities;

import java.io.InputStream;

public class MyClass {
    public static void main(String[] args) {
        // Resolved relative to MyClass's package because the path has no leading slash.
        InputStream is = MyClass.class.getResourceAsStream("sound/merry_xmas.wav");
        System.out.format("is is null ? => %s", is == null);
    }
}

Output:
is is null ? => false
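If you would rather keep the Sounds folder at the root of the jar, as the question's "sounds/merry_xmas.wav" path suggests, the classloader lookup already used in loadSound works too, as long as the folder really does get packaged into the jar (you can check with jar tf your-jar-file). The key point is that FileInputStream only reads from the file system, never from inside a jar, while getResourceAsStream reads from the classpath. A minimal sketch, assuming the resource is packaged at sounds/merry_xmas.wav (the class name ResourceCheck is just for illustration):

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceCheck {
    public static void main(String[] args) throws IOException {
        // getResourceAsStream reads from the classpath (including inside the jar),
        // while new FileInputStream(path) only reads from the working directory on disk.
        InputStream raw = ResourceCheck.class.getClassLoader()
                .getResourceAsStream("sounds/merry_xmas.wav");
        if (raw == null) {
            System.out.println("sounds/merry_xmas.wav is not packaged in the jar");
            return;
        }
        try (InputStream is = new BufferedInputStream(raw)) {
            System.out.println("Resource found, " + is.available() + " bytes buffered");
        }
    }
}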
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

using System;
using System.Reflection;
using System.Resources;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.TestPlatform.ObjectModel;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Microsoft Corporation")]
[assembly: AssemblyCopyright("© Microsoft Corporation. All rights reserved.")]
[assembly: AssemblyProduct("Microsoft.TestPlatform.ObjectModel")]
[assembly: AssemblyTrademark("")]
[assembly: NeutralResourcesLanguage("en-US")]
[assembly: CLSCompliant(true)]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("8a200cda-4813-43a1-aa18-9faedc31d2af")]

// Type forwarding utility classes defined earlier in object model to a core utilities assembly.
[assembly: TypeForwardedTo(typeof(EqtTrace))]
[assembly: TypeForwardedTo(typeof(ValidateArg))]
Q: Mutual Information as probability
Could the mutual information over the joint entropy:
$$ 0 \leq \frac{I(X,Y)}{H(X,Y)} \leq 1$$
be defined as: "the probability of conveying a piece of information from X to Y"? I am sorry for being so naive, but I have never studied information theory, and I am just trying to understand some of its concepts.
A: The measure you are describing is called the Information Quality Ratio [IQR] (Wijaya, Sarno and Zulaika, 2017). IQR is mutual information $I(X,Y)$ divided by the "total uncertainty" (joint entropy) $H(X,Y)$ (see Wijaya, Sarno and Zulaika, 2017). As described by Wijaya, Sarno and Zulaika (2017), the range of IQR is $[0,1]$. The largest value (IQR = 1) is reached when the DWT can perfectly reconstruct a signal without loss of information, while the lowest value (IQR = 0) means the MWT is not compatible with the original signal; in other words, a signal reconstructed with that particular MWT cannot keep the essential information and is totally different from the original signal's characteristics. You can interpret IQR as the probability that a signal will be perfectly reconstructed without loss of information. Notice that this interpretation is closer to the subjectivist interpretation of probability than to the traditional, frequentist one. It is a probability for a binary event (reconstructing the information vs. not), where IQR = 1 means that we believe the reconstructed information to be trustworthy and IQR = 0 means the opposite. It shares the properties of probabilities of binary events. Moreover, entropies share a number of other properties with probabilities (e.g., the definition of conditional entropy, independence, etc.). So it looks like a probability and quacks like it.
Wijaya, D.R., Sarno, R., & Zulaika, E. (2017). Information Quality Ratio as a novel metric for mother wavelet selection. Chemometrics and Intelligent Laboratory Systems, 160, 59-71.
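A short side note on why the ratio is bounded (standard information-theoretic identities, not taken from the cited paper): since $I(X,Y) = H(X) + H(Y) - H(X,Y)$, the ratio can be written as
$$\mathrm{IQR} = \frac{I(X,Y)}{H(X,Y)} = \frac{H(X) + H(Y) - H(X,Y)}{H(X,Y)},$$
and the bounds follow from $0 \leq I(X,Y)$ and $I(X,Y) \leq \min\{H(X), H(Y)\} \leq H(X,Y)$. IQR $= 0$ exactly when $X$ and $Y$ are independent, and IQR $= 1$ exactly when each variable fully determines the other, i.e., there is no residual uncertainty.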
This is a build log of my custom keyboard. It is inspired by the Minidox.
STEP 1: DESIGN
I started off by designing the keyboard to fit my hand, with correct dimensions. Another terrific source of information on keyboard layout is matt3o's book. This is the first sort of mock-up of the keyboard layout. While I was designing the keyboard layout, I would constantly print out the current iteration of the keypad to scale on a piece of paper, and check how my hand would manage to reach the keys. Because I did this, I never ran into a fitment problem for the actual keyboard.
STEP 2: TOP PLATE
Now, once the design was figured out, the actual build could start. First the aluminum plates had to be made. That started with a sheet of aluminum, which I cut down with an angle grinder to roughly the shape and size of the keypad. After the plates were cut down to the correct size, I drilled as many holes as possible where the keys should go. Once that had been done, pliers were used to pull out all the material that couldn't be removed with the drill. And finally, a file was used to get the corners square and the size of each hole correct.
STEP 3: SOLDERING
Then began the soldering. For far more specific information on the soldering process, I highly recommend Cribbit's keyboard soldering guide.
STEP 4: WOOD CENTER
Next came the wood. Sadly, I seem to have lost all the photos that I took during the woodworking process, but in essence it was very simple. I took a birch log and cut two slightly slanted slabs out of it. This slant was then sanded down to the final size. Then a process very similar to the top plate was used: holes were drilled, and a chisel was used very carefully to separate the center from the outside. The holes for the ports were drilled in, and a file was used to shape the holes correctly. Then I sanded the top and bottom of the wood to give it its correct slant and make sure that everything was flat. After a final sand on the inside to make sure that the keys fit, the top plate was screwed into place.
STEP 5: BOTTOM PLATE
The process for the bottom plate was similar to the top plate: angle grinder to get roughly to the correct size, drill a few holes, put the screws in, and boom. Time for the final sand.
STEP 6: FINISHING TOUCHES
After liberal use of a hand sander and some sandpaper, I sprayed the sides of the keyboard with lacquer, let it dry overnight, and it was done!
STEP 7: FIRMWARE
So now you have a keyboard which is all wired up and connected, but the controller doesn't actually understand anything and doesn't know how to talk to the computer, so we have to teach it. I used an open source firmware called QMK. It is extremely powerful and has more than enough features, with more constantly being added. Now, I'm not going to do a full write-up on how to develop your own firmware for a keyboard and how to flash it, but I will provide these two really useful links that I referenced a whole bunch. And the source code for my keyboard is on my GitHub.
RANDOM EXTRA TIPS
My firmware originated from the Minidox; the main modification that had to be made, which I didn't find mentioned anywhere, was the rev1 file. The original Minidox file and this is mine. The only difference is that mine has k32 and k72. I think the way the controller actually processes that is: the first number is the row, the second is the column, hence the grid being in the order that it is. Then I just added some more buttons to all the keymap matrices in the keymap.c file (a simplified sketch of what such a layout macro looks like follows at the end of this log).
This is the Minidox file, versus mine. Now to enjoy it.
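To make the k32/k72 naming concrete, here is a deliberately simplified, hypothetical sketch in plain C of how a QMK-style LAYOUT macro maps key names of the form k<row><column> into the row/column matrix that the controller scans. This is not the actual Minidox rev1 file; the sizes, names, and keycodes are invented for illustration.

#include <stdio.h>

#define KC_NO 0  /* placeholder keycode for a position with no physical switch */

/* Hypothetical 4x3 layout macro: each argument is named k<row><column>,
 * and the macro drops it into that row/column of the scan matrix. */
#define LAYOUT( \
    k00, k01, k02, \
    k10, k11, k12, \
    k20, k21, k22, \
         k31, k32  \
) { \
    { k00,   k01, k02 }, \
    { k10,   k11, k12 }, \
    { k20,   k21, k22 }, \
    { KC_NO, k31, k32 }  \
}

/* Real QMK keymaps use keycodes like KC_A; plain integers keep the sketch short. */
static const int keymap[4][3] = LAYOUT(
     1,  2,  3,
     4,  5,  6,
     7,  8,  9,
        10, 11
);

int main(void) {
    /* "k32" means row 3, column 2, which is exactly where keymap[3][2] lives. */
    printf("k32 -> keymap[3][2] = %d\n", keymap[3][2]);
    return 0;
}

Under that reading, adding a key such as k32 means adding its name to the macro's argument list, placing it at row 3, column 2 on the right-hand side, and then giving it a keycode in every layer of the keymaps array in keymap.c, which matches the change described above.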
452 B.R. 512 (2011) In re FRIEDMAN'S INC., a Delaware corporation, et al., Debtors. Friedman's Liquidating Trust, Plaintiff, v. Goldman Sachs Credit Partners, L.P., Plainfield Direct Inc., Ramius Value and Opportunity Master Fund, Ltd., Parche, LLC, Cadence Master Fund Ltd., and Ivy MA Holdings Cayman 8, Ltd., Defendants. Bankruptcy No. 08-10161 (CSS). Adversary No. 09-51010. United States Bankruptcy Court, D. Delaware. July 12, 2011. *514 Kenneth J. Nachbar, Robert J. Dehney (Argued), Eric D. Schwartz, Andrew R. Remming, Matthew B. Harvey, Morris, Nichols, Arsht & Tunnell, LLP, Wilmington, DE, for Goldman Sachs Credit Partners L.P., Plainfield Direct Inc., Ramius Value and Opportunity Master Fund Ltd., Parche, LLC and Cadence Master Fund Ltd. John D. Demmy, Maria Aprile Sawczuk, Stevens & Lee, P.C., Wilmington, DE, and Nicholas F. Kajon, David M. Green (Argued), Constantine Pourakis, New York, NY, for Friedman's Liquidating Trust. OPINION[1] CHRISTOPHER S. SONTCHI, Bankruptcy Judge. The Plaintiff (as defined below) filed this adversary proceeding objecting to Defendants' (as defined below) claims and seeking to recharacterize them. The defendants Goldman Sachs Credit Partners L.P., Plainfield Direct Inc., Ramius Value and Opportunity Master Fund Ltd., Parche, LLC and Cadence Master Fund Ltd. (collectively, the "Defendants")[2] filed a motion to dismiss the complaint (the "Motion to Dismiss") asserting that no claim exists because it was the intent of the parties for the monies at issue to be debt, rather than equity.[3] As the Plaintiff has made facially plausible allegations regarding the recharacterization of the claim (such as there was a pro rata contribution by the shareholders, interest payments were deferred, the interest rate was below market, interest was not paid when monies were available, and the contribution was on a subordinated unsecured basis) the Court will deny the Motion to Dismiss. STATEMENT OF FACTS[4] A. Background Friedman's Inc. ("Friedman's" or the "Debtor") filed a complaint objecting to the Defendants' general unsecured claims and seeking to recharacterize them as equity.[5] Thereafter, Friedman's Liquidating Trust (the "Plaintiff") was substituted as plaintiff.[6] Friedman's, a large retail jewelry chain, first filed for bankruptcy relief under Chapter 11 in January 2005 and exited that bankruptcy in December 2005. Crescent, another retail jewelry chain,[7] commenced *515 bankruptcy in August 2004. In July 2006, Crescent, while a debtor-in-possession, was purchased by Friedman's. In order for Friedman's to purchase Crescent, it needed, among other things, additional financing in the amount of $22,041,000. Friedman's obtained these funds from its shareholders in return for an "unsecured promissory note" due more than four (4) years later. Friedman's used these monies, together with claim waivers from Harbinger (Friedman's largest shareholder), to purchase the Crescent business (the "Transaction"). The financing at issue is documented in a Contribution Agreement, dated July 28, 2006 (the "Contribution Agreement").[8] All the shareholders of Friedman's (the "Funding Participants"),[9] including the Defendants, are parties to the Contribution Agreement, whereby Friedman's obtained over $22 million (the "Funding Obligation") to purchase the stock and common equity of Crescent, which then became a wholly-owned subsidiary of Friedman's. 
The Contribution Agreement provides that the Funding Obligation of each of the Funding Participants would be made on a pro rata basis, with each Funding Participant contributing that percentage of the Funding Obligation equal to its percentage equity interest in Friedman's. The Contribution Agreement further provides that the Funding Participants, each of which was to receive equity of Crescent under Crescent's Plan, would contribute sufficient equity to Friedman's to enable Friedman's to acquire all of the preferred stock and common equity of Crescent. The structure of the Contribution Agreement was recommended to the Friedman's board of directors by Peter J. Solomon Company, Friedman's financial advisors, based on an analysis by Grant Thornton, Friedman's certified public accountants, primarily as a means of maximizing net operating losses and other tax attributes of Crescent's which Friedman's would acquire under the Crescent Plan. In July 2006, pursuant to the Contribution Agreement, each of the Funding Participants advanced its respective portion of the Funding Obligation and related fees and expenses, with each Funding Participant contributing that percentage of the Funding Obligation equal to its percentage equity interest in Friedman's. In return, Friedman's issued executed "unsecured promissory notes" (individually a "Note" and collectively, the "Notes") in favor of each of the Funding Participants. The Notes were expressly subordinate and junior in right of payment to Friedman's secured lender.[10] Interest under the Notes was to accrue at the rate of 8% per annum, payable in cash quarterly in arrears, provided that so long as funds under the CIT Agreement (as defined in the margin) were outstanding and Friedman's commitments to CIT under the terms of the CIT Agreement had not been terminated, interest would not be paid but would accrue and be added on a quarterly basis to the principal amount of the Notes. *516 Pursuant to the Notes, and in compliance with the CIT Agreement, Friedman's could pay the interest due under the Notes (as long as the CIT Agreement was not then in default). In December 2006, when Friedman's was in full compliance with the CIT Agreement, Friedman's received a tax refund of approximately $8.5 million from the Internal Revenue Service in respect of the fiscal years 2001-2004 (the "Tax Refund").[11] Under the terms of the Notes and the CIT Agreement, the proceeds of the Tax Refund could have been used to pay all interest then owing to the Funding Participants under the Notes. However, Friedman's did not exercise its discretion to pay the interest due and owing nor was it demanded by the Funding Participants. No interest was ever paid on the Notes. In January 2008, Friedman's was the subject of an involuntary petition and a short time later the Court converted the case to a voluntary case under Chapter 11. B. Procedural Posture Friedman's filed the above-captioned action seeking to recharacterize the Funding Contributions made by the Defendants (approximately $22 million) and objecting to the Defendants' general unsecured claims. The Defendants have filed the Motion to Dismiss the Complaint claiming that the Plaintiff has failed to state a claim because the parties intended the Funding Obligation to be a loan (and not equity). LEGAL DISCUSSION A. The Standard Regarding Sufficiency of Pleadings When Evaluating a Motion to Dismiss for Failure to State a Claim Upon Which Relief Can Be Granted. 
A motion under Rule 12(b)(6)[12] serves to test the sufficiency of the factual allegations in the plaintiff's complaint.[13] With the Supreme Court's recent decisions in Bell Atlantic Corp. v. Twombly[14] and Ashcroft v. Iqbal,[15] "pleading standards have seemingly shifted from simple notice pleading to a more heightened form of pleading, requiring a plaintiff to plead more than the possibility of relief to survive a motion to dismiss."[16] In Iqbal, the Supreme Court makes clear that the Twombly "facial plausibility" pleading requirement applies to all civil suits in the federal courts.[17] "Threadbare recitals of the elements of a cause of action, supported by mere conclusory statements" are insufficient to survive a motion to dismiss.[18] Rather, "all civil complaints *517 must now set out sufficient factual matter to show that the claim is facially plausible."[19] A claim is facially plausible "when the plaintiff pleads factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged."[20] Determining whether a complaint is "facially plausible" is "a context-specific task that requires the reviewing court to draw on its judicial experience and common sense.[21] But where the well-pleaded facts do not permit the court to infer more than the mere possibility of misconduct, the complaint has alleged—but not shown—that the pleader is entitled to relief."[22] After Iqbal, the Third Circuit has instructed this Court to "conduct a two-part analysis. First the factual and legal elements of a claim should be separated. The [court] must accept all of the complaint's well-pleaded facts as true, but may disregard any legal conclusions."[23] The court "must then determine whether the facts alleged in the complaint are sufficient to show that the plaintiff has a plausible claim for relief."[24] The Third Circuit has *518 further instructed that "[s]ome claims will demand relatively more factual detail to satisfy this standard, while others require less."[25] B. The Plaintiff Has Plead Sufficient Facts In Support Of Its Claim For Recharacterization. The focus of recharacterization in the Third Circuit is "whether the parties called an instrument one thing when in fact they intended it as something else. That intent may be inferred from what the parties say in their contracts, from what they do through their actions, and from the economic reality of the surrounding circumstances."[26] The Third Circuit has rejected a "mechanistic scorecard" in favor of a case-by-case approach.[27] "[T]he overarching inquiry in a recharacterization case is the intent of the parties at the time of the transaction, determined not by applying any specific factor, but through a common sense evaluation of the facts and circumstances surrounding a transaction:"[28] [C]ourts have adopted a variety of multi-factor tests borrowed from non-bankruptcy caselaw. While these tests undoubtedly include pertinent factors, they devolve to an overarching inquiry: the characterization as debt or equity is a court's attempt to discern whether the parties called an instrument one thing when in fact they intended it as something else. That intent may be inferred from what the parties say in their contracts, *519 from what they do through their actions, and from the economic reality of the surrounding circumstances. 
Answers lie in facts that confer context case-by-case.[29] In Broadstripe, this Court stated: "when existing lenders make loans to a distressed company, they are trying to protect their existing loans and traditional factors that lenders consider (such as capitalization, solvency, collateral, ability to pay cash interest and debt capacity rations) do not apply as they would when lending to a financially healthy company." However, in SubMicron Judge Ambro also placed considerable weight on the Judge Robinson's "reference to the conflicting testimony and relative credibility of witnesses presented by both parties," while noting that, with respect to recharacterization, "[a]nswers lie in facts that confer context case-by-case." Given the nature of the inquiry, and the fact intensive nature of this case, triable issues of fact appear to exist.[30] Recharacterization is a question of fact.[31] Courts have adopted various multi-factor tests to define the recharacterization inquiry.[32] For example, in AutoStyle, the Sixth Circuit adopted an eleven factor test. Other courts have adopted similar multi-factor tests. The Third Circuit has held that all of these tests include "pertinent factors."[33] When the District Court ruled in Submicron, it considered the following factors: (1) the name given to the instrument; (2) the intent of the parties; (3) the presence or absence of a fixed maturity date; (4) the right to enforce payment of principal and interest; (5) the presence or absence of voting rights; (6) the status of the contribution in relation to regular corporate contributors; and (7) certainty of payment in the event of the corporation's insolvency or liquidation.[34] Although the Third Circuit affirmed this ruling it did so annunciating the overarching inquiry of "intent" rather than a factored-test.[35] As many of the elements used by the District Court in Submicron are similar to those annunciated in AutoStyle (but now only used as indicia of intent), some of those indicators will be discussed infra.[36] Nonetheless, as the Third Circuit frequently cautions, "[n]o mechanistic scorecard suffices,"[37] and this Court must not allow a multi-factor test to obscure *520 the relevant factual and legal analysis. 1. Names Given to the Instruments, if any, Evidencing the Indebtedness The first factor in AutoStyle is the name given to the instruments. "The absence of notes or other instruments of indebtedness is a strong indication that the advances were capital contributions and not loans."[38] In Fidelity Bond and Mortgage Company,[39] this Court recharacterized a "promissory note" made by the debtor to old shareholders of the debtor.[40] After considering testimony, including expert testimony, the Court found: that (i) the structure at issue was created in order for the debtor to maximize certain tax benefits,[41] (ii) the debtor did not provide for payment of any principal indebtedness under the "notes" though the first five years, and (iii) the documents referred to the amount due to the defendants as "indebtedness."[42] After balancing these facts against the title given to the instrument, the Court concluded that the "promissory notes" were intended to be an equity investment in the debtor and not debt.[43] Here, similarly to Fidelity Bond, the monies were provided for under the "Contribution Agreement" and "Subordinated Promissory Note Due December 9, 2010." 
The Contribution Agreement states that "[t]he Funding Obligation and Expense Amount shall be made as unsecured subordinated loans to the Company."[44] This factor weighs in favor of characterizing the Funding Obligation as debt. 2. Presence or Absence of a Fixed Maturity Date and Schedule of Payments. The next factor in AutoStyle is the presence or absence of a fixed maturity date and schedule of payments. "The absence of a fixed maturity date and a fixed obligation to repay is an indication that the advances were capital contributions and not loans."[45] Here, the Notes became repayable over four years after entry into the Notes with no interim payment of principal.[46] Again, this fact is similar to the facts considered by the court in Fidelity Bond.[47] Accordingly, although there is a fixed maturity date, Friedman's was not required to make any principal payments for over four years. This factor weighs neither in favor of characterizing the Notes as equity nor as debt. 3. No Fixed Rate of Interest and Interest Payments. Another factor in the AutoStyle analysis is the presence or absence of a *521 fixed rate of interest and interest payments. The absence of such is a strong indication the investment was a capital contribution, rather than a loan.[48] Here, the Notes have an interest rate of 8% per annum,[49] however the interest accrued and was added on a quarterly basis to the principal amount of the Note.[50] The Plaintiff also alleges that the interest rate is below prime.[51] Lastly, Plaintiff points out that the Defendants could have demanded an interest payment from the Tax Refund but did not. In AtlanticRancher, Inc.,[52] the bankruptcy court was faced with the characterization of the following: a promissory note with a maturity date and interest rate used for working capital and treated as debt on the debtor's books.[53] The bankruptcy court held that "[d]espite proper documentation, [the lender] never made any effort to collect the Convertible Promissory Note or foreclose on his collateral. He recognized that if he attempted to exercise his rights as a secured creditor it would have put the company out of business. Thus, [the lender] did not treat the Convertible Promissory Note and the rights contained in it as a loan; rather he treated the $300,000 as an investment."[54] Although distinguishable from the case sub judice in that here interest was deferred and the principal payments had not yet become due, the lack of demand of payment of interest from the Tax Refund in this case indicates that the Funding Participants were treating the Notes as equity. The deferral of interest payments, the below prime interest rate, along with the allegation that the Tax Refund was not used to pay all the interest then owing to the Funding Participants under the Notes,[55] all weigh in favor of characterizing the Notes as equity. 4. Repayment Dependent on Success. The AutoStyle test also considers the source of repayments. 
"If the expectation of repayment depends solely on the success of the borrower's business, the transaction has the appearance of a capital contribution."[56] The Complaint alleges that the "Defendant's expectation of repayment of the Funding Obligation depended solely on the success of Friedman's business."[57]*522 Further, in the response to the Motion to Dismiss, the Plaintiff argues that even if the Tax Refund was fully collected and wholly applied to the "debt," there remained a $13,541,000 shortfall between the tax proceeds and the principal of the Notes; then at least $13,541,000 plus all accrued interest on the Notes could be paid solely from Friedman's earnings.[58] There have been no allegations nor argument that there was another source of payment for the Transaction. Therefore, this factor weighs in favor of characterizing the Notes as equity.[59] 5. Inadequacy of Capitalization. AutoStyle also considers the adequacy of capitalization. "Thin or inadequate capitalization is strong evidence that the advances are capital contributions rather than loans."[60] The Plaintiff has alleged in the Complaint that Friedman's was undercapitalized as a result of the Funding Obligations and purchase of Crescent. In fact, the Complaint specifically refers to Friedman's balance sheet pre- and post-acquisition.[61] This factor weighs in favor of characterizing the Notes as equity. 6. Identity of Interests Between Creditor and Stockholder. Another factor in the AutoStyle test is the identity of interest between the creditor and the stockholder. "If stockholders make advances in proportion to their respective stock ownership, an equity contribution is indicated. On the other hand, a sharply disproportionate ratio between a stockholder's percentage interest in stock and debt is indicative of bona fide debt. Where there is an exact correlation between the ownership interests of the equity holders and their proportionate share of the alleged loan this evidence standing alone is almost overwhelming."[62] Here, the Complaint alleges that there is an "exact correlation" between the ownership interests of the equity holders and their proportionate share of the alleged loan.[63] This factor weighs in favor of characterizing the Notes as equity. 7. Security, if any, for the Advances. Another factor in the AutoStyle test is the presence or absence of security for the advances made under the alleged debt. "The absence of a security for an advance is a strong indication that the advances were capital contributions rather than loans."[64] Here, the Note was fully subordinated to the CIT Obligation and was unsecured.[65] This factor weighs in favor of characterizing the Notes as equity. 8. Ability to Obtain Financing From Outside Lending Institutions. Yet another factor in the AutoStyle test is the debtor's ability to obtain outside financing. "When there is no evidence of other outside financing, the fact that no reasonable creditor would have acted in the same manner is strong evidence that the advances were capital contributions *523 rather than loans."[66] There are no allegations in the Complaint regarding alternative sources of financing. Therefore, this factor weighs neither in favor of characterizing the Notes as equity nor as debt. 9. Extent to Which the Advances Were Subordinated to the Claim of Outside Creditors. Another factor in the AutoStyle test is the extent to which the payments to be made are subordinated to the claims of outside creditors. 
"Subordination of advances to claims of all other creditors indicates that the advances were capital contributions, not loans."[67] The Note expressly states that the Funding Obligation is subordinate to Friedman's secured debt.[68] The Notes entitle the Funding Participants to payment ahead of subordinated debt and equity, and on par with trade and other general unsecured debt. The Defendants argue that as the Notes provide for a right to payment above other interests and therefore is indicative of equity. This factor weighs in favor of characterizing the Notes as debt. 10. The Extent to Which the Advances Were Used to Acquire Capital Assets. Another factor in the AutoStyle test is whether the advances were used to acquire capital assets. "Use of advances to meet the daily operating needs of the corporation, rather than to purchase capital assets, is indicative of bona fide indebtedness."[69] The Funding Obligation was used for the purchase of Crescent, which became a wholly owned subsidiary of Friedman's.[70] Therefore, this factor weighs in favor of characterizing the Notes as equity. 11. Presence or Absence of a Sinking Fund. Another factor in the AutoStyle test is the presence or absence of a sinking fund to provide repayments.[71] The Complaint alleges that: "Friedman's never established a sinking fund to provide for the repayment of the Notes, and neither the Contribution Agreement nor the Notes provided for the establishment of a sinking fund for that purpose."[72] Instead, Friedman's asserts that any repayment of the Note depended solely on the success of Friedman's business. Therefore, this factor weighs in favor of characterizing the Notes as equity. 12. Presence or absence of voting rights A factor discussed by the District Court in Submicron is the presence or absence or voting rights.[73] The Complaint does not allege nor does the Promissory Notes, or Contribution Agreement grant any of *524 the Funding Participant the right to vote. Therefore, this factor weighs in favor of characterizing the Notes as debt. 13. Other Considerations The Defendants argue that the parties' intended the Funding Obligation to be a loan. 
The Defendants liken this case to American Twine.[74] In American Twine, the court, was faced with validity of a $10 million secured bridge loan that certain defendants and other individuals extended to an entity of which they were also shareholders.[75] The bridge loan carried a 35% annual interest rate and a one year term and included no warrants or convertibility features.[76] The lenders received a security interest in all of the borrower's assets.[77] When structuring the loan the borrower and lender considered, but rejected, a proposal that the investment take the form of equity and a proposal to include warrants and convertibility features.[78] In reaching its conclusion after the trial that the transaction was debt and not equity,[79] the American Twine court evaluated expert testimony and testimony of some of the key players in the transaction.[80] The American Twine court also considered the reasonableness of the terms, including interest rate of 35% (an above-market rate) and the security interest granted in the borrower's assets, in reaching its conclusion that these particular terms provided for the risky nature of the investment.[81] The Defendants in this case argue that because Friedman's considered, but rejected, an equity infusion that the Funding Obligation cannot be considered equity.[82] The Defendants correctly identify one similarity between the transaction in American Twine and this case. Nonetheless, when one applies the entirety of the issues considered in American Twine to this case, one readily concludes that the Plaintiff has asserted a plausible claim that the Notes are equity. For example, in this case there was a pro rata contribution by the shareholders (based in relation to each's equity holdings), the payment of the "debt" to the Funding Participants depended upon Friedman's Tax Refund, non-payment of interest, the lack of a security interest, and the fact that the "debt" was subordinated. The Defendants also argue that as Friedman's had the option of choosing debt or equity when formulating the Transaction and that this explicitly proves "intent" for the Transaction to be debt. At this point in the proceedings, however, the only thing that shows "intent" are the actual terms of the Contribution Agreement and Note, and not the name/vocabulary *525 of the Transaction documents.[83] C. Conclusion For those keeping score at home the factors identified above weigh in favor of characterizing the Notes as equity by a score of: Equity 7, Debt 3 and Neither 2. But, of course, this Court is not to base its decision on such a mechanical exercise. Rather, the Court is to use its evaluation of the above described factors to make its decision "through a common sense evaluation of the facts and circumstances surrounding a transaction."[84] While the Defendants' arguments may ultimately prevail regarding intent, at this stage of the proceedings, taking all the allegations in the Complaint as true,[85] the Court finds that the Plaintiff has alleged plausible facts that require a further evidentiary record for the Court to characterize the Notes as debt or equity. CONCLUSION For the foregoing reasons, the Court will deny the motion to dismiss as the Complaint alleges facially plausible facts, which taken as true, may constitute a recharacterization claim. An order will be issued. NOTES [1] "The court is not required to state findings or conclusions when ruling on a motion under Rule 12...." Fed. R. Bankr.P. 7052(a)(3). 
Accordingly, the Court herein makes no findings of fact and conclusions of law pursuant to Rule 7052 of the Federal Rules of Bankruptcy Procedure. [2] On August 17, 2010, default judgment was entered against defendant Ivy MA Holdings Cayman 8, Ltd. ("Ivy") (Adv. D.I. 62). Ivy is not a movant nor is included in the defined term "Defendants" for the purposes of this Memorandum Opinion. [3] See Adv. D.I. 21, 22, 32, and 36. [4] The background set forth infra is gathered from the allegations set forth in the Complaint, as defined below. See Fowler v. UPMC Shadyside, 578 F.3d 203, 210-11 (3d Cir. 2009) ("The [court] must accept all of the complaint's well-pleaded facts as true, but may disregard any legal conclusions."). [5] Adv. D.I. 1 (the "Complaint"). [6] Adv. D.I. 30; see also D.I. 2391 (Order Confirming Joint Plan of Liquidation Under Chapter 11 of the Bankruptcy Code). [7] Prior to Crescent's bankruptcy, Friedman's and Crescent had some common ownership and interrelated contractual arrangements, including loans, guarantees, and service agreements. [8] The Contribution Agreement is attached as Exhibit A to the Complaint. [9] The other Funding Participants (which are not Defendants) are Harbinger Capital Partners Master Fund I, Ltd., Liberation Investments, L.P., Liberation Investments Ltd., Man Mac Gemstock Limited, and Prescott Group Aggressive Small Cao, L.P. [10] Friedman's was party to a Second Amended and Restates Loan and Security Agreement dated March 8, 2006 (the "CIT Agreement"), whereby The CIT Group/Business Credit, Inc. ("CIT") was agent. [11] The Plaintiff alleged that this tax refund was not related to the Transaction. See Plaintiff's Memorandum in Opposition to Motion to Dismiss, p. 4 (D.I. 32). [12] Federal Rules of Civil Procedure 8(a) and 12(b)(6) are made applicable to this adversary proceeding pursuant to Federal Rules of Bankruptcy Procedure 7008 and 7012, respectively. [13] Kost v. Kozakiewicz, 1 F.3d 176, 183 (3d Cir.1993) ("The pleader is required to set forth sufficient information to outline the elements of his claim or to permit inferences to be drawn that these elements exist." (citations omitted)). [14] 550 U.S. 544, 127 S.Ct. 1955, 167 L.Ed.2d 929 (2007). [15] ___ U.S. ___, 129 S.Ct. 1937, 173 L.Ed.2d 868 (2009). [16] Fowler, 578 F.3d at 210. [17] See Fowler, 578 F.3d at 210. [18] Iqbal, 129 S.Ct. at 1949. See also Sands v. McCormick, 502 F.3d 263, 268 (3d Cir.2007) (citations omitted); Bartow v. Cambridge Springs SCI, 285 Fed.Appx. 862, 863 (3d Cir. 2008) ("While facts must be accepted as alleged, this does not automatically extend to bald assertions, subjective characterizations, or legal conclusions."); General Motors Corp. v. New A.C. Chevrolet, Inc., 263 F.3d 296, 333 (3d Cir.2001) ("Liberal construction has its limits, for the pleading must at least set forth sufficient information for the court to determine whether some recognized legal theory exists on which relief could be accorded the pleader. Conclusory allegations or legal conclusions masquerading as factual conclusions will not suffice to prevent a motion to dismiss. While facts must be accepted as alleged, this does not automatically extend to bald assertions, subjective characterizations, or legal conclusions." (citations omitted)). [19] Fowler, 578 F.3d at 210 (internal quotations omitted). See also Iqbal, 129 S.Ct. at 1950 ("While legal conclusions can provide the framework of a complaint, they must be supported by factual allegations."); Buckley v. Merrill Lynch & Co. 
(In re DVI, Inc.), 2008 WL 4239120, 2008 Bankr.LEXIS 2338 (Bankr.D.Del. Sept. 16, 2008) ("Rule 8(a) requires a showing rather than a blanket assertion of an entitlement to relief. We caution that without some factual allegation in the complaint, a claimant cannot satisfy the requirement that he or she provide not only fair notice, but also the grounds on which the claim rests." (citations omitted)). [20] Iqbal, 129 S.Ct. at 1949. [21] Iqbal, 129 S.Ct. at 1950. "It is the conclusory nature of [plaintiff's] allegations, rather than their extravagantly fanciful nature, that disentitles them to the presumption of truth." Id. at 1951. [22] Id. at 1950 (citations and internal quotations omitted). [23] Fowler, 578 F.3d at 210-11. See also Twombly, 550 U.S. at 555, 127 S.Ct. 1955 (holding that a court must take the complaint's allegations as true, no matter how incredulous the court may be); Iqbal, 129 S.Ct. at 1949-50 ("Threadbare recitals of the elements of a cause of action, supported by mere conclusory statements, do not suffice.... When there are well-plead factual allegations, a court should assume their veracity and then determine whether they plausibly give rise to an entitlement to relief."); Winer Family Trust v. Queen, 503 F.3d 319, 327 (3d Cir.2007); Carino v. Stefan, 376 F.3d 156, 159 (3d Cir.2004). The Court may also consider documents attached as exhibits to the Complaint and any documents incorporated into the Complaint by reference. In re Fruehauf Trailer Corp., 250 B.R. 168, 183 (Bankr.D.Del.2000) (citing PBGC v. White, 998 F.2d 1192, 1196 (3d Cir.1993)). "[I]f the allegations of [the] complaint are contradicted by documents made a part thereof, the document controls and the Court need not accept as true the allegations of the complaint." Sierra Invs., LLC v. SHC, Inc. (In re SHC, Inc.), 329 B.R. 438, 442 (Bankr.D.Del.2005). See also Sunquest Info. Sys., Inc. v. Dean Witter Reynolds, Inc., 40 F.Supp.2d 644, 649 (W.D.Pa.1999) ("In the event of a factual discrepancy between the pleadings and the attached exhibit, the exhibit controls." (citations omitted)). [24] Fowler, 578 F.3d at 211 (internal quotations omitted) ("[A] complaint must do more than allege the plaintiff's entitlement to relief. A complaint has to `show' such an entitlement with its facts." (citations omitted)). "The plaintiff must put some `meat on the bones' by presenting sufficient factual allegations to explain the basis for its claim." Buckley v. Merrill Lynch & Co., Inc. (In re DVI, Inc.), 2008 WL 4239120, at *4, 2008 Bankr.LEXIS 2338, at *13 (Bankr.D.Del. Sept. 16, 2008). [25] In re Ins. Brokerage Antitrust Litig., 618 F.3d 300, 361 n. 18 (3d Cir. Aug. 16, 2010). See also Arista Records LLC v. Doe, 604 F.3d 110, 120-21 (2d Cir.2010) (stating that Twombly and Iqbal require factual amplification where needed to render a claim plausible, not pleadings of specific evidence or extra facts beyond what is needed to make a claims plausible). [26] Cohen v. KB Mezzanine Fund II, LP (In re SubMicron Sys. Corp.), 432 F.3d 448, 456 (3d Cir.2006). Official Comm. of Unsecured Creditors of Fedders N. Am., Inc. v. Goldman Sachs Credit Partners L.P. (In re Fedders N. Am., Inc.), 405 B.R. 527, 554 (Bankr.D.Del.2009) ("Recharacterization has nothing to do with inequitable conduct, however." (citation omitted)). 
[27] The Third Circuit has stated: While some cases are easy (e.g., a document titled a "Note" calling for payments of sums certain at fixed intervals with market-rate interest and these obligations are secured and are partly performed, versus a document issued as a certificate indicating a proportional interest in the enterprise to which the certificate relates), others are hard (such as a "Note" with conventional repayment terms yet reflecting an amount proportional to prior equity interests and whose payment terms are ignored). Which course a court discerns is typically a commonsense conclusion that the party infusing funds does so as a banker (the party expects to be repaid with interest no matter the borrower's fortunes; therefore, the funds are debt) or as an investor (the funds infused are repaid based on the borrower's fortunes; hence, they are equity). Form is no doubt a factor, but in the end it is no more than an indicator of what the parties actually intended and acted on. Id. [28] Radnor Holdings Corp. v. Tennenbaum Capital Ptnrs., 353 B.R. 820, 838-839 (Bankr. D.Del.2006). See also Fedders N. Am., Inc., 405 B.R., at 554 ("The Third Circuit has held that the overarching inquiry with respect to recharacterizing debt as equity is whether the parties to the transaction in question intended the loan to be a disguised equity contribution. This intent may be inferred from what the parties say in a contract, from what they do through their actions, and from the economic reality of the surrounding circumstances." (citations omitted)). [29] SubMicron, 432 F.3d at 455-456; see also Radnor Holdings Corp., 353 B.R. at 838-839. [30] Official Unsecured Creditors' Comm. of Broadstripe, LLC v. Highland Capital Mgmt., L.P. (In re Broadstripe, LLC), 444 B.R. 51, 94 (Bankr.D.Del.2010) (quoting SubMicron, 432 F.3d at 456-57); Official Comm. of Unsecured Creditors of Midway Games Inc. v. Nat'l Amusements Inc. (In re Midway Games Inc.), 428 B.R. 303, 322 (Bankr.D.Del.2010) ("Recharacterization is, by its nature, a fact intensive inquiry."). [31] SubMicron, 432 F.3d at 457. [32] Compare Bayer Corp. v. MascoTech, Inc. (In re AutoStyle Plastics, Inc.), 269 F.3d 726, 749-50 (6th Cir.2001) (using an eleven factor test) with Stinnett's Pontiac Serv., Inc. v. Comm'r, 730 F.2d 634, 638 (11th Cir.1984) (using a thirteen factor test) and Estate of Mixon v. United States, 464 F.2d 394, 402 (5th Cir.1972). [33] SubMicron, 432 F.3d at 456. [34] Cohen v. KB Mezzanine Fund II, L.P. (In re SubMicron Sys. Corp.), 291 B.R. 314, 323 (D.Del.2003) (citation omitted). [35] SubMicron, 432 F.3d at 455-56. [36] Compare AutoStyle, 269 F.3d at 749-50 with Submicron, 291 B.R. at 323. [37] Id. [38] AutoStyle, 269 F.3d at 750. [39] Fid. Bond & Mortg. Co. v. Brand (In re Fid. Bond & Mortg. Co.), 340 B.R. 266 (Bankr. E.D.Pa.2006). [40] Id. at 303. [41] Id. ("The Phoenix shareholders needed to structure a tax free event, and Peat Marwick went through many gyrations and ended that we, the old shareholders, although we would have liked to have held a higher equity position in the company, were told that in order for the transaction to work, we were limited to holding 20 percent of the equity, and that in order to accomplish the transaction, we needed to take back subordinated notes in the same value that Phoenix was putting into the acquisition company." (quoting trial transcript)). [42] Id. [43] Id. [44] Contribution Agreement, § 1.2. [45] AutoStyle, 269 F.3d at 750. [46] Note at p. 1. [47] Fidelity Bond, 340 B.R. at 303. 
[48] AutoStyle, 269 F.3d at 750-51 ("At best ... this factor cuts both ways since the deferral of interest payments indicates the possibility that during the course of the transaction the defendants eventually never expected to get repaid and converted their debt to equity. Still, it does not change the fact that, initially at least, there was a fixed rate of interest and interest payments, indicating that the transaction was originally intended to be debt not equity."). [49] The Complaint alleges that the interest rate on the Notes was less than the prime rate at the time of the transaction. Complaint at ¶ 47. [50] Note at ¶ 3. [51] Complaint at ¶ 47. See also discussion of American Twine L.P. v. Whitten, 392 F.Supp.2d 13 (D.Mass.2005), infra. [52] Aquino v. Black (In re AtlanticRancher, Inc.), 279 B.R. 411 (Bankr.D.Mass.2002). [53] In re AtlanticRancher, Inc., 279 B.R. at 437. [54] Id. (citations to transcript omitted). [55] Complaint at ¶¶ 40-41. [56] AutoStyle, 269 F.3d at 751; Stinnett's Pontiac Service, Inc. v. Commissioner of IRS, 730 F.2d 634, 639 (11th Cir.1984) (holding that the only way of repayment was from earnings produced from operations because the only non-earning source available for repayment was the sale of a boat which was inadequate to fulfill the obligation). [57] Complaint at ¶ 49. [58] Plaintiff's Opposition to Motion to Dismiss (D.I. 32), at pp. 21-22. [59] See also discussion re: sinking fund, infra. [60] AutoStyle, 269 F.3d at 750-51. [61] Complaint at ¶¶ 44-46. [62] AutoStyle, 269 F.3d at 751. [63] Complaint at ¶¶ 28-37. [64] AutoStyle, 269 F.3d at 752. [65] Note at ¶¶ 6 and 6.1-6.6. [66] AutoStyle, 269 F.3d at 752. [67] AutoStyle, 269 F.3d at 752. [68] Note at ¶¶ 1, 6, and 6.1-6.6. [69] AutoStyle, 269 F.3d at 752. [70] See Contribution Agreement, p. 1 ("WHEREAS, pursuant to the Second Amended Plan of Reorganization (Dated June 7, 2006), as Modified (as further modified or amended, the "Plan") of Crescent Jewelers, a California corporation ("Crescent"), the Company, the Funding Participants and the Claimsholders are to receive, collectively, 100% of the shares of common stock of Crescent to be issued upon the emergence by Crescent from its chapter 11 bankruptcy proceedings"); see also Contribution Agreement, § 1.1. [71] AutoStyle, 269 F.3d at 753. [72] Complaint, ¶ 49. [73] SubMicron, 291 B.R. at 323. [74] American Twine L.P. v. Whitten, 392 F.Supp.2d 13 (D.Mass.2005). [75] Id. at 15. [76] Id. at 18. [77] Id. [78] Id. at 19. [79] Id. at 23 ("Some of the Atlantic Rancher factors favor treating the Bridge Loan as capital and some favor treating it as debt. In the aggregate, an analysis of the Atlantic Rancher factors leads convincingly, but not overwhelmingly, to the conclusion that the Bridge Loan should be treated as debt.") [80] American Twine, 392 F.Supp.2d at 19 [81] Id. at 20. [82] The Motion to Dismiss refers to paragraphs 28 and 38 of the Complaint and states that an equity infusion was expressly rejected. However, the Court's reading of the Complaint, taking all allegations to be true, does not necessarily agree with the Defendant's characterizations of the allegations in the Complaint. [83] See AutoStyle, 269 F.3d at 750. [84] Radnor Holdings Corp., 353 B.R. at 838-839. [85] Fowler, 578 F.3d at 210-11.
--- abstract: 'We report results from a search for production of a neutral Higgs boson in association with a $b$ quark. We search for Higgs decays to $\tau$ pairs with one $\tau$ subsequently decaying to a muon and the other to hadrons. The data correspond to 2.7 fb$^{-1}$ of $\ppbar$ collisions recorded by the D0 detector at $\sqrt{s} = 1.96$ TeV. The data are found to be consistent with background predictions. The result allows us to exclude a significant region of parameter space of the minimal supersymmetric model.' date: 'December 4, 2009' title: Search for the associated production of a quark and a neutral supersymmetric Higgs boson which decays to tau pairs --- list\_of\_authors\_r2.tex The current model of physics at high energies, the standard model (SM), has withstood increasingly precise experimental tests, although the Higgs boson needed to mediate the breaking of electroweak symmetry has not been found. Despite the success of the SM, it has several shortcomings. Theories invoking a new fermion-boson symmetry, called supersymmetry [@b-susy] (SUSY), provide an attractive means to address some of these including the hierarchy problem and nonunification of couplings at high energy. In addition to new SUSY-specific partners to SM particles, these theories have an extended Higgs sector. In the minimal supersymmetric standard model (MSSM) there are two Higgs doublet fields which result in five physical Higgs bosons: two neutral scalars ($h, H$), a neutral pseudoscalar ($A$) and two charged Higgs bosons ($H^\pm$). The mass spectrum of the Higgs bosons is determined at tree level by two parameters, typically chosen to be $\tan\beta$, the ratio of the vacuum expectation values of up-type and down-type scalar fields and $M_A$, the mass of the physical pseudoscalar. Higher order corrections are dominated by the Higgsino mass parameter $\mu$ and the mixing of scalar top quarks. In this Letter, we present a search for neutral Higgs bosons (collectively denoted $\phi$) produced in association with a $b$ quark. The specific Higgs boson decay mode used in this search is $\phi\to\tau\tau$ with one of the $\tau$ leptons subsequently decaying via $\tau\to\mu\nu_{\tau}\nu_{\mu}$ (denoted $\tau_\mu$) and the second via $\tau\to\ \mathrm{hadrons}+\nu_\tau$ (denoted $\tau_{h}$). In the MSSM the Higgs coupling to down-type fermions is enhanced by a factor $\propto \tan\beta$ and thus the Higgs production cross section is enhanced by a factor $\propto\tan^2\beta$ relative to the SM, giving potentially detectable rates at the Tevatron. Two of the three neutral Higgs bosons have nearly degenerate masses over much of the parameter space, effectively giving another factor of two in production rate. A previous search in this final state was carried out by the D0 experiment [@b-p14]. Searches in the complementary channels $\phi Z/\phi\phi\to b\bar{b}\tau\tau,\tau\tau b\bar{b}$ [@b-LEP2], $\phi\to\tau\tau$ [@b-tautau; @b-tautau-cdf], and $\phi b\to \bbbar b$ [@b-hb-bbb; @b-hb-bbb-cdf] have also been carried out by the LEP, D0, and CDF experiments. By searching in complementary channels we reduce overall sensitivity to the particular details of the model. The $b\tau\tau$ final state is less sensitive to SUSY radiative corrections than the $\bbbar b$ final state, and has greater sensitivity at low Higgs mass than the $\phi\to\tau\tau$ channel, as the $b$-jet in the final state reduces the $Z\to\tau\tau$ background. 
Furthermore, an additional complementary channel will contribute to an even stronger exclusion when combining different searches. The result presented in this Letter uses an integrated luminosity of 2.7 fb$^{-1}$ which is eight times that used for the previous result in this channel. Because of analysis improvements, the gain in sensitivity compared to the prior result is greater than expected from the increased integrated luminosity only. We also extend the Higgs mass search range relative to the previous result in this channel. The D0 detector [@d0det] is a general purpose detector located at Fermilab’s Tevatron $\ppbar$ collider. The Tevatron operates at a center of mass energy of 1.96 TeV. This analysis relies on all aspects of the detector: tracking, calorimetry, muon detection, the ability to identify detached vertices and the luminosity measurement. This search requires reconstruction of muons, hadronic $\tau$ decays, jets (arising from $b$ quarks) and neutrinos. Muons are identified using track segments in the muon system and are required to have a track reconstructed in the inner tracking system which is close to the muon-system track segment in $\eta$ and $\varphi$. Here $\eta$ is the pseudorapidity and $\varphi$ is the azimuthal angle in the plane perpendicular to the beam. Jets are reconstructed from calorimeter information using the D0 Run II cone algorithm [@b-jetalg] with a radius of $R=0.5$ in $(y,\varphi)$ space, where $R=\sqrt{(\Delta y^2+\Delta \varphi^2)}$ and $y$ is the rapidity. Jets are additionally identified as being consistent with decay of a $b$-flavored hadron ($b$-tagged) if the tracks aligned with the calorimeter jet have high impact parameter or form a vertex separated from the primary interaction point in the plane transverse to the beam as determined by a neural network (NN$_b$) algorithm [@b-btag]. Hadronic $\tau$ decays are identified [@b-ztautau] as clusters of energy in the calorimeter reconstructed [@b-jetalg] using a cone algorithm of radius $R=0.3$ which have associated tracks. The $\tau$ candidates are then categorized as being one of three types which correspond roughly to one-prong $\tau$ decay with no $\pi^0$s (called Type 1), one-prong decay with $\pi^0$s (Type 2) and multiprong decay (Type 3). A final identification requirement is based on the output value of a neural network (NN$_\tau$) designed to separate $\tau$ leptons from jets. The missing transverse energy ${\mbox{$\not\!\!E_T$}}$ is used to infer the presence of neutrinos. The ${\mbox{$\not\!\!E_T$}}$ is the negative of the vector sum of the transverse energy of calorimeter cells satisfying $|\eta|<3.2$. ${\mbox{$\not\!\!E_T$}}$ is corrected for the energy scales of reconstructed final state objects, including muons. Signal acceptance and efficiency are modeled using simulated SM $\phi b$ events generated with the  event generator [@b-pythia] requiring the $b$ quark to satisfy $p_T>15$ GeV/$c$ and $|\eta|<2.5$ and using the CTEQ6L1 [@b-cteq] parton distribution functions (PDF). The  [@b-tauola] program is used to model $\tau$ decay and  [@b-evtgen] is used to decay $b$ hadrons. The dependence of the Higgs boson decay width on $\tan\beta$ is included by reweighting samples, and the kinematic properties are reweighted to the prediction of the NLO program  [@b-mcfm]. The generator outputs are passed through a detailed detector simulation based on  [@b-geant]. 
Each  event is combined with collider data events recorded during a random beam crossing to model the effects of detector noise, pileup, and additional $p \bar p$ interactions. The combined output is then passed to the D0 event reconstruction program. Simulated signal samples are generated for different Higgs masses ranging from 90 to 320 GeV/$c^2$. Backgrounds to this search are dominated by $Z$+jets, $\ttbar$, and multijet (MJ) production. In the MJ background the apparent leptons primarily come from semileptonic $b$ hadron decays, not $\tau$ decays. Additional backgrounds include $W$+jets events, SM diboson production and single top quark production. Except for the MJ contribution, all background yields are estimated using simulated events, with the same processing chain used for signal events. The $Z$+jets, $W$+jets and $\ttbar$ samples are generated using  [@b-alpgen] with  used for fragmentation. The diboson samples are generated using . For simulated samples in which there is only one lepton arising from the decay of a $W$ boson or from $\ttbar\to\ell+$jets, the second lepton is either a jet misidentified as a $\tau$ or a muon+jet system from heavy flavor decay in which the muon is misidentified as being isolated from other activity. Corrections accounting for differences between data and the simulation are applied to the simulated events. The corrections are derived from control data samples and applied to object identification efficiencies, trigger efficiencies, primary $\ppbar$ interaction position (primary vertex) and the transverse momentum spectrum of $Z$ bosons. After applying all corrections, the yields for signal and each background are calculated as the product of the acceptance times efficiency determined from simulation, luminosity and predicted cross sections. The initial analysis step is selection of events recorded by at least one trigger from a set of single muon triggers for data taken before the summer of 2006. For data taken after summer 2006 we require at least one trigger from a set of single muon triggers and muon plus hadronic $\tau$ triggers. The average trigger efficiency for signal events is approximately 65% for both data epochs. After making the trigger requirements a background-dominated pre-tag sample is selected by requiring a reconstructed primary vertex for the event with at least three tracks, exactly one reconstructed hadronic $\tau$, exactly one isolated muon, and at least one jet. This analysis requires the $\tau$ candidates to satisfy $E_T^\tau>10$ GeV, $p_T^\tau>7(5)$ GeV/$c$ and $NN_\tau>0.9$ for Type 1(2) taus, $E_T^\tau>15$ GeV, $p_T^\tau>10$ GeV/$c$ and $NN_\tau>0.95$ for Type 3 taus. Here $E_T^{\tau}$ is the transverse energy of the $\tau$ measured in the calorimeter, $p_T^\tau$ is the transverse momentum sum of the associated track(s). The muon must satisfy $p_T^\mu>12$ GeV/$c$ and $|\eta|<2.0$. It is also required to be isolated from activity in the tracker and calorimeter [@b-hpp]. Selected jets have $E_T>15$ GeV, $|\eta|<2.5$. The $\tau$, the muon and jets must all be consistent with arising from the same primary vertex and be separated from each other by $\Delta R > 0.5$. In addition, the muon and $\tau$ are required to have opposite charge, and the $(\mu,{\mbox{$\not\!\!E_T$}})$ mass variable $M \equiv \sqrt{2{\mbox{$\not\!\!E_T$}}E_\mu^2/p_T^\mu(1-\cos(\Delta\varphi(\mu,{\mbox{$\not\!\!E_T$}})))}$ must satisfy $M < 80,\ 80,\ \mathrm{and}\ 60$ GeV$/c^{2}$ for events with $\tau$s of Type 1, 2 and 3 respectively. 
Here $E_\mu$ is the energy of the muon, and $\Delta\varphi$ is the opening angle between the ${\mbox{$\not\!\!E_T$}}$ and muon in the plane transverse to the beam direction. A more restrictive $b$-tag subsample with improved signal to background ratio is defined by demanding that at least one jet in each event is consistent with $b$ quark production [@b-btag]. The $b$-jet identification efficiency in signal events is about 35% and the probability to misidentify a light jet as a $b$ jet is 0.5%. All backgrounds except MJ are derived from simulated events as described earlier. The MJ background is derived from control data samples. A parent MJ-enriched control sample is created by requiring a muon, $\tau$, and jet as above, but with the muon isolation requirement removed and with a lower quality ($0.3\leq NN_{\tau} \le 0.9$) $\tau$ selected. This is then used to create a $b$-tag subsample which requires at least one of the jets to be identified as a $b$ jet with the same $b$ jet selection as earlier. The residual contributions from SM backgrounds are subtracted from the MJ control samples using simulated events. To determine the MJ contribution in the pre-tag analysis sample, a data sample is used that has the same selection as the pre-tag analysis sample except that the muon and $\tau$ charges have the same sign. This same-sign (SS) sample is dominated by MJ events. After making a subtraction of other SM background processes which contribute to this sample, the number of MJ events in the opposite-sign (OS) signal region is computed by multiplying the SS sample by the OS/SS ratio, $1.05\pm0.02$, determined in a control sample selected by requiring a non-isolated muon.. For the $b$-tag analysis sample, statistical limitations require a different approach for the MJ background evaluation than for the pre-tag sample. For the $b$-tag sample, two methods are used. For the first method, the per jet probability $P_{tag}$ that a jet in the SS MJ control subsample would be identified as a $b$ jet is determined as a function of jet $p_T$. We apply $P_{tag}$ to the jets in the SS pre-tag sample to determine the yield in the $b$-tag sample. For the second method, the MJ background is determined by multiplying the $b$-tag MJ control sample yield by two factors: (1) the probability that the non-isolated muon would be identified as isolated, and (2) the ratio of events with a $\tau$ candidate passing the $NN_{\tau}$ requirements to events with $\tau$ candidates having $0.3\leq NN_{\tau} < 0.9$ as determined in a separate control sample. The final MJ contribution in the $b$-tag analysis sample is determined using the MJ shape from the first method with the normalization equal to the average of the two methods. We include the normalization difference between the two methods in the systematic uncertainty on the MJ contribution. The signal to background ratio is further improved using multivariate techniques. Two separate methods are used, one to address the $\ttbar$ background and one to reduce the MJ background. For the $\ttbar$ background, a neural network ($\mbox{\em NN}_{top}$) is constructed using $H_T \equiv \Sigma_{jets}E_T$, $E_{tot} \equiv \Sigma_{jets} E + E_\tau + E_\mu$, the number of jets and $\Delta\varphi(\mu,\tau)$ as inputs. For the MJ background, a simple joint likelihood discriminant ($\mbox{\em LL}_{MJ}$) is constructed using $p_T^\mu$, $p_T^\tau$, $\Delta R(\mu,\tau)$, $M_{\mu\tau}$ and $M_{\mu\tau\nu}$. 
Here $M_{\mu\tau}$ denotes the invariant mass of the muon and tau, and $M_{\mu\tau\nu}$ is the invariant mass computed from the muon, $\tau$, and ${\mbox{$\not\!\!E_T$}}$ momentum vectors. The final analysis sample is defined by selecting rectangular regions in the $\mbox{\em NN}_{top}$ versus $\mbox{\em LL}_{MJ}$ plane. The regions have been identified for each $\tau$ type and each Higgs boson mass point separately by optimizing the search sensitivity using simulated events. The signal to background ratio improves by up to a factor of two when applying these requirements. Table \[t-yields\] shows the predicted background and observed data yields in the analysis samples. Between 5% and 10% of $\phi\to\tau_\mu \tau_h$ decays are selected depending on $M_\phi$.

                                     Pre-tag        $b$-tagged     Final
  -------------------------------- -------------- -------------- ---------------
  $\ttbar$                          $66.0\pm1.3$   $39.6\pm0.8$   $20.3\pm0.6$
  Multijet                          $549\pm26$     $38.5\pm2.3$   $28.1\pm1.9$
  $Z(\to\tau\tau)+\mathrm{jets}$    $1241\pm8$     $18.8\pm0.3$   $16.3\pm0.3$
  Other Bkg                         $267\pm6$      $5.1\pm0.1$    $4.1\pm0.1$
  Total Bkg                         $2123\pm28$    $102\pm2.4$    $68.8\pm2.0$
  Data                              2077           112            79
  Signal                            $14.4\pm0.3$   $4.8\pm0.1$    $4.6\pm0.1$

  : Predicted background yield, observed data yield and predicted signal yield and their statistical uncertainties at three stages of the analysis. The signal yields are calculated assuming $\tan\beta = 40$ and a Higgs mass of 120 GeV/$c^{2}$ for the $m_h^{max}$ and $\mu = -200$ GeV$/c^{2}$ scenario.\[t-yields\]

Systematic uncertainties arise from a variety of sources. Most are evaluated using comparisons between data control samples and predictions from simulation. The uncertainties are divided into two categories: (1) those which affect only normalization, and (2) those which also affect the shape of distributions. The sources in the first category include the luminosity (6.1%), muon identification efficiency (4.5%), $\tau_{h}$ identification (5%, 4%, 8%), $\tau_{h}$ energy calibration (3%), the $\ttbar$ and single top cross sections (11% and 12%), diboson cross sections (6%), $Z$+($u$,$d$,$s$,$c$) rate (+2%, -5%) and the $W+b$ and $Z+b$ cross sections (30%); those in the second include jet energy calibration (2%-4%), $b$-tagging (3%-5%), trigger (3%-5%), and MJ background (33%, 12%, 11%). For sources with three values, the values correspond to $\tau$ Types 1, 2 and 3 respectively. After making the final selection, the discriminant $D$ is formed from the product of the $\mbox{\em NN}_{top}$ and $\mbox{\em LL}_{MJ}$ variables, $D=\mbox{\em LL}_{MJ} \times \mbox{\em NN}_{top}$. The resulting distributions for the predicted background, signal and data are shown in Fig. \[f-prod-final\](a). This distribution is used as input to a significance calculation using a modified frequentist approach with a Poisson log-likelihood ratio test statistic [@b-collie]. In the absence of a significant signal we set 95% confidence level limits on the presence of neutral Higgs bosons in our data sample. The cross section limits are shown in Fig. \[f-xsec\](b) as a function of Higgs boson mass. These are translated into the $\tan\beta$ versus $M_A$ plane in the $m_h^{max}, \mu=-200$ GeV/$c^2$ MSSM benchmark scenario [@b-benchmark], giving the excluded region shown in Fig. \[f-plane\](c). The signal cross sections and branching fractions are computed using FeynHiggs [@b-sig-xsec]. Instabilities in the theoretical calculation for $\tan\beta > 100$ limit the usable mass range in the translation into the $(\tan\beta,\,M_A)$ plane.
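For concreteness, a minimal sketch in Python of the $(\mu,{\mbox{$\not\!\!E_T$}})$ mass variable $M$ and the type-dependent preselection requirement quoted above, following the expression as written (the helper names are ours, not part of the D0 analysis code):

```python
import math

def transverse_mass_mu_met(e_mu, pt_mu, met, dphi_mu_met):
    """(mu, missing-E_T) mass variable as written in the text:
    M = sqrt( 2 * MET * E_mu^2 / pT_mu * (1 - cos(dphi)) ).
    Energies and momenta in GeV, dphi in radians."""
    return math.sqrt(2.0 * met * e_mu**2 / pt_mu * (1.0 - math.cos(dphi_mu_met)))

# Type-dependent requirement quoted in the selection: M < 80, 80, 60 GeV/c^2
M_CUT = {1: 80.0, 2: 80.0, 3: 60.0}

def passes_m_cut(tau_type, e_mu, pt_mu, met, dphi_mu_met):
    return transverse_mass_mu_met(e_mu, pt_mu, met, dphi_mu_met) < M_CUT[tau_type]
```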
In summary, this Letter reports a search for production of Higgs bosons in association with a $b$ quark using eight times more data than previous results for this channel. The data are consistent with predictions from known physics sources and limits are set on the neutral Higgs boson associated production cross section. These cross section limits, a factor of three improvement over previous results, are also translated into limits in the SUSY parameter space. ![image](h_MVproductcombo.eps){width="0.32\linewidth"} ![image](xseclimit_square.eps){width="0.32\linewidth"} ![image](mhmaxmuneg.eps){width="0.32\linewidth"} acknowledgement\_paragraph\_r2.tex [99]{} list\_of\_visitor\_addresses\_r2.tex H. P. Nilles, Phys Rep. [**110**]{}, 1 (1984); H. E. Haber and G. L. Kane, Phys. Rep. [**117**]{}, 75 (1985). V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**102**]{}, 051804 (2009). The LEP Working Group for Higgs Boson Searches (ALEPH, DELPHI, L3, and OPAL Collaborations), Eur. Phys. J. C [**47**]{}, 547 (2006). V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**101**]{}, 071804 (2008). A. Abulencia [*et al.*]{}, [(CDF Collaboration)]{}, Phys. Rev. Lett. [**96**]{}, 011802 (2006). V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett, [**101**]{}, 221802 (2008). T. Affolder [*et al.*]{}, (CDF Collaboration), Phys. Rev. Lett. [**86**]{}, 4472 (2001). V.M. Abazov [*et al.*]{} (D0 Collaboration), Nucl. Instrum. Methods in Phys. Res. A [**565**]{}, 463 (2006). G. Blazey [*et al.*]{}, arXiv:hep-ex/0005012 (2000). T. Scanlon, Ph.D. Dissertation, Imperial College London, FERMILAB-THESIS-2006-43 (2006). V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Lett. B [**670**]{}, 292 (2009). T. Sjöstrand [*et al.*]{}, Comput. Phys. Commun. [**135**]{}, 238 (2001). Version 6.409. J. Pumplin [*et al.*]{}, JHEP [**0207**]{}, 012 (2002); D. Stump [*et al.*]{}, JHEP [**0310**]{}, 046 (2003). Z. Was, Nucl. Phys. B - Proc. Suppl. [**98**]{}, 96 (2001). Version 2.5.04. D.J. Lange, Nucl. Instrum. Methods in Phys. Res. A [**462**]{}, 152 (2001). Version 9.39. J. Campbell and R.K. Ellis, Phys. Rev. D [**65**]{}, 113007 (2002). Version 5.6. R. Brun and F. Carminati, CERN Program Library Long Writeup W5013, 1993 (unpublished). Version 3.21. M.L. Mangano, M. Moretti, F. Piccinini, R. Pittau, and A. Polosa, JHEP [**07**]{}, 001 (2003). Version 2.11. V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**93**]{}, 141801 (2004). W. Fisher, FERMILAB-TM-2386-E (2007). M. Carena, S. Heinemeyer, C.E.M. Wagner, G. Weiglein, Eur. Phys. J. C [**45**]{}, 797 (2006). M. Frank, T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak, and G. Weiglein, JHEP [**0702**]{}, 47 (2007). Version 2.65; G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich, G. Weiglein, Eur. Phys. J. C [**28**]{}, 133 (2003); S. Heinemeyer, W. Hollik, G. Weiglein, Eur. Phys. J. C [**9**]{}, 343 (1999); S. Heinemeyer, W. Hollik, G. Weiglein, Comput. Phys. Commun. [**124**]{}, 76 (2000).
LET dependence of the response of a PTW-60019 microDiamond detector in a 62MeV proton beam. This study was initiated following conclusions from earlier experimental work, performed in a low-energy carbon ion beam, indicating a significant LET dependence of the response of a PTW-60019 microDiamond detector. The purpose of this paper is to present a comparison between the response of the same PTW-60019 microDiamond detector and an IBA Roos-type ionization chamber as a function of depth in a 62MeV proton beam. Even though proton beams are considered as low linear energy transfer (LET) beams, the LET value increases slightly in the Bragg peak region. Contrary to the observations made in the carbon ion beam, in the 62MeV proton beam good agreement is found between both detectors in both the plateau and the distal edge region. No significant LET dependent response of the PTW-60019 microDiamond detector is observed consistent with other findings for proton beams in the literature, despite this particular detector exhibiting a substantial LET dependence in a carbon ion beam.
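In practice, such a comparison reduces to forming a depth-by-depth ratio of the two detector readings after normalizing each in the plateau; a minimal sketch under assumed inputs (hypothetical array names and an illustrative plateau window, not the actual measurement pipeline):

```python
import numpy as np

def response_ratio(depth_md, signal_md, depth_roos, signal_roos, plateau=(2.0, 10.0)):
    """Depth-wise ratio of microDiamond to Roos-chamber readings.

    Hypothetical inputs: depths in mm of water (sorted, increasing) and the
    corresponding detector readings. Both curves are normalized to their mean
    plateau reading after interpolating the chamber scan onto the microDiamond
    depths; a ratio consistent with 1.0 through the Bragg peak and distal edge
    corresponds to the absence of a significant LET dependence reported above."""
    depth_md = np.asarray(depth_md, dtype=float)
    signal_md = np.asarray(signal_md, dtype=float)
    roos = np.interp(depth_md, np.asarray(depth_roos, float), np.asarray(signal_roos, float))
    in_plateau = (depth_md >= plateau[0]) & (depth_md <= plateau[1])
    return (signal_md / signal_md[in_plateau].mean()) / (roos / roos[in_plateau].mean())
```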
Chicago Family Sues ICE & City Over Raid, Gang Database By The MacArthur Justice Center The Chicago Police Department's sharing of its so-called "Gang Database" with U.S. Immigration and Customs Enforcement (ICE) triggered a nightmarish chain of events that left Wilmer Catalan-Ramirez imprisoned, in severe physical pain and mental anguish, and fighting deportation, according to a federal civil rights lawsuit filed on Monday. Catalan-Ramirez is a devoted father and a mechanic who has never belonged to a Chicago street gang. Despite this fact, CPD mistakenly labeled him as a gang member and conveyed this false information to ICE. ICE relied on this erroneous information during one of its March 2017 "Gang Ops" during which ICE targeted community members who have alleged gang ties. In January 2017, in his Back of the Yards neighborhood, Catalan-Ramirez was a bystander during a drive-by shooting that left him with multiple gunshot wounds. These injuries left him with fractures to his skull and shoulder, a traumatic brain injury and partial paralysis. Catalan-Ramirez has spent the months since the shooting in rehabilitation with the assistance of his wife Celene Adame. On March 27, around six ICE agents entered Catalan-Ramirez's family's apartment without a warrant, slammed him to the floor and handcuffed him - aggravating his preexisting injuries, according to the suit filed in U.S. District Court for the Northern District of Illinois. "I saw the immigration agents slam my husband to the floor while we told them he was injured, but they still hurt him and took him to detention," said Adame. "There needs to be consequences for everyone responsible for hurting our family, for my husband, and so that it doesn't happen to others." The suit alleges violations of unreasonable search and seizure, and due process protections in the U.S. Constitution. The suit also alleges that the manner in which CPD gathers and disseminates false information about gang membership violated the Illinois Civil Rights Act, which prohibits racial and ethnic discrimination. In addition to CPD and ICE agents, defendants include the City of Chicago; Ricardo Wong, ICE's Chicago Field Office Director; McHenry County; and several officials working at the McHenry County Jail, which contracts with ICE to imprison immigration detainees awaiting resolution of their cases. While in the McHenry County Jail, Catalan-Ramirez has been denied needed medical care for his injuries and spends most of his day isolated in a cell. Because of the lack of medical care, he risks living for the rest of his life with partial paralysis. Catalan-Ramirez is represented by the Roderick and Solange MacArthur Justice Center and the National Immigration Project of the National Lawyers Guild and is supported by Organized Communities Against Deportations (OCAD) and Mijente, two organizations that organize against harsh immigration enforcement tactics, and who have been advocating for an expansion of Chicago's "Sanctuary" status by removing the gang database, amongst other reforms. "Wilmer's case is an example of how local city policies, such as the Gang Database, put immigrant communities in the path of Trump's deportation machine," said Xanat Sobrevilla, organizer with OCAD. "If the City of Chicago truly wants to be a sanctuary city where immigrants can seek safe refuge, it should stop sharing its Gang Database with ICE and inform ICE the database is rife with inaccuracies and is not a legitimate law enforcement tool." 
"Wilmer Catalan-Ramirez's rights have been trampled on and his physical and mental well-being is in danger because of conditions in the ICE-approved detention facility where he is now held," said Vanessa del Valle, an attorney with the MacArthur Justice Center and clinical assistant professor of law at Northwestern. "ICE exacerbated his pre-existing injuries; traumatized him, his wife and children; and left him with severe injuries that could last a lifetime. Now, his condition worsens with each passing day." "The past 100 days of the Trump Administration have meant mass raids and deportations for immigrant communities across the country," said Sejal Zota, legal director of the National Immigration Project of the National Lawyers Guild. "The injuries suffered by Mr. Wilmer Catalan-Ramirez exemplify the extreme level of ICE abuse in raids carried out by this Administration. Mr. Wilmer Catalan-Ramirez is a long-time resident of Chicago, beloved father of young U.S. Citizen children, and recent gunshot victim with severe medical complications. It is clear that he belongs back with his family and community in Chicago where he can receive critical medical care. These rogue ICE raids must cease."
--- author: - 'Kevin J. Walsh' - Alessandro Morbidelli title: 'The Effect of an Early Planetesimal-Driven Migration of the Giant Planets on Terrestrial Planet Formation' --- Introduction ============ The formation of the terrestrial planets is expected to have occurred from a disk of planetesimals in two steps. In the first step, Moon to Mars-size “planetary embryos” formed by runaway and oligarchic accretion (Greenberg et al. 1978; Wetherill & Stewart 1993; Kokubo & Ida 1998). In the second step, the terrestrial planets formed by high-velocity collisions among the planetary embryos (Chambers & Wetherill 1998; Agnor et al. 1999; Chambers 2001; Raymond et al. 2004, 2005, 2006, 2007; O’Brien et al. 2006; Kenyon & Bromley 2006). The most comprehensive effort to date in modeling terrestrial planet formation (Raymond et al. 2009) focused on 5 constraints of the terrestrial planets: 1. the orbits, particularly the small eccentricities, 2. the masses, with the small mass of Mars the most difficult to match, 3. formation timescales, 4. bulk structure of the asteroid belt and 5. the water content of Earth. Despite success with some of these constraints in each simulation, no simulation satisfied all the constraints simultaneously. For the simulations with fully formed Jupiter and Saturn on nearly circular orbits, the constraint consistently missed is the small mass of Mars. A Mars of the correct size is only obtained in simulations where the giant planets are on orbits with current semimajor axes but much larger eccentricities. This scenario, however, raises the problem of not allowing any water delivery to Earth from material in the outer asteroid belt region. The size of Mars has been a consistent problem for previous works with giant planets assumed on current orbits and disks of planetesimals and embryos stretching from $\sim$0.5–4.0 AU (Chambers & Wetherill 1998; O’Brien et al. 2006), or even only up to 1.5 or 2.0 AU (Kokubo et al. 2006; Chambers 2001). However, Hansen (2009) had great success creating analogs of Mars in simulations which begin with a narrow annulus of planetary embryos between 0.7 and 1.0 AU. In these simulations both Mercury and Mars are formed from material that is scattered out of the original annulus by the growing Earth and Venus analogs. In addition, the orbits of the Earth and Venus analogs have eccentricities and inclinations similar to those observed today and the accretion timescales are in agreement, although on the low side, with the ages of the Earth-Moon system deduced from the $^{182}$Hf - $^{182}$W chronometer. This model points to the need for a truncated planetesimal disk at, or near, the beginning of the process of terrestrial planet formation. The origin of this truncation remains to be understood. Similarly, it remains to be clarified how the truncation of the disk of planetesimals at 1 AU can be compatible with the existence of asteroids in the 2-4 AU region. Nagasawa et al. (2005) and Thommes et al. (2008) effectively produced a cut in the planetesimal distribution at  1.5 AU by assuming that the giant planets were originally on their current orbits and that secular resonances swept through the asteroid belt during gas-dissipation. However, the assumption that the giant planets orbits had their current semimajor axes when the gas was still present is no longer supported. When embedded in a gas disk, planets migrate relative to each other until a resonance configuration is achieved (Peale & Lee 2002; Kley et al. 2009; Ferraz-Mello et al. 
2003; Masset & Snellgrove 2001; Morbidelli & Crida 2007; Pierens & Nelson 2008). Thus it is believed that the giant planets were in resonance with each other when the gas disk disappeared (Morbidelli et al. 2007; Thommes et al. 2008; Batygin & Brown 2010) which causes problems in understanding the consequences of the Thommes et al. (2008) model. Moreover, the Nagasawa et al. (2005) and Thommes et al. (2008) simulations produce the terrestrial planets too quickly ($\sim 10$ Myr), compared to the timing of moon formation indicated by the $^{182}$Hf - $^{182}$W chronometer ($>30$ Myr and most likely $>50$ Myr; Kleine et al. 2009) and they completely deplete the asteroid belt by the combination of resonance sweeping and gas-drag (see also Morishima et al. 2010, for a discussion). The resonant configuration of the planets in a gas disk is extremely different from the orbital configuration observed today. Planetesimal-driven migration is believed to be the mechanism by which the giant planets acquired their current orbits after the gas-disk dissipation. In fact, work by Fernandez & Ip (1984) found that Uranus and Neptune have to migrate outward through the exchange of angular momentum with planetesimals that, largely, they scatter inward. Similarly, Saturn suffers the same fate of outward migration, though Jupiter migrates inward as it ejects the planetesimals from the solar system. The timescale for planetesimal-driven migration of the giant planets depends on the distribution of the planetesimals in the planet-crossing region. It is typically 10 My, with 5 My as the lower bound (Morbidelli et al. 2010). Close encounters between pairs of giant planets might also have contributed in increasing the orbital separations among the giant planets themselves (Thommes et al. 1999; Tsiganis et al. 2005; Morbidelli et al. 2007; Brasser et al. 2009; Batygin & Brown 2010). Beyond the consequences for the scattered planetesimals, the migration of the giant planets affects the evolution of the solar system on a much larger scale, through the sweeping of planetary resonances through the asteroid belt region. The chronology of giant planet migration is important for the general evolution of the solar system, including the formation of the terrestrial planets. It has been recently proposed (Levison et al. 2001; Gomes et al. 2005; Strom et al. 2005) that the migration of the giant planets is directly linked in time with the so-called “Late Heavy Bombardment” (LHB) of the terrestrial planets (Tera et al. 1974; Ryder 2000, 2002; Kring & Cohen 2002). If this is true, then the migration of the giant planets should have occurred well after the formation of the terrestrial planets. In fact, the radioactive chronometers show that the terrestrial planets were completely formed 100 Myr after the condensation of the oldest solids of the solar system (the so-called calcium alluminum inclusions, which solidified 4.568 Gyr ago; Bouvier et al. 2007; Burkhardt et al. 2008), whereas the LHB occurred 3.9–3.8 Gyr ago. Thus the terrestrial planets should have formed when the giant planets were still on their pre-LHB orbits: resonant and quasi-circular. However, the simulations of Raymond et al. (2009) fail to produce good terrestrial planet analogs when using these pre-LHB orbits. The alternative possibility is that giant planet-migration occurred as soon as the gas-disk disappeared. In this case, it cannot be a cause of the LHB (and an alternative explanation for the LHB needs to be found; see for instance Chambers 2007). 
However, in this case giant planet migration would occur while the terrestrial planets are forming, and this could change the outcome of the terrestrial planet formation process. In particular, it is well known that, as Jupiter and Saturn migrate, the strong $\nu_6$ secular resonance sweeps through the asteroid belt down to $\sim 2$ AU (Gomes 1997). The $\nu_6$ resonance occurs when the precession rate of the longitude of perihelion of the orbit of an asteroid is equal to the mean precession rate of the longitude of perihelion of Saturn, and it affects the asteroids’ eccentrcities. If the giant planet migration occurs on a timescale of 5–10 Myr, typical of planetesimal-driven migration, then the $\nu_6$ resonance severely depletes the asteroid belt region (Levison et al. 2001; Morbidelli et al. 2010). This can effectively truncate the disk of planetesimals and planetary embryos, leaving it with an outer edge at about 1.5 AU. Although the location of this edge is not as close to the sun as assumed in Hansen (2009) (1 AU), it might nevertheless help in forming a Mars analog, i.e. signficantly less massive than the Earth. An equally important constraint is the resulting orbital distribution of planetesimals in the asteroid belt region, between 2–4 AU. After that region has been depleted of planetesimals and embryos by the sweeping resonances, what remains will survive without major alteration and should compare favorably with todays large asteroids. Studies of late giant planet migration start with an excited asteroid belt, where inclinations already vary from 0–20$^{\circ}$ (Morbidelli et al. 2010), and cannot match the inclination distribution of the inner asteroid belt with 5 Myr or longer migration timescales. The early migration presented here is different because it occurs immediately after the dissipation of the gas disk so that the planetesimal orbits are dynamically cold, with inclincations less than 1$^{\circ}$. Thus, in principle, an early giant planet migration could lead to a different result. Also, the embryos will be present, another difference with late migration scenarios. The purpose of this paper is to investigate, for the first time, the effect that an [*early*]{} migration of the giant planets could have had on the formation of the terrestrial planets and on the final structure of the asteroid belt. In Section 2 we discuss our methods and in Section 3 we present our results. The conclusions and a discussion on the current state of our understanding of terrestrial planet formation will follow in Section 4. Methods ======= We assume in our simulations that the nebular gas has dissipated, Jupiter and Saturn have fully formed; in the terrestrial planet and asteroid belt region, in the range 0.5-4.0 AU, the planetesimal disk has already formed planetary embyros accounting for half of its total mass. The lifetime of the circumstellar gas disk is observed to be 3–6 Myr, and both Jupiter and Saturn are expected to be fully formed by this time (Haisch et al. 2001). The timescales for oligarchic growth is similar, with lunar to Mars sized embryos growing on million year timescales (Kokubo & Ida 1998,2000). The numerical simulations are done using SyMBA, a symplectic $N$-body integrator modified to handle close encounters (Duncan et al. 1998). 
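SyMBA combines a symplectic splitting with an adaptive multiple time-step treatment of close encounters; purely to illustrate the basic splitting idea, a minimal direct-sum kick-drift-kick (leapfrog) step might look as follows (an illustrative sketch only, not the SyMBA algorithm):

```python
import numpy as np

G = 4 * np.pi**2  # AU^3 / (Msun yr^2): masses in Msun, distances in AU, time in yr

def accelerations(pos, masses):
    """Direct-sum gravitational accelerations (no softening, no close-encounter logic)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        dr = pos - pos[i]                      # vectors from body i to every body
        r3 = np.sum(dr**2, axis=1) ** 1.5
        r3[i] = np.inf                         # skip the self-interaction term
        acc[i] = G * np.sum(masses[:, None] * dr / r3[:, None], axis=0)
    return acc

def kick_drift_kick(pos, vel, masses, dt):
    """One second-order symplectic (leapfrog) step."""
    vel = vel + 0.5 * dt * accelerations(pos, masses)   # half kick
    pos = pos + dt * vel                                # drift
    vel = vel + 0.5 * dt * accelerations(pos, masses)   # half kick
    return pos, vel
```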
In our model, the planetary embryos interact with each other; the planetesimals interact with the embryos but not with themselves; all particles interact with the giant planets and, except when specified (explained further below), the giant planets feel the gravity of embryos and planetesimals. Collisions between two bodies result in a merger conserving linear momentum. It has been demonstrated by Kokubo & Genda (2010) that this a priori assumption of simple accretion does not significantly affect the results. The SyMBA code has already been used extensively in terrestrial planet formation simulations (Agnor et al. 1999; Levison & Agnor 2003; O’Brien et al. 2006; McNeil et al. 2005). Protoplanetary disk ------------------- The initial protoplanetary disks are taken directly from O’Brien et al. (2006), which themselves were based on those of Chambers (2001). The O’Brien et al. (2006) study produced some of the best matches for terrestrial planets, and using similar initial conditions allows a direct comparison. The initial conditions are based on a “minimum mass” solar nebula, with a steep surface density profile. The solid mass is shared between many small planetesimals and a small number of large bodies, the embryos, as suggested by runaway/oligarchic growth simulations (Kokubo & Ida 1998, Kokubo & Ida 2000). In theory, it is possible that, by the time the gas disappears from the disk (which corresponds to time zero in our simulations) the planetary embryos in the terrestrial planet region could have grown larger than the mass of Mars. However, the current mass of Mars seemingly excludes this possibility, and argues for embryo masses to have been martian or sub-martian. The surface density profile is $\Sigma(r) = \Sigma_{0}(\frac{r}{1 \mathrm{AU}})^{-3/2}$, where $\Sigma_{0}$ = 8 g cm$^{-2}$. The distribution of material drops linearly between 0.7 and 0.3 AU. Half of the mass is in the large bodies, of which there were either 25 embryos, each of 0.0933 Earth masses ($M_\oplus$), or 50 embryos of 0.0467 $M_\oplus$. The small bodies are 1/40 as massive as the large embryos, or 1/20 as massive as the small embryos. For all test cases the embryos are spaced between 4–10 mutual Hill radii at the beginning of the simulations. In some tests, the smaller planetesimals with an initial semimajor axis larger than 2.0 AU were cloned into two particles with identical semimajor axis, half the mass in each, and different random eccentricities and inclinations (noted as ’Double Asteroids’ in Table 1). The initial eccentricities and inclinations were selected randomly in the range of 0–0.01 and 0–0.5 degrees respectively. Thus the initial mass of the disk consisted of 2.6 $M_\oplus$ located inside of 2 AU and a total mass of 4.7 $M_\oplus$. Giant planets and migration --------------------------- In all tests Jupiter and Saturn were started on orbits closer to each other than at the present time, i.e. with semimajor axes of 5.4 and 8.7 AU, respectively. These initial orbits are just beyond their mutual 1:2 mean motion resonance, i.e. the corresponding ratio of orbital periods of Saturn and Jupiter is slightly larger than 2. Even if the giant planets should have started from a resonant configuration - probably the 2:3 resonance (Masset & Snellgrove 2001; Pierens & Nelson 2008) - it is known that secular resonance sweeping through the asteroid belt is important only when the planets’ orbital period ratio is larger than 2 (Gomes 1997; Brasser et al. 2009).
Thus, our choice of the initial orbits of Jupiter and Saturn is appropriate for the purposes of this study. Each planet was forced to migrate by imposing a change to their orbital velocities that evolves with time $t$ as: $$v(t) = v_0 + \Delta v [1 - \exp(-t/\tau)],$$ with $\Delta v$ chosen to achieve the required change in semimajor axis, and $\tau=5$ Myr. The latter is the minimum timescale at which planetesimal-driven migration can occur, simply due to the lifetime of planetesimals in the giant planet crossing region, as discussed extensively in Morbidelli et al. (2010). Longer timescales are possible, but previous work has shown that fast timescales affect the asteroid belt region less, and since terrestrial planet formation timescales are in the tens of millions of years, more rapid migration has a greater chance of affecting the accretion of Mars. Thus, we think that restricting ourselves to the 5 Myr timescale is sufficient, as this timescale is the most favorable for these purposes. ![Example of idealized migration for a system with only Jupiter and Saturn, ending with orbits very close to the current ones. Panel (a) shows the semimajor axis of Jupiter, (b) the eccentricity of Jupiter, (c) the semimajor axis of Saturn and (d) the eccentricity of Saturn, all plotted as a function of time in years.[]{data-label="migration"}](Figs/IdealMigration.ps){width="9.0cm"} If the motion of the giant planets was not affected by the other bodies in the system, the evolution of the eccentricities and inclinations would not change much during migration (Brasser et al., 2009, and Fig. 1). Thus it is relatively simple to find initial conditions that lead to final orbital configurations with eccentricities and inclinations whose mean values and amplitudes of oscillation are similar to the current ones. In fact, as shown in Brasser et al. (2009), the initial values $(e_J,e_S)=(0.012,0.035)$ and $(i_J,i_S)=(0.23^\circ,1.19^\circ)$, after migration, lead to eccentricities and inclinations whose mean values and amplitudes of oscillation closely resemble those characterizing the current secular dynamics of the giant planets (see Fig. \[migration\]). In our case, however, as the giant planets migrate, they scatter planetesimals and planetary embryos, and their orbits are affected in response. Thus, the final orbits are not exactly like those of Fig. \[migration\]. Typically, for instance, the eccentricities and inclinations of the planets are damped, and their relative migration is slightly more pronounced than it was intended to be. Thus, we tried to modify the initial eccentricities of Jupiter and Saturn and the values of $\Delta v$ in order to achieve final orbits as similar as possible to those of Fig. \[migration\]. However, while the effect of planetesimals on the planets is statistically the same from simulation to simulation (and so can be accounted for by modifying the initial conditions of the planets), the effects of embryos are dominated by single stochastic events. Thus, it is not possible to find planetary initial conditions that lead systematically to good final orbits. In some cases the final orbits are reasonably close to those of the current system, but in many cases they are not. In total we performed 30 simulations. We discarded the simulations with unsuccessful final orbits, and kept only those (9/30) that led to orbits resembling the current ones. These successful runs are called hereafter “normal migration simulations”.
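Assuming, for illustration, that the same exponential form is applied to the semimajor axis itself (in the simulations the forcing enters as small velocity kicks, as in the equation above), a minimal sketch of the imposed migration is:

```python
import numpy as np

def forced_semimajor_axis(t, a0, delta_a, tau=5.0e6):
    """Imposed migration of the same exponential form as the velocity forcing
    in the text: a(t) = a0 + delta_a * (1 - exp(-t/tau)), t and tau in years."""
    return a0 + delta_a * (1.0 - np.exp(-t / tau))

# Illustrative end-points only (Jupiter 5.4 -> ~5.2 AU, Saturn 8.7 -> ~9.5 AU),
# chosen to move the planets from the quoted initial orbits toward the current ones.
t = np.linspace(0.0, 3.0e7, 1000)
a_jup = forced_semimajor_axis(t, 5.4, 5.2 - 5.4)
a_sat = forced_semimajor_axis(t, 8.7, 9.5 - 8.7)
```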
Our criterion for discriminating good from bad final orbits was determined after the 15 Myr of migration, and the semimajor axis, eccentricity and oscillation in eccentricity ($\Delta e$) were the factors examined. Jupiter’s orbit must have had $|a - a_j| < 0.05$, $|e - e_j| < 0.0156$, and $|\Delta e - \Delta e_j| < 0.0164$, while Saturn’s orbit required $|a - a_s| < 0.075$, $|e - e_s| < 0.0252$, and $\Delta e - \Delta e_s| < 0.0256$. We complemented our normal migration simulations with what we call hereafter ’perfect migration’ cases. In these simulations, the planetesimals and embryos do not have any direct effect on the giant planets, even during close encounters. However, their indirect effects cannot be suppressed (specifically the H$_{\mathrm{sun}}$ term from eq. 32b. in Duncan et al. 1998) , but in principle they are weaker. Thus the migration of the giant planets, starting with the initial conditions from Brasser et al. 2009 (as in Fig. \[migration\]), met the above criteria in 3 out of 4 simulations. The giant planets had the full gravitational affect on the planetesimals and embryos throughout these simulations, and the mutual effects between planetesimals and embryos remained unchanged. Results ======= We present the results of 12 simulations of terrestrial planet formation each covering 150 Myr. Of these runs, 9 are normal migration simulations and 3 ’perfect migration’ simulations (all simulations are listed in Table 1. and refered to by run name, “Test31” etc., throughout). These two sets of simulations had qualitative and quantitative similarities and are thus discussed at the same time and combined in the figures. First, the resulting planets are compared with the current terrestrial planets, followed by a look at the consequences the migration has on the structure of the asteroid belt. ![The final mass ($M_\oplus$) for each planet produced in our simulations is plotted as a function of the planet’s semimajor axis. The horizontal error bars show the locations of the perihelion and aphelion of the cooresponding orbit. The open squares refer to the planets produced in the normal migration simulations, the open circles to the planets produced in the run with twice as many half-sized embryos, and the open triangles to those produced in the ‘perfect migration’ simulations; the solid squares represent the real terrestrial planets. []{data-label="RandomMvS"}](Figs/MassSemi.ps){width="8.4cm"} The planets ----------- Results for these simulations are summarized in Fig. \[RandomMvS\], where the final masses and semimajor axes of our synthetic planets are compared to those of the real terrestrial planets (see also Table \[runtable\]). The trend is similar to that found in previous works (see for instance Chambers et al. 2001), where the masses and locations of Earth and Venus are nearly matched by a number of different simulations, but most planets just exterior to Earth, near $a$ $\sim$1.5 AU are at least 3 times more massive than Mars. However, a handful of planets close to 1.8 AU were of similar mass to Mars. Of note, Test31 had two $\sim$Mars-mass bodies, at 1.2 and 2.4 AU, with an Earth mass planet at 1.52 AU. Test54, the only one of four simulations starting with the smaller embryos with successful migration, produced a sub-Mars mass body at 1.89 AU, just at the edge of the current day asteroid belt. The Ran4 simulation produced a body within 50% of Mars’ mass at 1.71 AU, though it had a high eccentricity above 0.13 and was a member of a 3 planet system. 
In general, planets produced at around 1.5 AU were $\sim$ 5 times more massive than Mars, and Mars-mass bodies were typically only found beyond 1.7 AU. The total number of planets produced in each simulation is not systematically consistent with the real terrestrial planet system. Only two simulations produced 4 planets, where we define a “planet” as any embryo-sized or larger body with a semimajor axis less than 2.0 AU. Most simulations had 3 planets at the end, while one produced 5 planets. A common metric for measuring the distribution of mass among multiple planets is the radial mass concentration statistic (RMC), defined as $$RMC = max\bigg(\frac{\sum M_j}{\sum M_j[\log_{10}(a/a_j)]^2}\bigg) ,$$ where $M_j$ and $a_j$ are the mass and semimajor axis of planet $j$ (Chambers, 2001). The bracketed function is calculated for different $a$ in the region where the terrestrial planets form. The RMC is infinite for a single planet system, and decreases as mass is spread among multiple planets over a range of semimajor axes. The current value of RMC for the solar system is 89.9. For all but one simulation the RMC value is below the current solar system’s value, largely due to the large mass concentrated in a Mars-analog orbit (we did not include the two embryos stranded in the asteroid belt region in these calculations, one in Test31 and one in TestPM24). The single simulation with a larger RMC value did not have a Mars analog, and thus the mass was contained in a smaller semimajor axis range. The terrestrial planets have low eccentricities and inclinations; Earth and Venus both have $e < 0.02$ and $i < 3^\circ$, properties which have proved difficult to match in accretion simulations. O’Brien et al. (2006) and Morishima et al. (2008) reproduced low eccentricities and inclinations largely due to remaining planetesimals which damp the orbital excitation of the planets. A metric used as a diagnostic of the degree of success of the simulations in reproducing the dynamical excitation of the terrestrial planets is the angular momentum deficit (AMD; Laskar 1997): $$AMD = \frac{\sum_j M_j \sqrt{a_j}\left(1-\cos(i_j)\sqrt{1-e_j^2}\right)}{\sum_j M_j \sqrt{a_j}} ,$$ where $M_j$ and $a_j$ are again the mass and semimajor axis and $i_j$ and $e_j$ are the inclination and eccentricity of planet $j$. The AMD of the current solar system is 0.0014. The AMD for our simulations ranged from 0.0011 to 0.0113. The planetesimal disk used in these simulations is based on that of O’Brien et al. (2006), so it is not surprising that some AMDs are consistent with the solar system value. Simulation PM22 is the one with the largest final AMD, because it produced an Earth-analog with a 10$^\circ$ inclination. ![Evolution of the system over time, showing the clearing of the asteroid belt region with inclination plotted as a function of semimajor axis. The open boxes are planetesimals on orbits within the current asteroid belt region, the crosses are planetesimals elsewhere, and the open circles are embryos or planets scaled in relation to their diameters. The simulation is Test31. []{data-label="31"}](Figs/Test31Evol.ps){width="8.4cm"} ![Same as Fig. \[31\], but for simulation Test54, which started from 50 embryos of 0.0467 $M_\oplus$ instead of 25 embryos twice as massive. []{data-label="54"}](Figs/Test54Evol.ps){width="8.4cm"} Figures \[31\] and \[54\] show snapshots of two systems evolving over time.
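A minimal numerical sketch of the two statistics defined above (the helper names, the trial grid in $a$, and the rough solar system elements are ours, for illustration only):

```python
import numpy as np

def rmc(masses, axes, a_grid=None):
    """Radial mass concentration (Chambers 2001): maximum over trial positions a
    of sum(M_j) / sum(M_j * log10(a/a_j)^2)."""
    masses, axes = np.asarray(masses, float), np.asarray(axes, float)
    if a_grid is None:
        a_grid = np.linspace(0.3, 2.0, 2000)   # terrestrial-planet region, assumed
    vals = [masses.sum() / np.sum(masses * np.log10(a / axes) ** 2) for a in a_grid]
    return max(vals)

def amd(masses, axes, eccs, incs_deg):
    """Angular momentum deficit (Laskar 1997); inclinations given in degrees."""
    m, a = np.asarray(masses, float), np.asarray(axes, float)
    e, i = np.asarray(eccs, float), np.radians(incs_deg)
    num = np.sum(m * np.sqrt(a) * (1.0 - np.cos(i) * np.sqrt(1.0 - e**2)))
    return num / np.sum(m * np.sqrt(a))

# Rough sanity check with approximate terrestrial-planet elements (ecliptic
# inclinations); the results should be of order the quoted RMC ~ 90 and
# AMD ~ 0.0014, not exactly equal to them.
m_terr = [0.055, 0.815, 1.0, 0.107]    # Mercury, Venus, Earth, Mars (Earth masses)
a_terr = [0.387, 0.723, 1.0, 1.524]
e_terr = [0.206, 0.007, 0.017, 0.093]
i_terr = [7.0, 3.4, 0.0, 1.85]
print(rmc(m_terr, a_terr), amd(m_terr, a_terr, e_terr, i_terr))
```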
Of interest is the radial clearing caused by the movement of the giant planets and the sweeping of their resonances, particularly the $\nu_6$ resonance. This clearing progresses from the outer edge of the disk towards the sun, following the migration of the $\nu_6$ resonance, and stops at $\sim 2$ AU, which is the final location of this resonance when the giant planets reach their current orbits. Thus, the region of $a > 2.0$ is almost entirely cleared of material in 10 Myr, with only handfuls of planetesimals surviving and a single embryo. At 3 Myr, only $a > 2.5$ AU is largely cleared. ![Mass growth of the Mars analogs for all simulations plotted as a function of time. The most massive Mars analogs exceed the mass of Mars (0.11 $M_\oplus$) in only 2–3 Myr, and then in the next 10–20 Myr continue to grow to their final sizes, ending many times more massive than Mars. The two lines starting from $\sim 0.05$ $M_\oplus$ are for two planets of simulation ’Test54’, the only successful normal simulation that started with half-Mars mass embryos. The bold line shows the mass growth of the planet ending at $\sim 1.2$ AU; the thin line the planet ending at $\sim 1.9$ AU. []{data-label="marsgrowth"}](Figs/MarsGrowth2.ps){width="8.4cm"} As seen in Figure \[marsgrowth\] the accretion of embryos for the Mars analogs (where a Mars analog is defined as the largest body between 1.2–2.0 AU) begins immediately with $\sim$2 Mars-mass typically being reached in only 2 Myr (note that Figure \[marsgrowth\] shows 12 growth curves, as there are two planets displayed for Test54). Nine of the 11 Mars analogs have reached 0.2 $M_\oplus$ by 3 Myr. At 10 Myr 6 of the 11 have reached 0.3 $M_\oplus$, and by 30 Myr 10 of 11 are above 0.3 $M_\oplus$, or $\sim$3 $M_ \mathrm{Mars}$. One might wonder if our inability to produce a small Mars analog is due to the fact that, in all but one of the presented simulations (Test54 is the exception), the planetary embryos are initially $\sim$ one Mars mass. This is not regarded as a problem for the following reasons. First, the Mars analogs with semimajor axes near that of Mars, near 1.5 AU, typically accreted 4 or 5 embryos; thus they consistently accreted much more mass than Mars, and are not simply the result of a chance accretion between two Mars mass embryos. Second, only two of the 11 Mars analogs did not accrete another embryo, in Test54 and Ran4, but both had semimajor axes larger than 1.7 AU, well beyond the current orbit of Mars. Third, our single successful normal migration simulation that started with half-Mars mass embryos also produced an Earth mass planet at 1.2 AU. This planet was already two-mars masses in 5 Myr (notice that in the same simulation one embryo escaped all collisions with other embryos and therefore remained well below the mass of Mars - see Figure \[marsgrowth\]– but this object ended up at 1.9 AU, well beyond the real position of Mars). Finally, previous works (Chambers 2001; Raymond et al. 2009; Morishima et al. 2010 to quote a few) which started with embryos significantly less massive than Mars met the same Mars-mass problem found here. The similarities between our work and previous in terms of the mass distribution of the synthetic planets as a function of semimajor axis suggest that the giant planet migration does not affect significantly the terrestrial planet accretion process. Thus it is unlikely that small changes in the adopted evolution pattern of the giant planets could lead to significantly different results. 
Therefore the initial conditions do not appear to be at fault for the failure to match the mass of Mars. The reason for which the Mars analog consistently grows too massive is twofold. First, they grow fast (in a few million years, as shown in Figure \[marsgrowth\]), compared with the timescale required to effectively truncate the disk at $\sim 2$ AU (10 Myr, as shown in Figures \[31\] and \[54\]). Second, the truncation of the disk caused by the sweeping of the $\nu_6$ resonance is not sunward enough: the final edge is approximately at 2 AU, whereas an edge at $\sim 1$ AU is needed (Hansen 2009; Kokubo et al. 2006; Chambers 2001). Figure \[allSims\] shows the final incination vs. semimajor axis distributions of all our simulations (respectively, ’normal’ and ’perfect’ ones). The sizes of the symbols representing the planets are proportional to the cubic roots of their masses. Again, the problem of the mass of Mars stands out. ![Endstates of all simulations with the inclination plotted as a function of the semimajor axis with asteroids as open squares, non-asteroid planetesimals as crosses and embryos/planets as open circles scaled by their mass to the 1/3 power. []{data-label="allSims"}](Figs/allSims.ps){width="8.4cm"} The asteroid belt ----------------- In the previous section we have shown that the an early sweeping of secular resonances through the asteroid belt is not useful to solve the small-Mars problem. Here we address the question of other observational constraints. For this purpose, in this section we turn to the asteroid belt, whose orbital distribution is very sensitive to the effects of resonance sweeping (Gomes 1997; Nagasawa et al. 2000; Minton & Malhotra 2009; Morbidelli et al. 2010). Morbidelli et al. (2010) have shown that the properties of the asteroid belt after the slow migration of the giant planets are largely incompatible with the current structure of the asteroid belt. However, they assumed that the migration of the giant planets occurred late, after the completion of the process of terrestrial planet accretion and after the primordial depletion/dynamical excitation of the asteroid belt. Thus, that work does not exclude the possibility of an early migration. In fact, the outcome of an early migration could be very different from that of a late migration for two reasons: first, the initial orbits of the plantesimals are quasi-circular and co-planar in the early migration case whereas they are dynamically excited in the late migration case, which is an important difference; second, planetary embryos reside in, or cross, the asteroid belt region during the early time of terrestrial planet formation, and this process has the potential of erasing some of the currently unobserved signatures of resonace sweeping. ![(Top) The inclination of current day asteroids with absolute magnitude H $<$ 9.7, corresponding to D $\gtrsim$ 50 km, plotted as a function of their semimajor axis. The long-dashed lines show the location of the major mean motion resonances with Jupiter and the short-dashed curves the location of the $\nu_{6}$ and $\nu_{16}$ secular resonances. (Bottom) Surviving planetesimals from the 12 simulations, showing a strong depletion of low inclination bodies in the inner part of the asteroid belt region.[]{data-label="Asteroids"}](Figs/Asteroids.ps){width="8.4cm"} To compare the planetesimal distribution obtained in our simulations with the “real” asteroid population, we focus on asteroids larger than $\sim$50 km in diameter, as in previous works (Petit et al. 
2001; Minton & Malhotra 2009; Morbidelli et al. 2010). These bodies are a reliable tracer of the structure of the asteroid belt that resulted from the primordial sculpting process(es), as they are too large to have their orbits altered significantly by the thermal Yarkovsky effect or by collisions (see Fig. \[Asteroids\],). Moreover, their orbital distribution (see top panel of Fig \[Asteroids\]) is not affected by observational biases, because all bodies of this size are known (Jedicke et al. 2002). The final distribution of the planetesimals residing in the asteroid belt in our 12 simulations is shown in the bottom panel of Fig \[Asteroids\]. As can be seen, the difference in orbital distribution between the real belt and that resulting from the giant planet migration process is striking. A simple metric used in Morbidelli et al. (2010) to quantify the difference in orbital distributions between the real and the synthetic belts is the ratio of asteroids above and below the location of the $\nu_6$ secular resonance with semimajor axis below 2.8 AU. The current day value for asteroids with a diameter above 50 km is 0.07. Combining together all the surviving planetesimals from all our 12 simulations results in a 67/13 ratio, in stark contrast to the current value. Thus, our result is qualitatively similar to that of Morbidelli et al. (2010), even though our resulting ratio is much larger than that obtained in that work (close to 1/1). The reason for the large ratio obtained in migration simulations, as discussed in Morbidelli et al. (2010), is that the migration of the giant planets forces the $\nu_6$ and $\nu_{16}$ secular resonances to move Sun-ward. More precisely, if the orbital separation of Jupiter and Saturn increased by more than $1$ AU (as predicted by all models and enacted in our simulations), the $\nu_6$ resonance sweeps the entire asteroid belt as it moves inwards from 4.5 AU to 2 AU; meanwhile the $\nu_{16}$ resonance sweeps the belt inside of 2.8 AU to its current location at 1.9 AU (Gomes et al. 1997). In the inner asteroid belt, the $\nu_{16}$ resonance sweeps first and the $\nu_6$ resonance sweeps second. The $\nu_{16}$ resonance occurs when the precession rate of the longitude of the node of the orbit of an asteroid is equal to the precession rate of the node of the orbit of Jupiter, and it affects the asteroid’s orbital inclination. Given the characteristic shape of the $\nu_6$ resonance location in the $(a,i)$ plane (see Fig \[Asteroids\]), the asteroids that acquire large enough inclination when they are swept by the $\nu_{16}$ resonance, avoid being swept by $\nu_6$; thus, their eccentricities are not affected and they remain stable. Conversely, the asteroids that remain at low-to-moderate inclinations after the $\nu_{16}$ sweeping are then swept by the $\nu_6$ resonance and their eccentricities become large enough to start crossing the terrestrial planet region. These bodies are ultimately removed by the interaction with the (growing) terrestrial planets. This process favors the survival of high-inclination asteroids (above the current location of the $\nu_6$ resonance) over low-inclination asteroids and explains the large ratio between these two populations obtained in the resonance sweeping simulations. This ratio is larger in our simulations than in those of Morbidelli et al. (2010), because the initial orbits of planetesimals and embryos in our case have small inclinations and eccentricities. 
Consequently, starting from such dynamically cold initial conditions, the secular resonance sweeping can only increase eccentricities and inclinations. Conversely, in the Morbidelli et al. (2010) simulations, the initial orbits covered a wide range of eccentricities and inclinations. Large eccentricities or inclinations can be [*decreased*]{} by the secular resonance sweeping. Thus, more objects could remain at low-to-moderate inclinations after the $\nu_{16}$ sweeping and fewer objects were removed by the $\nu_{6}$ sweeping than in our case. We conclude from our simulations that the migration of the giant planets with an $e$-folding time of 5 Myr (or longer, as the effects of secular resonance sweeping increase with increasing migration timescale) is inconsistent with the current structure of the asteroid belt, even if it occurred early. In fact, our simulations provide evidence that the planetary embryos crossing the asteroid belt during the process of formation of the terrestrial planets are not able to re-shuffle the asteroid orbital distribution and erase the dramatic scars produced by secular resonance sweeping.

Discussion and Conclusions
==========================

This paper has investigated the effects of [*early*]{} giant planet migration on the inner disk of planetesimals and planetary embryos. In the context of solar system formation, “early” means immediately following the disappearance of the gas disk, which is identified as time-zero in our simulations. The giant planets are migrated towards their current orbits with a 5 Myr $e$-folding time, which is appropriate if the migration is caused by planetesimal scattering. We have shown that the sweeping of secular resonances, driven by giant planet migration, truncates the mass distribution of the inner disk, providing it with an effective outer edge at about 2 AU after about 10 Myr. This edge is too far from the Sun and forms too late to assist in the formation of a small Mars analog. In fact, Chambers (2001) already showed similar results: starting from a disk of objects with semi-major axes $0.3<a<2.0$ AU, the terrestrial planet accretion process leads to the formation of planets at $\sim 1.5$ AU that are systematically 3-5 times too massive. For completeness, we have continued our simulations well beyond the migration timescale of the giant planets to follow the accretion of planets in the inner solar system, and we have confirmed the Chambers (2001) result. Hansen (2009) showed that obtaining planets at $\sim 1.5$ AU that systematically have about one Mars mass requires that the disk of solid material in the inner solar system had an outer edge at about 1 AU. The inability of secular resonance sweeping during giant planet migration to create such an edge suggests that a different mechanism needs to be found. Moreover, our study adds further evidence that models with a slow migration of the giant planets ($\tau \gtrsim 5$ Myr) cannot leave an asteroid belt with a reasonable inclination distribution. Morbidelli et al. (2010) argued that the only possibility for the orbits of Jupiter and Saturn to move away from each other on a timescale shorter than 1 Myr is that an ice giant planet (presumably Uranus or Neptune) is first scattered inwards by Saturn and is subsequently scattered outwards by Jupiter, so that the two giant planets recoil in opposite directions. They dubbed this a “jumping-Jupiter” evolution and showed that in this case the final orbital distribution of the asteroid belt is consistent with that observed. Again, Morbidelli et al.
(2010) worked in the framework of a “late” displacement of the orbits of the giant planets. Our results in this paper suggest that a jumping-Jupiter evolution would also be needed in the framework of an “early” displacement of the orbits of the giant planets. At this point, it is interesting to speculate on what the effects of an “early” jumping-Jupiter evolution would be on the terrestrial planet formation process. In essence, an early jumping-Jupiter evolution would bring the giant planets to their current orbits at a very early time. So, the outcome of the terrestrial planet formation process would resemble that of the simulations of Raymond et al. (2009) in which the giant planets start with their current orbital configuration, labelled ’EJS’ in that work. In these simulations, though (see their Fig. 10), the Mars analog is, again, systematically too big. It is questionable whether a jumping-Jupiter evolution could bring the giant planets onto orbits with the current semimajor axes but larger eccentricities, as required in the most successful simulations of Raymond et al. (2009), labelled ’EEJS’. However, even if jumping-Jupiter evolutions satisfying this requirement were found, it is important to consider all of the outcomes of the EEJS simulations of Raymond et al. (2009). While producing a small Mars in several cases, the EEJS simulations failed in general to bring enough water to the terrestrial planets, formed the Earth too early compared to the nominal timescale of 50 Myr, and left the terrestrial planets on orbits that are too dynamically excited. For all these reasons, an early jumping-Jupiter evolution is not a promising avenue to pursue for a successful model of terrestrial planet formation. In conclusion, our work substantiates the problem of the small mass of Mars and suggests that understanding terrestrial planet formation requires a paradigm shift in our view of the early evolution of the solar system.

Acknowledgments {#acknowledgments .unnumbered}
---------------

The authors would like to thank an anonymous reviewer for a careful reading of the manuscript. KJW acknowledges the Poincaré Postdoctoral Fellowship at the Observatoire de la Côte d’Azur. This work is part of the Helmholtz Alliance’s ’Planetary evolution and Life’, which KJW and AM thank for financial support. Computations were carried out on the CRIMSON Beowulf cluster at OCA.

References
==========

Agnor, C., Canup, R., Levison, H. 1999, Icarus 142, 219\
Batygin, K. & Brown, M. E. 2010, ApJ 716, 1323\
Bouvier, A., Blichert-Toft, J., Moynier, F., Vervoort, J. D., & Albar[è]{}de, F. 2007, 71, 1583\
Burkhardt, C., Kleine, T., Bourdon, B., Palme, H., Zipfel, J., Friedrich, J. M., & Ebel, D. S. 2008, 72, 6177\
Brasser, R., Morbidelli, A., Gomes, R., Tsiganis, K., & Levison, H. F. 2009, A&A 507, 1053\
Chambers, J. 2001, Icarus 152, 205\
Chambers, J. 2007, Icarus 189, 386\
Chambers, J.E. & Wetherill G.W. 1998, Icarus 136, 304\
Duncan, M. J., Levison, H. F. & Lee, M. H. 1998, ApJ 116, 2067\
Fernandez, J. A., and Ip, W. 1984, Icarus 58, 109\
Ferraz-Mello, S., Beaugé, C. & Michtchenko, T. A. 2003, CeMDA 87, 99\
Gomes, R., Levison, H., Tsiganis, K. & Morbidelli, A. 2005, Nature 435, 466\
Gomes, R. S. 1997, AJ 114, 396\
Greenberg, R., Hartmann, W.K., Chapman, C.R. & Wacker, J.F. 1978, Icarus 35, 1\
Hansen B. M. S. 2009, ApJ 703, 1131\
Jedicke, R., Larsen, J., & Spahr, T. 2002. In: ’Asteroids III’ (W.F. Bottke, A. Cellino, P. Paolicchi and R. P. Binzel, eds), Univ. Arizona Press, Tucson, Arizona.\
Kenyon, S.J. & Bromley, B.C.
2006, AJ 131, 1837\ Kleine, T., Touboul, M. & Bourdon, B. 2009,  73, 5150\ Kley, W., Bitsch, B. & Klahr, H. 2009, A&A 506, 971\ Kokubo, E. & Genda, H. 2010, ApJ 714, L21\ Kokubo, E., & Ida, S. 1998, Icarus, 131, 171\ Kokubo, E., & Ida, S. 2000, Icarus, 143, 15\ Kokubo, E., Kominami, J. & Ida, S. 2006, ApJ 642, 1131\ Kring, D. A., & Cohen, B. A. 2002, JGRE, 107, 5009\ Laskar, J. 1997, , 317, L75\ Levison, H. F., Dones, L., Chapman, C. R., Stern, S. A., Duncan, M. J. & Zahnle, K. 2001, Icarus 151, 286\ Levison, H. F. & Agnor, C. 2003, ApJ 125, 2692\ Masset F. & Snellgrove, M. 2001, MNRAS 320, 55\ McNeil, D. Duncan, M. & Levison, H. F. 2005, ApJ 130, 2884\ Minton, D. A., & Malhotra, R. 2009, , 457, 1109\ Morbidelli A. & Crida, A. 2007 Icarus 191, 158\ Morbidelli, A., Brasser, R., Gomes, R., Levison, H.F. & Tsiganis, K. 2010, AJ 140, 1391\ Morishima, R., Schmidt, M. W., Stadel, J., & Moore, B. 2008, 685, 1247\ Morishima, R., Stadel, J. & Moore, B. 2010, Icarus 207, 517\ Nagasawa, M., Tanaka, H., & Ida, S. 2000, 119, 1480\ Nagasawa, M. & Lin, D. N. C. 2005, ApJ 632, 1140\ O’Brien, P., Morbidelli, A. & Levison, H. 2006, Icarus 184, 39\ Peale, S. J. & Lee, M. H. 2002, Science 298, 593\ Petit, J.-M., Morbidelli, A. & Chambers, J. 2001. Icarus 153, 338.\ Pierens, A. & Nelson, R. 2008, A&A 482, 333\ Raymond, S. N., Quinn, T., & Lunine, J. I. 2004, Icarus 168, 1\ Raymond, S. N., Quinn, T., & Lunine, J. I. 2005, ApJ 632, 670\ Raymond, S. N., Quinn, T., & Lunine, J. I. 2006, Icarus 183, 265\ Raymond, S. N., Quinn, T., & Lunine, J. I. 2007, Astrobiology 7, 66\ Raymond, S. N.,O’Brien, D. P., Morbidelli, A. & Kaib, N. A. 2009, Icarus 203, 644\ Ryder, G., Koeberl, C & Mojzsis, S. 2000. In: ’Origin of the Earth and Moon’ (R. Canup & R. Knighter, eds). Univ. Arizona Press, Tucson, Arizona.\ Ryder, G. 2002, Journal of Geophysical Research (Planets), 107, 5022\ Strom, R. G., Malhotra, R., Ito, T., Fumi, Y. & Kring, D. A. 2005, Science 309, 1847\ Tera, F., Papanastassiou, D. A. & Wasserburg, G. J. 1974, E&PSL 22, 1\ Thommes, E. W., Duncan, M. J., & Levison, H. F. 1999,  402, 635\ Thommes, E., Nagasawa, M. & Lin, D. N. C. 2008, ApJ 676, 728\ Thommes, E. W., Bryden, G., Wu, Y., & Rasio, F. A. 2008b, 675, 1538\ Tsiganis, K., Gomes, R., Morbidelli, A. & Levison, H. 2005, Nature 435, 459\ Wetherill, G.W. & Stewart G.R. 1993, Icarus 106, 190\
Frequency of medication errors with intravenous acetylcysteine for acetaminophen overdose. Acetadote, an intravenous preparation of acetylcysteine, became commercially available in the US in June 2004 for the treatment of acetaminophen poisoning. The dosing regimen is complex, consisting of a loading dose followed by 2 maintenance doses, each with different infusion rates. To analyze the frequency of medication errors related to the complex dosing regimen for intravenous acetylcysteine. A retrospective chart review of a regional poison center's records was performed for all patients treated with intravenous acetylcysteine from August 1, 2006, to August 31, 2007. Data collected included acetylcysteine dose, infusion rate, interruptions in therapy, unnecessary administration, and medical outcome. Records that revealed medication errors were further examined for the time and location of the errors. There were 221 acetaminophen overdose cases treated with intravenous acetylcysteine that met inclusion criteria. Of these, 84 medication errors occurred in 74 (33%) patients. The frequency and types of errors were 1.4% incorrect dose, 5% incorrect infusion rate, 18.6% more than 1 hour of interruption in therapy, and 13.1% unnecessary administration. The frequency and types of medication errors in pediatric patients were similar to those in the total patient population. Errors occurred most frequently in the emergency department compared with intensive care units or general medical floors. In addition, errors occurred most frequently on third shift, compared with first or second shift. Evaluation of medical outcomes in cases involving acetaminophen only found that medication errors did not have an impact on coded outcomes. Medication administration errors occur frequently with intravenous acetylcysteine. Awareness of this problem, coupled with increased vigilance in identifying factors associated with errors, should decrease medication errors with intravenous acetylcysteine therapy for acetaminophen poisoning.
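As a quick arithmetic check of the figures quoted above, and assuming (as the wording suggests, though it is an interpretation rather than a quote) that each error-type percentage is expressed relative to the 221 treated cases, the four categories roughly add back up to the 84 reported errors:

```python
# Consistency check on the reported numbers; the percentages are assumed to be
# fractions of the 221 treated cases (an interpretation, not stated explicitly).
cases = 221
error_rates = {
    "incorrect dose": 0.014,
    "incorrect infusion rate": 0.05,
    ">1 h interruption in therapy": 0.186,
    "unnecessary administration": 0.131,
}
estimated_errors = sum(rate * cases for rate in error_rates.values())
print(round(estimated_errors))   # ~84, matching the 84 errors reported
print(round(74 / cases * 100))   # 33% of patients had at least one error
```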
% This file was created with JabRef 2.7b. % Encoding: UTF8 @ARTICLE{arm_exploiting_linux, author = {Emanuele Acri}, title = {Exploiting ARM Linux Systems}, year = {2011}, url = {http://www.exploit-db.com/download_pdf/16151/} } @MISC{url:are, author = {{Anthony Desnos, Hanno Lemoine}}, title = {Virtual Machine for Android Reverse Engineering}, howpublished = {\url{http://redmine.honeynet.org/projects/are/wiki}} } @CONFERENCE{arm_stack_exploitation, author = {Itzhak Avraham}, title = {Non-Executable Stack ARM Exploitation}, booktitle = {Black Hat DC}, year = {2011} } @CONFERENCE{arm_exploitation, author = {Itzhak Avraham}, title = {Exploitation on ARM: Technique and bypassing defense mechanisms}, booktitle = {Def Con 18}, year = {2010} } @MISC{url:smali, author = {{Ben Gruver, et al}}, title = {smali}, howpublished = {\url{http://code.google.com/p/smali/}} } @MISC{url:soot, author = {Eric Bodden}, title = {Soot: a Java Optimization Framework}, howpublished = {\url{http://www.sable.mcgill.ca/soot/}} } @CONFERENCE{aslr_android, author = {Hristo Bojinov}, title = {Address Space Randomization for Mobile Devices}, booktitle = {ACM WiSec}, year = {2011} } @BOOK{security_mobile_comm, title = {Security of Mobile Communications}, publisher = {CRC Press}, year = {2009}, author = {Noureddine Boudriga} } @MISC{url:apktool, author = {Brut.alll}, title = {android-apktool}, howpublished = {\url{http://code.google.com/p/android-apktool/}} } @MISC{url:android_x86, author = {{Chih-Wei Huang, Yi Sun, et al}}, title = {Android-x86 - Porting Android to x86}, howpublished = {\url{http://www.android-x86.org/}} } @MISC{url:contagio, author = {contagio}, title = {contagio mobile malware mini dump}, howpublished = {\url{http://contagiominidump.blogspot.com}} } @MISC{url:ded, author = {{Damien Octeau, Patrick McDaniel, William Enck}}, title = {ded: Decompiling Android Applications}, howpublished = {\url{http://siis.cse.psu.edu/ded/}} } @MISC{url:androguard, author = {Anthony Desnos}, title = {androguard}, howpublished = {\url{http://code.google.com/p/androguard/}} } @MISC{url:android_adb, author = {Android Developers}, title = {Tools: Android Debug Bridge}, howpublished = {\url{http://developer.android.com/guide/developing/tools/adb.html}} } @MISC{url:android_android, author = {Android Developers}, title = {Tools: android}, howpublished = {\url{http://developer.android.com/guide/developing/tools/android.html}} } @MISC{url:android_emulator, author = {Android Developers}, title = {Tools: Android Emulator}, howpublished = {\url{http://developer.android.com/guide/developing/tools/emulator.html}} } @MISC{url:android_manifest, author = {Android Developers}, title = {The AndroidManifest.xml File}, howpublished = {\url{http://developer.android.com/guide/topics/manifest/manifest-intro.html}} } @MISC{url:android_using_emulator, author = {Android Developers}, title = {Managing Virtual Devices: Using the Android Emulator}, howpublished = {\url{http://developer.android.com/guide/developing/devices/emulator.html}} } @MISC{url:jd-gui, author = {Emmanuel Dupuy}, title = {Jd-gui}, howpublished = {\url{http://java.decompiler.free.fr/?q=jdgui}} } @BOOK{ida_pro, title = {The IDA Pro book}, publisher = {No Starch Press, Inc.}, year = {2011}, author = {Chris Eagle}, edition = {2nd} } @MISC{url:enck_seminar, author = {Enck, William}, title = {CSC591-006 - Smartphone OS Security}, howpublished = {\url{http://www.csc.ncsu.edu/faculty/enck/csc591-s12/index.html}}, year = {2012} } @MISC{url:psu_sectutor, author = {Enck, William and McDaniel, 
Patrick}, title = {Understanding Android's Security Framework}, howpublished = {\url{http://siis.cse.psu.edu/android_sec_tutorial.html}} } @MISC{url:sourcery, author = {Mentor Graphics}, title = {Sourcery CodeBench Lite Edition}, howpublished = {\url{http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/}} } @TECHREPORT{dalvik_analysis, author = {Security Engineering Research Group}, title = {Analysis of Dalvik Virtual Machine and Class Path Library}, institution = {Institute of Management SciencesPeshawar, Pakistan}, year = {2009} } @BOOK{mobile_app_security, title = {Mobile Application Security}, publisher = {McGraw-Hill}, year = {2010}, author = {{Himanshu Dwivedi, Chris Clark, David Thiel}} } @BOOK{android_forensics, title = {Android Forensics: Investigation, Analysis and Mobile Security for Google Android}, publisher = {Syngress}, year = {2011}, author = {Andrew Hoog} } @MISC{url:debian_arm_qemu, author = {Aurélien Jarno}, title = {Debian on an emulated ARM machine}, howpublished = {\url{http://www.aurel32.net/info/debian_arm_qemu.php}} } @MISC{url:debian_img_arm, author = {Aurélien Jarno}, title = {Debian Lenny arm image for QEMU}, howpublished = {\url{http://people.debian.org/~aurel32/qemu/arm/}} } @MISC{url:debian_img_armel, author = {Aurélien Jarno}, title = {Debian Lenny and Squeeze armel images for QEMU}, howpublished = {\url{http://people.debian.org/~aurel32/qemu/armel/}} } @MISC{url:jasmin, author = {{Jonathan Meyer, Daniel Reynaud}}, title = {Jasmin}, howpublished = {\url{http://jasmin.sourceforge.net}} } @BOOK{mobile_malware, title = {Mobile Malware Attacks and Defense}, publisher = {Elsevier}, year = {2009}, author = {{Ken Dunham, et al}} } @MISC{url:securitycompass, author = {Security Compass Labs}, title = {New Mobile Security Course and ExploitMe Mobile}, howpublished = {\url{http://labs.securitycompass.com/mobile/23/}} } @MISC{url:dava, author = {{Laurie J. Hendren, et al}}, title = {Dava: A tool-independent decompiler for Java}, howpublished = {\url{http://www.sable.mcgill.ca/dava/}} } @MISC{url:androidcracking, author = {lohan+}, title = {android cracking}, howpublished = {\url{http://androidcracking.blogspot.com/}} } @CONFERENCE{arm_ropmap, author = {Long Le, Thanh Nguyen}, title = {ARM exploitation ROPmap}, booktitle = {Black Hat USA}, year = {2011} } @MISC{url:dedexer, author = {Gabor Paller}, title = {Dedexer}, howpublished = {\url{http://dedexer.sourceforge.net/}} } @MISC{url:radare, author = {{pancake. 
et al}}, title = {radare, the reverse engineering framework}, howpublished = {\url{http://www.radare.org/}} } @MISC{url:dex2jar, author = {Panxiaobo}, title = {dex2jar}, howpublished = {\url{http://code.google.com/p/dex2jar/}} } @MISC{url:undx, author = {Marc Sch\"{o}nefeld}, title = {undx}, howpublished = {\url{http://undx.sourceforge.net/}} } @CONFERENCE{dalvik_undx, author = {Marc Schönefeld}, title = {Reconstructing Dalvik applications}, booktitle = {CONFidence}, year = {2009} } @MISC{url:smiasm, author = {serpilliere}, title = {smiasm}, howpublished = {\url{http://code.google.com/p/smiasm/}} } @BOOK{android_app_security, title = {Application Security for the Android Platform}, publisher = {O'Reilly}, year = {2012}, author = {Jeff Six} } @MISC{url:android4me, author = {Dmitry Skiba}, title = {android4me}, howpublished = {\url{http://code.google.com/p/android4me/}} } @MISC{url:xmlpull, author = {Aleksander Slominski}, title = {XML Pull Parsing}, howpublished = {\url{http://www.xmlpull.org/}} } @BOOK{art_virus, title = {The Art of Computer Virus Research and Defense}, publisher = {Addison-Wesley}, year = {2005}, author = {Peter Szor} } @MISC{url:jad, author = {Tomas Varaneckas}, title = {Jad}, howpublished = {\url{http://www.varaneckas.com/jad}} } @MISC{url:dex-decomplier, author = {wendal1985}, title = {dex-decomplier}, howpublished = {\url{http://code.google.com/p/dex-decomplier/}} } @MISC{url:zip_format, author = {Wikipedia}, title = {Zip (file format)}, howpublished = {\url{http://en.wikipedia.org/wiki/ZIP_(file_format)}} } @MISC{url:androidaudittools, author = {wuntee}, title = {androidAuditTools}, howpublished = {\url{https://github.com/wuntee/androidAuditTools}} } @ARTICLE{arm_alphanumeric, author = {YYounan, PPhilippaerts}, title = {Alphanumeric RISC ARM Shellcode}, journal = {Phrack Magazine}, year = {2009}, volume = {13}, number = {66} } @BOOK{android_jishuneimu, title = {Android技术内幕:系统卷}, publisher = {机械工业出版社}, year = {2011}, author = {杨丰盛} } @BOOK{android_neihepouxi, title = {Android内核剖析}, publisher = {电子工业出版社}, year = {2011}, author = {柯元旦} } @BOOK{android_shenrulijie, title = {深入理解Android:卷I}, publisher = {机械工业出版社}, year = {2011}, author = {邓凡平} } @MISC{url:android_dev, title = {Android Developers}, howpublished = {\url{http://developer.android.com}} } @MISC{url:angstrom, title = {{The {\AA}ngstr{\"o}m Distribution}}, howpublished = {\url{http://www.angstrom-distribution.org/}} } @MISC{url:blackhat, title = {black hat multimedia archives}, howpublished = {\url{https://www.blackhat.com/html/archives.html}} } @MISC{url:blog_gliethttp, title = {gliethttp}, howpublished = {\url{http://gliethttp.blog.chinaunix.net}} } @MISC{url:group_mobilemalware, title = {mobile.malware group}, howpublished = {\url{http://groups.google.com/group/mobilemalware}} } @MISC{url:openssl, title = {OpenSSL}, howpublished = {\url{http://www.openssl.org/}} } @MISC{url:seandroid, title = {SEAndroid}, howpublished = {\url{http://selinuxproject.org/page/SEAndroid}} } @MISC{url:viaforensics, title = {viaforensics}, howpublished = {\url{http://viaforensics.com/}} } @MISC{url:vx_heavens, title = {VX Heavens}, howpublished = {\url{http://vx.netlux.org}} } @MISC{url:wireshark, title = {Wireshark}, howpublished = {\url{http://www.wireshark.org/}} } @MISC{url:xda, title = {XDA Developers Forum}, howpublished = {\url{http://forum.xda-developers.com/}} } @comment{jabref-meta: selector_publisher:} @comment{jabref-meta: selector_author:} @comment{jabref-meta: selector_journal:} @comment{jabref-meta: 
selector_keywords:}
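A throwaway way to sanity-check a BibTeX database like the one above is to tally entries per type with nothing but the standard library. The sketch below assumes the file has been saved as `references.bib` (the filename is an assumption, not part of the original file); a real tool such as JabRef would of course parse the entries properly rather than rely on a regular expression.

```python
# Minimal sketch: count BibTeX entry types (@ARTICLE, @MISC, @BOOK, ...) in a .bib file.
import re
from collections import Counter

with open("references.bib", encoding="utf-8") as f:
    text = f.read()

# Match "@TYPE{" at the start of each entry; JabRef's "@comment" blocks are
# matched too, so they are dropped afterwards.
types = Counter(m.group(1).lower() for m in re.finditer(r"@(\w+)\s*\{", text))
types.pop("comment", None)

for entry_type, count in types.most_common():
    print(f"{entry_type}: {count}")
```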
alias was Morph cuz she was never true to form and always outta line. motto was 'fuck the box', it binds. i walk backwards into time and move forward on rewind. i.like.my.art .abstract. Kadinsky:Ordered Compression. 116x89 Music: vernacular of the soul

Monday, March 8, 2010

she had no tears left to relinquish for sanity's sake
no anguished screams to exchange for a night's peace
slowly stepped over into abyss
lost to the world, she smiles
Here nothing exists to remind her of him
or what he did
and what she was pregnant with

pauses (or some variation of that) is gonna be a series of bootleg writings to get me out of this writers block rut
They probably wont be complete or very good
but i gotta start somewhere
wish me luck
ps: hair post goes up when i find my cam
Will probably weave it up tuesday and continue this damn weave as protective style challenge

Wednesday, March 3, 2010

I call my supervisor "fake llcoolj green mile lookin motherfuckin hoe" (To be fair he is always lickin his lips when he's talkin to the ladies and he is a husky 6'5") or "Shug" when im bein lazy (Shug as in Shug Avery not Knight cuz he likes to dress in drag on the weekends...okay maybe he doesnt do that but i tell him he should)

my coworker, an old bag who is always inappropriate but who i cant get mad at cuz he's old as hell n funny with his skeezy old man routine, "project hoe"
This old bag has almost no standards
He said so himself
if its walkin n looks like it could have been female at any point in time, he'll stick his dick in it (I made him promise me tomorrow that he'll do one of his pole dances while on the train ride from work. Did i mention that he used to be a stripper back in his heyday. I think he told me his stage name was "SMOKE") ILL GET VIDEO N POST IT IF HE DOES

my other coworker, This thick jewish girl who wears tight pants all the time. "yeast infection skank or skankasaur" dependin on the day
she tells mad innapropriate but hilariously funny stories about her sex life
Usually starts off with "Wanna hear somethin funny"
She pretends to get bashful for about two seconds after which she proceeds to tell me about the time she broke her boyfriend's dick
or the time when she was twenty and she went to a hotel with some entertainment lawyer and some 35 year old woman that picked her up in the bar
She's great at parties by the way

This older British guy i call my husband unless he's gettin on my nerves at which point he's "the baby daddy who im threatenin to divorce and kidnap our kids and live off the grid somewhere he'll never track us down" i try not to get mad at him too often cuz that shit is a mouthful. He's this painfully thin British dude who loves horror films and can be quite patronizin and rude at times. That's why i love his sarcastic flat ass. doesnt hurt that he calls me "farmer slut" either (don't ask)

I.M.PERFECT

i've always maintained that ive been insane since before i even descended upon this terrestrial plane. barreling head first from that celestial plane. sinister grin on my face as i made my way to life's rugged terrain, cuz i was a thrill seeker. before I even knew my own name. never heard of someone growing more sane, so everyday i know i grow more deranged. condition? rapidly deteriorating like a case of alzheimers to the brain. 'don't know my own name' be my constant refrain like im in limbo. and tho im nimble, tired of bending over backwards under that friggin stick, then under the knife cuz anyway u slice it folk say i just aint right and that must be the reason i'm always left behind.
or maybe i'm just evolutionary theory come to life. unsuspend you from that Matrix, bring u forth into the light. inverse then reverse it so it's night of the living dead turned to life dying of the day from day of the dying life. they say true genius is insanity. so i say fuck normality. take my art to bed, make love to the the lune in me. embrace the demented, get off to the moon in me. flash the maniac and dance to the tune in me.
Design, synthesis and biological evaluation of new Axl kinase inhibitors containing 1,3,4-oxadiazole acetamide moiety as novel linker. Using the principle of bioisosteric replacement, we present a structure-based design approach to obtain new Axl kinase inhibitors with significant activity at the kinase and cellular levels. Through a stepwise structure-activity relationship exploration, a series of 6,7-disubstituted quinoline derivatives, which contain a 1,3,4-oxadiazole acetamide moiety as a novel linker, were ultimately synthesized with Axl as the primary target. Most of them exhibited moderate to excellent activity, with IC50 values ranging from 0.032 to 1.54 μM against the tested cell lines. Among them, the most promising compound 47e, as an Axl kinase inhibitor (IC50 = 10 nM), shows remarkable cytotoxicity against A549, HT-29, PC-3, MCF-7, H1975 and MDA-MB-231 cell lines. More importantly, 47e also shows a significant inhibitory effect on the EGFR-TKI-resistant NSCLC cell line H1975/gefitinib. Meanwhile, this study provides a novel type of linker for Axl kinase inhibitors, namely the 1,3,4-oxadiazole acetamide moiety, which falls outside the scope of the "5-atom rule".
Sunday, June 3, 2012

Passing recipes on

My mom and me at 8 yrs old.

I had a bunch of organic apples that were too soft to eat but not spoiled yet and I didn't want to turn them into applesauce or chutney or compote or any of the mushy fruit type of saves, so I decided to just cut them up and toss them in a saute pan with brown sugar, butter and some brandy and caramelize them just to prevent them from all going bad. Then I started thinking that there was too much to do the typical thing of having them with ice cream or making an apple crisp or something and I started thinking about the way my mother used to make apple pie. She never made traditional pie dough but she would make her sugar cookie recipe, which wasn't a typical sugar cookie recipe; it was crunchy on the outside and soft on the inside. The only thing was that she wouldn't cook the apples first so sometimes if the apples had a lot of water content it would get mushy the next day. Once I got older I learned the trick of adding some flour to the apples to thicken up the juices. She would add lemon juice and brandy to flavour and prevent discolouring but it would add a lot more liquid to it. Although the flavour was great. Later on when I was more food savvy I would cook the apples to caramelize them and then my mother would add them to the pie. I used to help her make it but I never paid much attention to the measurements and only to the ingredients that went into it. Now I wish I had paid more attention because the recipe isn't written down anywhere as my mother wasn't the best note taker and because she didn't have much education she wasn't an organized person. So why don't I just ask her how to make it? Well I can't anymore. You see my mother was diagnosed with Dementia about 7 years ago. My mother wasn't one to have all of her recipes written up in a nice book or recipe box and she only had the odd recipe that she had written down from someone else's recipe. It would be written on a scrap piece of paper in pencil and in my mother's not so legible handwriting. I only wrote down a couple of recipes that my mother made because most of the things weren't exact measurements every time and she would change things up once in a while. There were a few recipes she baked and was known for making. Sugar cookies, chiffon cake, chocolate mousse and after she started winding down on those recipes she became obsessed with frying wontons and adding icing sugar on top of them. She would hand them out to everyone at her bank, drug store and doctor's office. That was the only thing she remembered how to do. I should have realized that she had lost most of her memory when she stopped doing that. I thought it was because people were telling her to stop making them, but as I look back now I think she probably started to forget how to make everything. She passed away last April after being in a long term care facility for a year and a half. I never really had the chance to ask her things like recipes and info about relatives and now it's too late. My mother's best friend is still alive and is younger than my mother so I try and ask her if she remembers any of the recipes but she only knows some of them so some of the recipes are lost in my family forever now. Today it made me realize the value of PASSING ON RECIPES to family members and carrying them forward to future generations.
I remember a lot of the recipes but not all of them and since I don't have any kids they will be gone after I am gone unless I write them in a cookbook or pass them on to friends or in this blog. I will do my best to try and document any such recipes as I carry on writing these blog posts. So my advice to all of you cooks and bakers is to write down your precious recipes and pass them on. Even if they are simple, if someone loves them, try to document them somehow. I actually have the last batch of wontons my mother ever made in my first film "Potluck". They made a mess when you would take a bite because of the powdered sugar on top, so I used that in a funny scene and used the actual wontons my mother made. That was the last time she ever made any and I didn't even know it at the time. People make fun of people that post photos of food and talk about food. But once again my thought is that food is universal and is meant to be shared.
Shankarpalli railway station

Shankarpalli Railway Station is located in Rangareddi District of Telangana State, India and serves Shankarpalli.

Overview

Shankarpalli is a station located on the railway line between Secunderabad Junction and Vikarabad Junction. It is well connected to Bidar, Tandur, Secunderabad, Vijayawada, Guntur, Kazipet, Tirupati, Shirdi, Nizamabad, CSMT Kolhapur, Manuguru and Kakinada Port railway stations through passenger, express and superfast express trains.

References

External links

Category:Railway stations in Ranga Reddy district
Category:Secunderabad railway division
Comparison of the bioactivity of mometasone furoate 0.1% fatty cream, betamethasone dipropionate 0.05% cream and betamethasone valerate 0.1% cream in humans. Inhibition of UV-B-induced inflammation monitored by laser Doppler blood flowmetry. The bioactivity of a novel topical glucocorticosteroid, mometasone furoate 0.1% fatty cream, was compared with betamethasone dipropionate 0.05% cream and betamethasone valerate 0.1% cream. An ultraviolet light (UV-B)-induced inflammation assay in humans was used, and the combined effect of a single, open application of the corticosteroids was evaluated. Reduction of UV-B-induced inflammation was monitored by laser Doppler blood flowmetry, clinical skin scoring and skin reflectance spectrophotometry. Skin scoring and reflectance spectrophotometry were found unsuitable because one of the cream vehicles contained titanium dioxide, which shielded skin erythema. Laser Doppler blood flowmetry showed that mometasone furoate 0.1% fatty cream was more than twofold better in reducing UV-B-induced inflammation than betamethasone dipropionate 0.05% cream and betamethasone valerate 0.1% cream, and that the effect was sustained for at least 24 h after a single application.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> <meta http-equiv="X-UA-Compatible" content="IE=9"/> <meta name="generator" content="Doxygen 1.8.3"/> <title>xilflash: xilflash_intel.h File Reference</title> <link href="tabs.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="dynsections.js"></script> <link href="navtree.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="resize.js"></script> <script type="text/javascript" src="navtree.js"></script> <script type="text/javascript"> $(document).ready(initResizable); $(window).load(resizeHeight); </script> <link href="doxygen.css" rel="stylesheet" type="text/css" /> <link href="HTML_custom.css" rel="stylesheet" type="text/css"/> </head> <body> <div id="top"><!-- do not remove this div, it is closed by doxygen! --> <div id="titlearea"> <table cellspacing="0" cellpadding="0"> <tbody> <tr style="height: 56px;"> <td id="projectlogo"><img alt="Logo" src="xlogo_bg.gif"/></td> <td style="padding-left: 0.5em;"> <div id="projectname">xilflash </div> <div id="projectbrief">Xilinx Vitis Drivers API Documentation</div> </td> </tr> </tbody> </table> </div> <!-- end header part --> <!-- Generated by Doxygen 1.8.3 --> <div id="navrow1" class="tabs"> <ul class="tablist"> <li><a href="index.html"><span>Overview</span></a></li> <li><a href="annotated.html"><span>Data&#160;Structures</span></a></li> <li><a href="globals.html"><span>APIs</span></a></li> <li><a href="files.html"><span>File&#160;List</span></a></li> </ul> </div> </div><!-- top --> <div id="side-nav" class="ui-resizable side-nav-resizable"> <div id="nav-tree"> <div id="nav-tree-contents"> <div id="nav-sync" class="sync"></div> </div> </div> <div id="splitbar" style="-moz-user-select:none;" class="ui-resizable-handle"> </div> </div> <script type="text/javascript"> $(document).ready(function(){initNavTree('xilflash__intel_8h.html','');}); </script> <div id="doc-content"> <div class="header"> <div class="headertitle"> <div class="title">xilflash_intel.h File Reference</div> </div> </div><!--header--> <div class="contents"> <a name="details" id="details"></a><h2 class="groupheader">Overview</h2> <div class="textblock"><p>This file consists definitions, Macros and structures specific to the Intel flash devices. </p> <pre> MODIFICATION HISTORY:</pre><pre>Ver Who Date Changes </p> <hr/> <p> 1.00a rmm 10/25/07 First release 1.00a mta 10/25/07 Updated to flash library 1.01a ksu 04/10/08 Added support for AMD CFI Interface 1.02a ksu 06/16/09 Added support for multiple banks in Intel flash. Added Reset Bank function. Added support for 0xF0 reset command. Added XFL_DEVCTL_SET_CONFIG_REG IOCTL to write to the Configuration Register of the Xilinx Platform Flash XL which can be used to set the Flash in Sync/Async mode. The Xilinx Platform Flash XL is set to Async mode during the initialization of the library. Added bank(s) reset function at the top of the read function. Updated Lock and Unlock operations for multiple blocks. 3.01a srt 03/02/12 Added support for Micron G18 Flash device to fix CRs 648372, 648282. 3.02a srt 05/30/12 Changed Implementation for Micron G18 Flash, which fixes the CR 662317. 
CR 662317 Description - Xilinx Platform Flash on ML605 fails to work.</pre><pre></pre> </div></div><!-- contents --> </div><!-- doc-content --> <div id="nav-path" class="navpath"><!-- id is needed for treeview function! --> <ul> <li class="footer">Copyright &copy; 2015 Xilinx Inc. All rights reserved.</li> </ul> </div> </body> </html>
Noncommunicable diseases

Noncommunicable diseases (NCDs) are a group of diseases that affect individuals over an extended period of time, causing socio-economic burden to the nation. The major NCDs share four behavioral risk factors: unhealthy diet, lack of physical activity, and use of tobacco and alcohol. Factors contributing to the rise of NCDs also include ageing, rapid unplanned urbanization and globalization. In 2008, NCDs accounted for 5.2 million deaths in India. A rising trend in the burden of NCDs is expected in the years ahead. There are primarily four types of noncommunicable diseases: cancer, chronic respiratory disease, stroke and diabetes, which are responsible for a majority of morbidity and mortality in the country. Mental health and injuries also have a considerable burden. In South-East Asia, including India, NCDs affect a relatively younger population as compared to the western countries, thus causing severe economic burden to the nation. In order to reduce the growing burden of NCDs, it is important not only to address the diseases but also their key underlying risk factors, namely tobacco use, unhealthy diet, harmful use of alcohol and physical inactivity. A range of interventions have been identified that constitute ‘best buys’.

Best-buys for addressing the NCD risk factors

Preventive strategies focus on the common underlying behavioral risk factors for NCDs, including tobacco and harmful alcohol use, physical inactivity and unhealthy diet. These will help in controlling the metabolic risk factors like raised blood pressure, blood sugar and cholesterol, and obesity.

Tobacco control

Implementing the key elements of the WHO Framework Convention on Tobacco Control has been found cost-effective. These include increasing taxes, comprehensive legislation creating smoke-free indoor workplaces and public places, health information and warnings about the effects of tobacco, and bans on advertising, promotion and sponsorship.

Harmful alcohol use

Reduction in the harmful use of alcohol not only prevents cancers and cardiovascular diseases, but also prevents conditions like liver cirrhosis, depression and road traffic injuries. Enhanced taxation of alcoholic beverages and comprehensive bans on their advertising/marketing have proved to be beneficial.

Unhealthy diet

Excessive salt intake is related to raised blood pressure. Reducing salt content in foods is an effective strategy. The use of added salt should be discouraged. In India, we need to address both homemade and processed food. Population based approaches include reaching out through mass media campaigns. Use of polyunsaturated fats as a cooking medium, along with avoiding transfats, is also recommended.

Physical inactivity

Indoor air pollution

The dependence on solid fuels (coal, wood, animal dung, crop wastes) and traditional stoves for cooking and heating leads to high levels of indoor air pollution. This increases the risk of childhood pneumonia, chronic lung disease and lung cancers. In addition to tobacco control, reducing indoor air pollution is the most important strategy for preventing chronic lung disease, particularly in non-smokers.

Best-buys for tackling major NCDs

Cardiovascular disease (CVD) and diabetes

Counselling and multi-drug therapy (including blood sugar control for diabetes mellitus) for people at risk of developing heart attacks and strokes will reduce the morbidity and mortality due to these conditions.
A regimen of aspirin, a statin and blood pressure-lowering agents will significantly reduce vascular events in people with cardiovascular risk and is considered a best buy. Preventive measures, such as tobacco cessation and adopting a healthy lifestyle, augment the therapeutic benefits. Administration of aspirin to people who develop a myocardial infarction is another best buy.

Cancer

Hepatitis B immunization beginning at birth can prevent liver cancer. Presently a regimen of three doses, the first at birth (only possible in case of institutional deliveries) and then at six weeks, 10 weeks and 14 weeks along with diphtheria, pertussis and tetanus (DPT), has been included in the Universal Immunization Programme (UIP). Screening and treatment of pre-cancerous lesions is effective for preventing cervical cancer. Pain relief and palliative care is a low cost, yet essential, intervention when judged against societal norms and standards, keeping in mind the human rights perspective.

Chronic respiratory disease

Chronic respiratory diseases, including asthma and chronic obstructive pulmonary disease, are major contributors towards morbidity and mortality in the country. Treatment of persistent asthma with inhaled corticosteroids and beta-2 agonists like salbutamol is a very low cost intervention that is feasible to deliver in primary care, but its cost-effectiveness is limited by its modest impact on disease burden. However, as already mentioned above, tobacco cessation and alleviation of indoor air pollution are the key strategies for preventing chronic respiratory disease.
Q: Usefulness of an iron core in a magnetic coil used to generate current I wish to light a LED by moving a permanent magnet relative to an inductor consisting of a coil of copper wire. Can I increase efficiency (current produced for a given magnet motion) by using an iron or ferrite core in my magnetic coil? A: Of course, that is what is regularly done in all common AC generators - the coils are wound on special steel poles with high permeability. Well placed ferromagnetic core can increase the induced emf in wires by three and even more orders of magnitude.
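To put a rough number on that, here is a back-of-the-envelope Faraday's-law estimate (all values below are illustrative assumptions, not part of the question or answer): the effective permeability of the core simply scales the flux change, and hence the peak EMF, for the same magnet motion.

```python
# Crude estimate of peak EMF for a magnet swept past an N-turn coil:
#   emf ~ N * d(Phi)/dt ~ N * mu_eff * B * A / t_sweep
# mu_eff is the *effective* permeability of the core (geometry-limited, usually
# far below the material's intrinsic mu_r for a short open rod).
import math

def peak_emf(n_turns, area_m2, b_peak_tesla, sweep_time_s, mu_eff=1.0):
    return n_turns * mu_eff * b_peak_tesla * area_m2 / sweep_time_s

coil = dict(n_turns=500, area_m2=math.pi * 0.01**2, b_peak_tesla=0.05, sweep_time_s=0.05)
print(f"air core:     {peak_emf(**coil):.3f} V")
print(f"ferrite core: {peak_emf(**coil, mu_eff=50):.3f} V")  # assumed mu_eff ~ 50
# A typical LED needs roughly 2 V forward voltage, so the cored coil is far more
# likely to produce a visible flash for the same hand motion.
```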
506 F.Supp. 806 (1980) TWO WHEEL CORP. d/b/a Honda of Mineola, Plaintiff, v. AMERICAN HONDA CORPORATION, Defendant. No. 79 C 3064. United States District Court, E. D. New York. March 19, 1980. *807 Meltzer, Lippe & Goldstein by Richard A. Lippe, Mineola, N. Y., for plaintiff. Lyon & Lyon by Roland N. Smoot, Robert C. Weiss, Allan W. Jansen, Los Angeles, Cal., for defendant. DECISION AND ORDER BRAMWELL, District Judge. When a national distributor of goods entrusts the local sale of such goods to a small dealer, the distributor must accept the economic reality that it cannot completely control the activity of the independent local outlet. The extent of the dealer's independence, therefore, may become a source of friction between the contracting parties. This case evolves from the plaintiff Honda of Mineola's claim that this friction has manifested itself in the termination of its Honda dealership. Being of the belief that the defendant lacked a valid basis for terminating its dealership, the plaintiff filed the instant action which attacks both the motive behind, and the grounds for, the defendant's discontinuance of its status as a Honda dealer. This Decision and Order concerns the plaintiff's motion for a preliminary injunction in that action. On December 3, 1979, the plaintiff asked this Court to temporarily restrain the termination of its dealership. After hearing *808 oral argument by representatives of the parties, the Court acceded to the plaintiff's request. The parties then set forth their positions with respect to the instant motion in detail during a six (6) day hearing held on December 13, 17, 18, 19, 20 and 21 of 1979. Pursuant to Fed.R.Civ.P. 52(a), the Court's evaluation of the evidence elicited at this lengthy hearing, and its disposition of the instant motion, will be rendered in the form of Findings of Fact and Conclusions of Law. FINDINGS OF FACT I. The Parties 1. The plaintiff Two Wheel Corp. d/b/a Honda of Mineola is a New York corporation (Complaint, par. 4). 2. The defendant American Honda Corporation is a California corporation with its principal place of business in Gardena, California (Complaint, par. 6). 3. The defendant is the exclusive United States distributor of motorcycles manufactured in Japan by its parent corporation Honda Motor Corp. Ltd. (Complaint, par. 6). 4. These motorcycles are distributed by the defendant in the United States through a network of approximately 1750 dealers (Complaint, par. 6). II. The Evolution of Honda of Mineola 5. In 1965, the plaintiff, through its president and principal stockholder Morris Zegarek, entered into a written agreement with the defendant pursuant to which the plaintiff became a dealer for the sale, servicing and assembly of Honda motorcycles (15).[*] No provision in this agreement forbids a Honda dealer from selling non-Honda parts and accessories (See Pl. Ex. 2). 6. The plaintiff established its original facility at 529 Jericho Turnpike, Mineola, N. Y. (88). 7. In 1970 or 1971, the plaintiff moved its facility to 336 Jericho Turnpike, Mineola, N. Y. (88). 8. Until September 20, 1979, the plaintiff conducted a "full service-sales motorcycle operation" at 336 Jericho Turnpike (88). 9. Over the last 15 years, this operation has constituted Mr. Zegarek's sole business activity and has been the sole source of support for his family (14). 10. Mr. Zegarek presently is 58 years of age (12). 11. 
During the course of the contractual relationship between the plaintiff and the defendant, the plaintiff established an excellent financial record, while maintaining in good standing a $700,000 line of credit at a local bank (178). 12. In 1977, 1978 and 1979, the plaintiff ranked number one in sales of Honda motorcycles in the region comprising New York, New Jersey, Connecticut, Vermont, Pennsylvania, Rhode Island, Maine, and Massachusetts. (70-72, Pl. Ex. 6, 7).[**] 13. Pursuant to a lease dated July 7, 1971 (Pl. Ex. 11), the plaintiff, from 1971 on, began to operate a facet of its business at 344 Jericho Turnpike, Mineola, N. Y. (91). 344 Jericho Turnpike is located across the street from 336 Jericho Turnpike (88). 14. The lease agreement entered into between the plaintiff and the lessor of 344 Jericho Turnpike contains the following provision: Tenant shall use and occupy premises for selling storing and servicing of automobiles, motorcycles and accessories. (94, Pl. Ex. 11). 15. This activity at 344 Jericho Turnpike included servicing and assembly of motorcycles (94), and motorcycle storage (318). 16. Although disputed by the testimony of a former employee of the plaintiff (527), and by a representative of the defendant (805), Mr. Zegarek testified that some motorcycle *809 display (323), and a few motorcycle sales took place at 344 Jericho Turnpike between July 7, 1971 and September 20, 1979.[1] 17. No written document evidences the extent of the plaintiff's use of 344 Jericho Turnpike before September 20, 1979 (351). 18. Mr. Zegarek testified that the defendant was aware of the plaintiff's use of 344 Jericho Turnpike because: (a) Between July 7, 1971 and September 20, 1979, the defendant delivered merchandise to the 344 Jericho Turnpike location (95). (b) In May or June of 1979, Tony Ferente, a representative of the defendant's lawn mower division, visited the 344 Jericho Turnpike location with an eye toward sanctioning a Honda lawn mower showroom at that site (102-03). (c) The 344 Jericho Turnpike location periodically has been visited by representatives of the defendant since 1971 (96, 347, 380). 19. On September 20, 1979, the plaintiff's facility at 336 Jericho Turnpike burned to the ground, apparently as a result of arson (90). Mr. Zegarek had procured fire insurance on the 336 Jericho Turnpike location (374). 20. After the September 20, 1979 fire, the plaintiff transferred its entire operation to the 344 Jericho Turnpike location (90). 21. At a cost of over $80,000, the plaintiff has made extensive improvements to the 344 Jericho Turnpike location in an attempt to conform it to the defendant's facility requirements (97). 22. In its renovated condition, the 344 Jericho Turnpike facility contains over 2000 square feet of showroom area in total and 1600 square feet of service area (98-100).[2] 23. Mr. Zegarek testified that the 344 Jericho Turnpike is comparable to, or larger than, the facilities employed by other Long Island Honda dealers (98-101, Pl. Ex. 12; see also Def. Ex. I).[***] 24. On October 5, 1979, the defendant refused to accept further motorcycle orders from the plaintiff (105-07). 25. Mr. Zegarek's inquiries with respect to this situation disclosed that the plaintiff had been placed on "Code 1" status as a result of the move to 344 Jericho Turnpike (107). 26. Mr. Zegarek testified that representatives of the defendant assured him that the problems would be straightened out after a technical computer alteration (107). 27. 
Five days later, Winston Farrington, the defendant's sales representative (801) visited the 344 Jericho Turnpike facility (109, 803). 28. Mr. Zegarek testified that, during the course of his visit, Mr. Farrington gave a "verbal approval" for the use of 344 Jericho Turnpike as a Honda facility. This apparently occurred after Mr. Farrington heard Mr. Zegarek's renovation plans for the building (109, 477, 480, Pl. Ex. 30, Def. Ex. J). 29. Mr. Farrington, however, testified that he did not officially recommend to the defendant that the use of 344 Jericho Turnpike be approved (804). A report written by Mr. Farrington supports this testimony (Def. Ex. AI, AK). 30. Steven Sabatini, defendant's District Service Manager (818) also testified that he did not approve the 344 Jericho Turnpike facility as an authorized Honda location after the fire (821). 31. Between September 20, 1979 and November 21, 1979, the plaintiff did not receive motorcycles, "special factory tools, special engine tools, special parts, service manuals" and other necessary manuals from the defendant (151-66). *810 32. Between September 20, 1978 and December 17, 1978, the plaintiff sold 100-150 motorcycles; between September 20, 1979 and December 17, 1979, the plaintiff sold 15 motorcycles (165). 33. As a result of the defendant's failure to respond to the plaintiff's requests for motorcycles and parts, customers of the plaintiff cancelled numerous orders (132-46, Pl. Ex. 13, 14, 15, 16, 17). 34. Citing the plaintiff for operating out of an unauthorized location, for submitting false claims, for disregarding local law and for poor set up of motorcycles, the defendant terminated the plaintiff's dealership status on November 21, 1979 (Pl. Ex. 1). III. The False Claims Issue 35. Pursuant to the Honda Dealer's Agreement, claims by a Dealer for damage to shipped motorcycles occurring during shipment to the dealer are honored by American Honda (774). 36. Similarly, when a crate containing a motorcycle does not evidence damage, but the motorcycle within the crate is damaged upon reaching the dealer, American Honda honors a dealer's claim for concealed shipping damage (775). 37. The dealer, however, possesses responsibility for damage to motorcycles arising out of incidents that occur at the dealer's facility (944). 38. Michael Karlin, service manager at Honda of Mineola from September or October of 1977 until March or April of 1978 (506), testified that Mr. Zegarek authorized him to record the plaintiff's in house damage as concealed shipping damage so that the defendant would pay for the loss (504, 515). Mr. Karlin also testified that the plaintiff submitted duplicate shipping claims to American Honda (513, 515). 39. Mr. Karlin testified that the overcrowded nature of 336 Jericho Turnpike's storage facilities (502, 510, 520) and the transportation by forklift of motorcycles from 336 to 344 Jericho Turnpike (520) caused numerous motorcycles to fall and to be damaged on the plaintiff's premises. 40. Mr. Karlin, however, testified to only one specific submission of an allegedly false shipping claim (509). 41. Prior to his employment by the plaintiff, Mr. Karlin worked at a Volvo dealer for 3 years (536). After having words with his employer, he was asked to leave this job (534). 42. Mr. Karlin also worked at a Suzuki dealer before being hired by the plaintiff (541); after asking for more money, he also was discharged from this position (542, 563). 43. Mr. Zegarek testified that, during Mr. 
Karlin's last two months at Honda of Mineola, he neglected paper work,[3] spent excessive time on personal phone calls and occasioned customer complaints because of his false promises (868). 44. Mr. Zegarek added that Mr. Karlin had financial difficulties during his employment with the plaintiff (869, 891). 45. Mr. Zegarek testified that he fired Mr. Karlin (869). 46. Conceding that, in most cases, he cannot tell when shipping damage to motorcycles occurs (947) due to the practical problems inherent in making such a determination (916), Mr. Zegarek testified that he "never instructed anyone at Honda of Mineola to make out a false shipping claim" (921). 47. Mr. Zegarek also noted that if motorcycles fall from forklifts at Honda of Mineola, they are examined to determine if such a fall occasioned damage (917-18). 48. Deposition testimony by Jeffrey Prince (Pl. Ex. 33) and the testimony before this Court of Sam Cohen (955) corroborate *811 Mr. Zegarek's testimony. Mr. Prince and Mr. Cohen have served in the plaintiff's employ (Pl. Ex. 33, 951). 49. Mr. Karlin also testified that the plaintiff submitted warranty claims to the defendant for Yuassau batteries when, in fact, AMB batteries were sold to the customer (523). Yuassau batteries are covered by American Honda's warranty; AMB batteries are not (523). 50. Mr. Karlin added that Mr. Zegarek had obtained the franchise for the AMB batteries (523). The AMB batteries cost less than the Yuassau batteries (526, 940). 51. Mr. Zegarek, however, explained that the enactment of local laws mandating that motorcycles travel with their headlights on caused a run on batteries (893). Since the defendant could not supply an adequate number of Yuassau batteries, the plaintiff sold the available AMB batteries instead (462, 896, Def. Ex. U). 52. In this regard, Mr. Zegarek testified that in no case did the plaintiff ever supply an AMB battery to a customer "without his knowledge of it and without his option to replace it" with a Yuassau (896). 53. No representative from American Honda ever has advised the plaintiff of its belief that false shipping or false warranty claims have been submitted by the plaintiff (211, 789, 808, 817, 919). 54. In early 1978, as a result of a burglary, the plaintiff made 123 claims to its insurance broker for damage valued at $6,096 (195-201). 55. Three of these insurance claims were submitted on items previously claimed to the defendant under the rubric of shipping damage (203-08). 56. Mr. Zegarek testified that this duplicate filing of claims resulted from inadvertance (205), and that he had withheld receipt of payment on the entire insurance claim until the discrepancy was ironed out (210-11).[4] IV. The Plaintiff's Relations with the Village of Mineola 57. Throughout the plaintiff's tenure in Mineola, residents of the Village have complained to their public officials and to the defendant about the noise and the element incidental to, and attracted by, the plaintiff's business (220, 241, 251-52, 667, 668, 679, Def. Ex. A, C, E, AA). 58. In 1971, Byron F. Katz, a Mineola trustee asked the plaintiff to minimize motorcycle activity through the rear of 336 Jericho Turnpike where a residential area is located (214). By utilizing front entrances, the plaintiff complied with this request (215). 59. As the years went by, the Village made similar demands concerning hours of operation, deliveries, business traffic and employee parking (215-16). 60. 
In 1974, the Village issued the plaintiff a number of summonses for violation of noise and access ordinances; these eventually were dismissed (220-25, Pl. Ex. 25). 61. During the course of 1977 and 1978, the Village issued the plaintiff and Mr. Zegarek over 100 summonses for alleged improper conduct in the course of operations (226, 606). The plaintiff was found guilty on two of these summonses, which presently are on appeal (226, 607). Mr. Zegarek was found not guilty on the summons with which he was charged (226). 62. In 1978, the Village of Mineola instituted an action against Honda of Mineola that sought to enjoin its use of 255 Jericho Turnpike as a warehouse for its unsold motorcycles. Honda of Mineola prevailed (228, Pl. Ex. 27). 63. After the September 20, 1979 fire, the Village of Mineola sought to enjoin the owner of 336 Jericho Turnpike from instituting *812 necessary repairs to the structure of the building (232). Judgment was rendered against the Village (234, Pl. Ex. 28). 64. After the September 20, 1979 fire, the Village of Mineola sought to enjoin Honda of Mineola from using 344 Jericho Turnpike for the purpose of selling motorcycles (236, 609, Pl. Ex. 29). A motion by the Village for a preliminary injunction in this action was denied (237, 624). 65. The local law that forms the basis for the Village's case in the action against Honda of Mineola's use of 344 Jericho Turnpike became effective in 1976 while Honda of Mineola actually was using 344 Jericho Turnpike (608). 66. In late 1979, the Village of Mineola issued violations to Honda of Mineola for its resurrection of a sign in front of 344 Jericho Turnpike and for its installation of heaters and air conditioning in the facility without a permit or a variance (238, 611). No decision in this matter has been rendered (239). 67. In several instances, laws drafted by the Village of Mineola designed to affect Honda of Mineola have been vetoed by New York State officials as being outside the scope of the Village's jurisdiction (Pl. Ex. 32, 629). 68. The Mayor of Mineola testified that the plaintiff's business has created a "problem" in the Mineola community (667, 671). 69. The Mayor, however, drew no distinction between Honda of Mineola and American Honda (678, 680, 681). In fact, the Mayor testified that the nature of the plaintiff's business provided the true source of the friction between the plaintiff and the neighboring community (679, 684, 686). 70. Mr. Zegarek explained his "problem" with the Village by citing the following: (a) The fears of the Mineola residents of the stereotypical motorcycle consumer (264-69, Pl. Ex. 24) (b) Mr. Zegarek's belief that the public officials of Mineola represent not the businessmen of the community, but the residents who vote (254) (c) the fact that the plaintiff does not purchase insurance from "the right people" (263) (d) the Village's failure to accept in good faith efforts to compromise offered by the plaintiff (296-99). V. Custom Accessories, Motorcycle Safety and Motorcycle Setup at Honda of Mineola 71. Pursuant to the Honda Motorcycle Dealers Agreement (Pl. Ex. 2), dealers of American Honda are required to set up or assemble motorcycles before displaying them to the customer (771-72, Pl. Ex. 2). Before motorcycles are delivered to a purchaser, other procedures must be followed (77, 771, Pl. Ex. 9). 72. These requirements are designed to ensure that a purchaser of a Honda motorcycle receives the motorcycle in a safe condition. 73. 
On several occasions during his extensive testimony before this Court, Mr. Zegarek expressed an overriding concern for his customers and for motorcycle safety (62, 249, 300, 354. See Def. Ex. R, V). 74. Donald Keller, customer service manager for the defendant (766) expressed his belief that the plaintiff ranked poorly in the Honda network with respect to customer complaints (787). Mr. Keller based this belief on "general conversation" among the defendant's customer relation staff (795).[5] 75. On cross examination of Mr. Keller, plaintiff's counsel established that Mr. Keller's appraisal of the plaintiff's customer relations did not take into account the ratio of the number of customer complaints received to the volume of the plaintiff's sales (795). 76. Between 1973 and 1979, the plaintiff's manner of motorcycle set up has occasioned representatives of the defendant to file 9 formal deficiency notices (73) during *813 the course of their visits to the plaintiff (76, Pl. Ex. 8). 77. These 9 reports cited 187 separate deficiencies on 114[6] display motorcycles (Pl. Ex. 8).[7] 78. Mr. Keller, the defendant's customer service manager (766), testified on his direct examination that, of the 1750 dealers in the Honda network, the plaintiff placed as the worst with respect to motorcycle set up (773, 786). 79. On cross examination of Mr. Keller, however, plaintiff's counsel established that Mr. Keller's appraisal of the plaintiff's set up record did not take into account the ratio of plaintiff's deficiencies to plaintiff's sales volume, and that Mr. Keller had no actual knowledge of the number of the plaintiff's deficiencies for 1977, 1978 or 1979 (797). 80. The plaintiff and the defendant have argued over the set up issue in the past (Def. Ex. N, O, P). 81. Troubled by the plaintiff's set up record, defendant's counsel visited the plaintiff's facility on December 13, 1978 (419, Def. Ex. Q). During the course of said visit, defendant's counsel warned the plaintiff that if his set up performance did not improve, a lawsuit would be instituted by American Honda against Honda of Mineola (420). 82. Mr. Zegarek testified that, during the December 13, 1978 meeting, defendant's counsel advised him that American Honda would rather have the plaintiff sell less motorcycles if that would ensure conformity with American Honda requirements (487).[8] 83. Apparently unsatisfied with the plaintiff's conduct after the December 13, 1978 meeting, American Honda commenced an action against Honda of Mineola in this Court (79 Civ. 562) on March 24, 1979 for breach of contract, and for a declaratory judgment. 84. Efforts were made to settle this action. See Def. Ex. L. 85. 47 of the deficiencies cited in the 9 reports that instigated the prior action in this Court concerned throttle cable routing; 55 of these deficiencies concerned loose speedo or tach cable nuts; 6 of the deficiencies concerned custom parts (Pl. Ex. 8, see also Def. Ex. M). 86. Mr. Zegarek explained that the constant interchange of custom parts on display motorcycles in the plaintiff's showroom caused these particular deficiencies (82-83, 366, 402), which account for over 50% of the total cited deficiencies because: (a) The use of custom parts on Honda motorcycles necessitates a change in the throttle cable routing required by American Honda (51, 58, 405); (b) the display motorcycles must have loose nuts in order to accommodate heavy customer traffic and to meet customer demands to view custom parts (60-62). 87. 
Almost 95% of the custom parts sold by the plaintiff as incidents to the sale of Honda motorcycles are not manufactured by Honda (28, 32). 88. The plaintiff acquires its non-Honda custom parts from Japanese sources that also act as a supplier for the defendant (23-24). 89. In 1978 and 1979, the plaintiff had $2 million in gross sales. $1 million of this figure resulted from the sale of Honda motorcycles; the other $1 million derived from the sale of non-Honda accessories (33). 91. Mr. Zegarek testified that the plaintiff's extensive implementation of custom non-Honda parts on the Honda motorcycles sold have affected the claims for warranty he makes to the defendant, even when a proper warranty claim has been submitted (41, 63-66, 187-92, 195, 247, 453). *814 92. American Honda policy provides that if a non-Honda part "affects" the warranted Honda part, the warranty does not cover that part (456, 779, 823, Def. Ex. X). 93. Mr. Zegarek also testified that representatives of the defendant such as Mr. Tardiff (34-36, 451), Mr. Gaudreau (38), Mr. Bigerton (39, 451), Mr. Clark (39), Mr. Dean (43), Mr. Farrington (47), and Mr. Sabatini (49) expressed varying degrees of displeasure with the plaintiff's accent on non-Honda accessories. Phil Zegarek, Morris Zegarek's son and a stockholder of Honda of Mineola, echoed Mr. Zegarek's testimony in this regard in his deposition (Pl. Ex. 34). 94. In 1974, defendant's counsel wrote to Mr. Zegarek explicitly stating that "American Honda has not and will not attempt to control or limit products sold by Honda of Mineola" (450, Def. Ex. T). 95. Similarly, Mr. Farrington (806) and Mr. Sabatini (822) testified that they never told the plaintiff that it could not sell non-Honda parts and accessories. CONCLUSIONS OF LAW The plaintiff's complaint challenges the propriety of the defendant's termination of its Honda franchise on two legal grounds. The first ground has its genesis in the defendant's alleged disdain for the plaintiff's extensive dealings in custom motorcycle parts manufactured by the defendant's competitors. Thus, the plaintiff initially complains that its termination as a Honda dealer is motivated by anticompetitive sentiments, which behavior constitutes a violation of sections 1 and 2 of the Sherman Anti-Trust Act, 15 U.S.C. §§ 1, 2 (1976). On this cause of action, the plaintiff seeks treble damages. Similarly, the plaintiff asserts that the defendant acted in bad faith and without cause when it terminated the plaintiff's Honda dealership. Thus, the plaintiff claims that the defendant contravened N.Y. Gen.Bus.Law § 197 (McKinney Supp. 1979-80)[9] which forbids the termination of any contract for the sale of new motor vehicles between a distributor and a dealer "except for cause." On this cause of action, the plaintiff seeks permanent injunctive relief. At this stage of the instant proceedings, however, the Court need only be concerned with the plaintiff's motion for a preliminary injunction. The standards governing such a motion explicitly have been set down most recently by the Second Circuit in Jack Kahn Music Co., Inc. v. Baldwin Piano & Organ Co., 604 F.2d 755 (2d Cir. 1979) where the court stated: Preliminary injunctive relief in this Circuit calls for a showing of (a) irreparable harm and (b) either (1) likelihood of success on the merits or (2) sufficiently serious questions going to the merits to make them a fair ground for litigation and a balance of hardships tipping decidedly toward the party requesting the preliminary relief. 
See also Seaboard World Airlines v. Tiger International, Inc., 600 F.2d 355 (2d Cir. 1979); Sonesta International Hotels Corp. v. Wellington Associates, 483 F.2d 247 (2d Cir. 1973); Mulligan, Foreword—Preliminary Injunctions in the Second Circuit, 43 Brooklyn L.Rev. 831 (1977). Having announced the Court's guide for the disposition of the instant motion, the necessary next step is to ascertain whether the facts *815 found by this Court in this matter satisfy the governing standard. 1. Irreparable Harm. The "irreparable harm" component of the Second Circuit's preliminary injunction test does not present a formidable hurdle to the plaintiff in this motion. In fact, in its papers, the defendant has offered only token opposition to the plaintiff's assertion that an eventual damage award in this action would be of minimal consolation to the plaintiff in the absence of immediate equitable intervention by this Court. The plaintiff details several elements of irreparable injury that it would suffer if this Court did not grant it preliminary injunctive relief. These include the destruction of plaintiff's dealership which provides the sole source of support for Mr. Zegarek and his family (F. 9),[9a] the eventual bankruptcy of the plaintiff's business, and the loss of the customers, location, reputation and goodwill that have contributed to the plaintiff's success (See F. 12). Finally, the plaintiff contends that Mr. Zegarek's advanced age of 58 years (F. 10) presents an almost insurmountable obstacle to his cultivation of a new business. A review of the relevant case law in this circuit indicates to this Court that the threatened destruction of a business fits within the definition of irreparable harm. See Jackson Dairy, Inc. v. H. P. Hood & Sons, Inc., 596 F.2d 70 (2d Cir. 1979); P. J. Grady, Inc. v. General Motors Corp., 472 F.Supp. 35 (E.D.N.Y.1979). Accordingly, the Court concludes that the plaintiff has made a showing of irreparable harm sufficient to sustain the grant of a preliminary injunction. 2. The Second Prong of the Preliminary Injunction Test A. General Considerations In its termination letter of November 21, 1979 to the plaintiff, the defendant set forth 4 specific grounds for discontinuing the plaintiff's Honda dealership (F. 34). Specifically, the defendant cited the plaintiff with (1) the submission of false claims (2) flagrant disregard for the laws of the Village of Mineola, (3) operating its business out of an unauthorized location, and (4) noncompliance with American Honda motorcycle set up procedures (See F. 71, 72). As noted earlier, the plaintiff's instant attack on the defendant's termination of its dealership questions both the propriety of the above cited bases for the discontinuance of the plaintiff's status, and the motive underlying the termination. If, however, on this motion for a preliminary injunction, the Court finds it more likely than not that the defendant discontinued the plaintiff as a Honda dealer for "cause" or that the propriety of the termination of the plaintiff's franchise does not present questions of a sufficiently serious nature as to make them a fair ground for litigation, the defendant's motive becomes irrelevant. Thus, this Court initially will review the sufficiency of the business reasons cited in the November 21, 1979 termination letter before any consideration of the defendant's motive will be entertained. 
With respect to the set up issue, however, since the Court believes it impossible to separate the discussion of this alleged contractual violation from the plaintiff's attack on the defendant's motives, the two will be handled at the same time. Before embarking on an analysis of the defendant's causal bases for termination of the plaintiff's dealership, a brief review of N.Y.Gen. Bus Law § 197 (McKinney Supp. 1979-80) is necessary to delineate the parameters of the Court's impending inquiry. N.Y.Gen. Bus. Law § 197 (McKinney Supp. 1979-80) was enacted in order to outlaw overreaching by large motor vehicle distributors and to "establish guidelines for enforcement of a fair and equitable balance" between the motor vehicle dealer and its distributor. 1970 N.Y. Laws ch. 582 § 1. Thus, N.Y.Gen. Bus. Law § 197 (McKinney Supp. 1979-80) permits a court asked to implement its mandate to scrutinize the reasons cited by motor vehicle distributors *816 when a termination of a motor vehicle dealer occurs. Semmes Motors, Inc. v. Ford Motor Co., 429 F.2d 1197 (2d Cir. 1970); P. J. Grady, Inc. v. General Motors Corp., 472 F.Supp. 35 (E.D.N.Y.1979); Niagara Mohawk Power v. Graver Tank & Mfg., 470 F.Supp. 1308 (N.D.N.Y.1979). The statutory language of § 197, however, provides little guidance to the reviewing court. The statute merely states that the court must be satisfied that "cause" supports the distributor's decision to terminate a dealer. Fortunately, however, a legislative pronouncement accompanying a 1970 revision of § 197 offers further direction. That language exhibits a legislative intent to have § 197 provide small motor vehicle dealers "judicial relief from unfair and inequitable practices affecting the public interest." 1970 N.Y. Laws ch. 582 § 1. Thus, it is safe to say that principles of fundamental fairness must guide the application of § 197 to the facts of this case. In this light, the Court turns to undertake that task. B. The False Claims Issue Paragraph 7 of Article XIII of the Honda Motorcycle Sales Agreement (Pl. Ex. 2) states that American Honda may terminate a dealer because of: Submission by Dealer of any false or fraudulent application, report or statement, or false or fraudulent claim for reimbursement, refund or credit, including, but not by way of limitation, false and fraudulent warranty claims or the sale of any demonstrator or used Honda motorcycles as new and unused. The defendant asserts that Honda of Mineola consistently engaged in the conduct condemned by the above quoted provision. The testimony of Michael Karlin, a former service manager for the plaintiff (F. 38), provides the central source for this assertion.[10] Mr. Karlin testified before this Court that the plaintiff submitted false shipping and false warranty claims to the defendant (F. 38, 39, 49, 50). Mr. Zegarek's testimony, however, directly contraverted that of Mr. Karlin (F. 46, 47, 51, 52), as do the depositions of other individuals who have been employed by the plaintiff (F. 48). The Court also has observed that Mr. Karlin's testimony was not corroborated by any other witness who testified in this Court during the lengthy hearing on this matter.[11] Nor was Mr. Karlin's testimony substantially documented by objective evidence (F. 40). Moreover, several representatives of the defendant testified that at no time did they advise the plaintiff of any belief on their part that false claims had been submitted by the plaintiff (F. 53). Because of the extreme potential impact of Mr. 
Karlin's testimony, the Court must carefully examine Mr. Karlin's credibility in order to ascertain whether this Court will afford his testimony the significance the defendant suggests it deserves in its papers. Such an evaluation of credibility uniquely is within the province of this Court as the trier of fact on this motion. General Electric Co. v. Parr Electric Co., 21 F.Supp. 471 (E.D.N.Y. 1937); Rattancraft of California v. United States, 340 F.Supp. 978 (U.S. Cust.Ct. 1972). In assessing the credibility of Mr. Karlin, the Court has observed that Mr. Karlin's cross examination unearthed a number of factors that cast substantial doubt over the reliability of his testimony in this matter. The first of these evolves from the Court's perception of Mr. Karlin's approach to his job at Honda of Mineola and at other establishments as irresponsible (F. 41, 43). The Court also has observed that Mr. Karlin consistently has been plagued with financial difficulties (F. 42, 44). *817 Similarly, the Court finds the reliability of Mr. Karlin undermined by Mr. Zegarek's testimony that he fired Mr. Karlin (F. 45). Since the bias of a witness can be considered by the trier of the facts in its determination of credibility, United States v. Blackwood, 456 F.2d 526 (2d Cir. 1972); United States v. Haggett, 438 F.2d 396 (2d Cir. 1971), the Court, in so doing, determines that the apparent dismissal of Mr. Karlin from the plaintiff's employ casts a cloud over the reliability of his damaging testimony in this action. Thus, the Court concludes that, since Mr. Karlin's testimony aptly can be described as a mode of seeking revenge for his dismissal from Honda of Mineola, the impact of such tainted testimony can not be deemed by this Court to establish a likelihood of the existence of the plaintiff's systematic submission of false warranty and false shipping claim to the defendant. The defendant also characterizes the plaintiff's submission after a 1978 burglary of claims on 3 identical items to the defendant and to plaintiff's insurer (F. 55) as gross fraud sufficient to support termination of the plaintiff's Honda dealership. In this regard, the Court has taken note of the plaintiff's explanation that its conduct in this incident resulted from mere inadvertence on the part of one of its employees (F. 56). The Court finds Mr. Zegarek's explanation to be a plausible one. And, since the Court declines to characterize inadvertence as a sincere basis for termination of a motor vehicle dealership, the Court holds that this predicate for termination does not constitute "cause" as contemplated by N.Y.Gen. Bus. Law § 197 (McKinney Supp. 1979-80). During the course of the hearing before this Court, the defendant also directed this Court's attention to other fraudulent practices allegedly engaged in by the plaintiff.[12] These included the sale of motorcycles in crates, a practice prohibited by American Honda. The evidence introduced with respect to these other instances of fraud, however, convinces this Court that the cited conduct evinces impropriety of an inconsequential nature, possibly warranting a reprimand, but surely not termination of the plaintiff's dealership. Therefore, for the above stated reasons, the Court finds that on the issue of the submission of false claims, the plaintiff has established a probability of success on the merits. C. The Plaintiff's Relations With The Village of Mineola As a second ground for termination, the defendant cites the plaintiff's fiery relations with the Village of Mineola. 
See F. 57, 58, 59, 68. Contending that the conduct of Mr. Zegarek provides the overriding cause for the hostile state of affairs between Mineola's residents and public officials and the plaintiff, the defendant argues that such a condition violates the Honda Dealer's Agreement (Pl. Ex. 2), and that it constitutes an unsatisfactory impairment to Honda's reputation in the Mineola community. Contrary to the defendant's depiction of the reasons for the hostility between the plaintiff and the representatives of Mineola, the evidence educed at the hearing before this Court tended to show that much of Mineola's dismay with the plaintiff's operations stems from the apparent incompatibility of a motorcycle business and a neighboring residential community. The testimony of Mineola's Mayor and of Mr. Zegarek bears this out. Both of these witnesses observed that the animosity of the Mineola residents toward the plaintiff evolves primarily from their fears of the violent, black leather jacket image of the motorcycle consumer, and of the noise and activity attendant to the plaintiff's business (F. 69, 70). Moreover, it appears that the fears of the Mineola residents have been translated into a legal campaign by the public officials of Mineola to make the plaintiff's existence in Mineola a difficult one.[13] Since 1971, the *818 Village has cited the plaintiff for numerous violations of local ordinances arising from the operation of its facilities at 336 Jericho Turnpike, 344 Jericho Turnpike and 255 Jericho Turnpike; the plaintiff has prevailed in the overwhelming majority of these cases that have reached the local courts (F. 60, 61, 62, 63, 64, 66). Thus, the facts surrounding the plaintiff's difficulties with the Mineola community shed an explanatory light on their existence. It is this Court's opinion that, in this light, the persuasiveness of the defendant's claim that the plaintiff's relations with the Village of Mineola constitute cause for termination of the plaintiff's dealership diminishes dramatically. As a result, this Court concludes that the plaintiff has established a probability of success on its refutation of the defendant's claim that its relations with the Mineola community is cause for termination of its dealership. In reaching this conclusion, the Court has taken into account the fact that Mr. Zegarek's some-what abrasive independence has occasioned some of his problems with the Mineola community. But, the Court also believes that Mr. Zegarek's approach to community relations is but a reaction to the community's bitter attitude towards him and his business. Sitting in equity, this Court cannot countenance the termination at this time of a man's business under such attenuated circumstances. D. The Plaintiff's Use of 344 Jericho Turnpike Section 1 of Article III of The Honda Motorcycle Sales Agreement (Pl. Ex. 
2) provides: To provide proper Honda Motorcycle and Honda Motorcycle Parts representation commensurate with the reputation and goodwill attached to the name "Honda" and to facilitate the proper sales and servicing of Honda Motorcycles and Honda Motorcycle Parts, Dealer will maintain business premises satisfactory to Distributor with respect to appearance, location, size of buildings and adequate layout as well as equipment, showroom, office, storage space, workshop and service operation; the whole which shall be adequate for the sale and service of Honda Motorcycles and Honda Motorcycle Parts in proportion to the number of Honda Motorcycles and Honda Motorcycle Parts that may reasonably be expected to be sold and serviced by Dealer in Dealer's Primary Area of Responsibility, and possible expansion of the premises to handle any foreseeable future requirements. Furthermore, Section 3 of Article III of the agreement provides: Once a Dealer has established its business facilities or location mutually satisfactory to Dealer and Distributor, Dealer will not move to or establish a new or different location, branch sales office or branch service establishment without first obtaining a written approval of Distributor. In its termination letter of November 21, 1979, the defendant cited a violation of the above quoted sections as a ground for termination of the plaintiff's dealership. This alleged violation stems from the plaintiff's transfer of its entire operation from 336 Jericho Turnpike to 344 Jericho Turnpike after the September 20, 1979 fire (F. 8, 20). The defendant claims that 344 Jericho Turnpike is not a satisfactory motorcycle sales and service facility, and that the plaintiff moved to and established its business at 344 Jericho Turnpike without receiving the required written approval from the defendant. Thus, the defendant views the plaintiff's transfer to 344 Jericho Turnpike to be an act in derogation of the controlling agreement. In opposition, the plaintiff asks this Court to extend to it an equitable license to operate out of 344 Jericho Turnpike until it can return to its resurrected facility at 336 Jericho Turnpike. Thus, the plaintiff argues that it began full scale operations out of 344 Jericho Turnpike solely out of necessity due to the destruction of the 336 Jericho Turnpike facility. *819 In reviewing these contentions, the Court must keep in mind that, on this motion for a preliminary injunction, it sits in equity and must be cognizant of the need, when circumstances so dictate, to take appropriate interim action. Triebwasser & Katz v. American Telephone & Telegraph Co., 535 F.2d 1356 (2d Cir. 1976); Hamilton Watch Co. v. Benrus Watch Co., 206 F.2d 738 (2d Cir. 1953). This Court took such an approach to the issue of the plaintiff's temporary use of 344 Jericho Turnpike on December 3, 1979 when it recognized the plaintiff's entitlement in equity "to some reasonable length of time to recover [its] business and to put that business back into the position it previously was in ... [before] the fire completely destroyed the [336 Jericho Turnpike facility]." Transcript of December 3, 1979 Proceedings Before This Court, p. 19. After reviewing the testimony elicited at the lengthy hearing before this Court, the Court is of the opinion that a similar approach must govern the Court's disposition of this issue on this motion. Several factors militate in favor of such a finding at this time. The first of these is Mr. 
Zegarek's substantial efforts in good faith to conform 344 Jericho Turnpike's specifications to Honda's service and sales requirements (F. 21). On the basis of the testimony on this score considered by this Court, it appears that these improvements have made 344 Jericho Turnpike's facilities comparable to those of other Honda dealers on Long Island (F. 22, 23). While improving 344 Jericho Turnpike, the plaintiff has participated in efforts to rebuild 336 Jericho Turnpike (F. 63). Since the plaintiff procured insurance on the 336 Jericho Turnpike facility (F. 19), and has always maintained a good financial record (F. 11), the Court concludes that the resurection of 336 Jericho Turnpike more closely resembles a reality than an idle hope. Under this circumstance, the necessity of interim operations at 344 Jericho Turnpike for the plaintiff intensifies. If the plaintiff cannot operate temporarily while 336 Jericho Turnpike is being reconstructed, a strong possibility exists that the customers, goodwill and reputation of the plaintiff will diminish to such a level as to make the resumption of business at 336 Jericho Turnpike when it is rebuilt economically unfeasible. Since the Court's analysis of the previously discussed bases for termination makes it more likely than not that the plaintiff eventually will prevail on the injunctive phase of this action, this Court concludes that it would be manifestly unfair to cut off the plaintiff's chance to rebuild a viable business to its prior confines. Another factor favoring the plaintiff is its prior use of 344 Jericho Turnpike in connection with its Honda operation (F. 13, 15). This Court concludes that the testimony it heard supports a finding that representatives of the defendant familiar with the plaintiff's operation knew of the plaintiff's previous use of 344 Jericho Turnpike (F. 18), although the extent of such use is disputed (F. 16). Moreover, the plaintiff's lease for 344 Jericho Turnpike contains a provision relating to motorcycle sales, storage and service (F. 14). Thus, this Court concludes that, as a result of the tragic fire at 336 Jericho Turnpike, the plaintiff's temporary transfer of operations to 344 Jericho Turnpike, a location it already utilized, can be sanctioned as a necessary reaction to an emergency, a reaction to which the defendant reasonably should be expected to adjust. This Court understands the importance of motorcycle safety which the defendant points to as its primary reason for objecting to the plaintiff's transfer to 344 Jericho Turnpike. In dramatic terms, however, Mr. Zegarek professed similar concerns (F. 73). Accordingly, in not characterizing the plaintiff's interim use of 344 Jericho Turnpike as "cause" for termination of the plaintiff's dealership for the limited purpose of this motion for a preliminary injunction, this Court is in no way expressing approval of unsafe motorcycle operations. The Court is convinced that the plaintiff's interim operation out of 344 Jericho Turnpike will evince the high regard for safety to which Mr. Zegarek testified. And, the Court expects American Honda, as the national distributor *820 of Honda motorcycles, to ensure that the plaintiff's operations at 344 Jericho Turnpike will conform to reasonable facility requirements. Therefore, with this caveat, the Court concludes that the plaintiff has established a probability of success on the issue of the propriety of its temporary use of 344 Jericho Turnpike for its Honda dealership in Mineola, N. Y. E. 
Motorcycle Set Up at Honda of Mineola Paragraph 12 of Article XIII of the Honda Motorcycle Sales Agreement (Pl. Ex. 2) states that a dealer may be terminated because of: Failure of Dealer to perform the required set-up and pre-delivery inspection, repairs and services and procedural requirements relating thereto. Using this section as a sword in this action, the defendant contends that the magnitude of the plaintiff's set up deficiencies constitutes cause for termination of the plaintiff's dealership. Plaintiff's Exhibit 8 provides the prime objective indicia of the plaintiff's set up record. This Exhibit details 187 separate deficiencies on 114 display motorcycles (F. 76, 77). The defendant introduced evidence attempting to prove that this number of deficiencies placed the plaintiff as the worst dealer in the Honda chain with respect to set up (F. 780),[14] and that the plaintiff's manner of set up has been a constant headache to American Honda (F. 80, 81, 82, 83, 84). The plaintiff has countered this attack on its set up record by arguing that many of the cited deficiencies, in fact, are not deficiencies at all. Instead, the plaintiff asserts that 108, or over 50% of the cited deficiencies, are the result of its extensive trafficking in custom non-Honda parts, see F. 87, 88, 89, 90, at its Honda dealership (F. 85, 86). The plaintiff also asks this Court to observe that, of the remaining 79 deficiencies, 22 of these involve the minor mishap of dirty motorcycle seats. Finally, and most importantly, the plaintiff contends that the remaining 57 deficiencies are de minimus when measured against the large number of motorcycles sold by the plaintiff. See F. 12. Therefore, the plaintiff asks this Court to conclude that this insignificant number of deficiencies does not provide a causal basis for termination of its dealership. Acceptance of this argument, however, would require this Court to adopt the predicate upon which the argument rests. That predicate includes a serious attack on the bona fides of the defendant. Accordingly, this Court concludes that the defendant's attack on the plaintiff's manner of motorcycle set up, and the plaintiff's antitrust claim which questions the motive behind such an attack are inextricably interwoven. This being so, the Court will treat these two issues as an entity. Essentially, plaintiff's antitrust claim alleges that the defendant's anticompetitive motive has resulted in the termination of its dealership. Simply stated, the plaintiff asserts that its extensive dealings in non-Honda parts and accessories and its resultant eschewal of Honda parts and accessories constitute the true reason for its termination. The plaintiff's dealings in non-Honda parts and accessories make up over 50% of its gross sales (F. 89, 90). At the hearing before this Court on the instant motion, the plaintiff introduced a great deal of evidence designed to prove that anticompetitive motives led to its termination, and that the other claimed maladies in its operation merely served as a screen to shield the defendant's illegal motive. In this regard, Mr. Zegarek pointed to nagging warranty problems with the defendant allegedly tied to the mere implementation of non-Honda parts in Honda motorcycles (F. 91). Mr. Zegarek and his son[15] also testified that representatives of *821 the defendant had expressed varying degrees of displeasure with the plaintiff's massive foray into the sale of non-Honda parts (F. 93). 
Understandably, the defendant vehemently has denied the plaintiff's antitrust accusations. The evidence presented in this connection has taken the form both of documentary evidence, albeit rather ancient (F. 94), and of testimony from representatives of the defendant (F. 95). After careful consideration of this evidence in contradistinction to that presented by the plaintiff, the Court declines to opine as to whether the plaintiff or the defendant has demonstrated a probability of success on the merits of either the set up or the antitrust issue. This conclusion, however, does not complete the Court's preliminary injunction inquiry on these intertwined issues. This is so because the Court next must consider whether these issues present sufficiently serious questions concerning the defendant's motive to make them a fair ground for litigation, and whether the balance of hardships tips decidedly toward the party requesting the preliminary relief. In Jacobson & Co., Inc. v. Armstrong Cork Co., 548 F.2d 438 (2d Cir. 1977), the Second Circuit explained that, in an antitrust context, the "sufficiently serious question" standard is satisfied if the plaintiff raises substantial questions regarding the existence of antitrust violations. Id. at 443. See also Hamilton Watch Co. v. Benrus Watch Co., 206 F.2d 738 (2d Cir. 1953). Therefore, in order to satisfy this standard in this matter, the Court must acknowledge at least the inference that the plaintiff's dealership termination resulted because of its competition with the defendant in the accessory market. See 548 F.2d at 444. It is this Court's opinion that such an anticompetitive inference is cognizable from the facts presented to this Court. In reaching this conclusion, the Court relied heavily upon its impression that the evidence set forth by the plaintiff at the lengthy hearing before this Court raised substantial questions regarding the bona fides of the defendant with respect to its relations with the plaintiff. Specifically, the Court deems the possibility of an unarticulated but subtle design of the defendant to lessen the plaintiff's emphasis on the sale and installation of non-Honda parts to be of a sufficiently serious nature as to warrant litigation over it. The ongoing dispute over set up attests to this possibility. In this regard, the evidence concerning the defendant's reaction to the plaintiff's forced transfer of its operations to 344 Jericho Turnpike after the September 20, 1979 fire must be considered. While Mr. Zegarek testified that he received verbal approval from certain representatives of the defendant concerning his transfer of the plaintiff's operation to 344 Jericho Turnpike (F. 27, 28), these representatives testified that they never officially recommended that the 344 Jericho Turnpike location be approved. Notwithstanding this conflicting testimony, it is clear to this Court that for the six weeks preceding the formal termination of the plaintiff's dealership, the defendant left the plaintiff in limbo by cutting off access to all essential Honda parts and tools (F. 24, 31), without clearly informing the plaintiff of its status (F. 25, 26). This conduct severely injured the plaintiff's business (F. 32, 33). To this Court, such conduct evinces bad faith on the defendant's behalf toward a dealer with whom relations had been carried on for over 15 years. 
This mala fides tips the equitable scale in the plaintiff's favor, while serving to raise serious questions as to the motive underlying the termination of the plaintiff's dealership. As for the "balance of hardships" prong of the test, this Court's prior conclusion that the plaintiff will sustain irreparable harm in the absence of equitable intervention by this Court goes a long way towards satisfying this requirement. New York v. Nuclear Regulatory Commission, *822 550 F.2d 745 (2d Cir. 1977); Triebwasser & Katz v. American Telephone & Telegraph Co., 535 F.2d 1356 (2d Cir. 1976). Moreover, this Court is convinced that this potential irreparable injury to the plaintiff is substantially greater than the hardship the defendant would suffer by being required to continue the plaintiff as a Honda dealer during the pendency of a preliminary injunction. In short, the Court finds that the balance of hardships in this matter tips decidedly toward the plaintiff who, in the absence of preliminary relief, would lose its business, its location, its reputation and its goodwill. Therefore, for the foregoing reasons: (1) The Court concludes that the plaintiff has made a sufficient showing of irreparable harm. (2) The Court concludes that the plaintiff has demonstrated a likelihood of success on the false claims issue, on the issue of the plaintiff's relations with the Village of Mineola and on the issue of the plaintiff's use of 344 Jericho Turnpike. (3) The Court concludes that the plaintiff's antitrust claim and the set up issue present sufficiently serious questions going to the merits as to make them a fair ground for litigation and that the balance of hardships tips decidedly in the plaintiff's favor. Accordingly, the plaintiff's motion for a preliminary injunction is hereby GRANTED in all respects. It is SO ORDERED. NOTES [*] Throughout the course of this Decision and Order, the numbers within parentheses denote reference to the transcript of the Hearing held in this matter. [**] "Pl. Ex." denotes reference to the exhibits introduced into evidence by the plaintiff during the Hearing before this Court. [1] Most of the sales that took place at 344 Jericho Turnpike were for wholesale and overseas customers (327). [2] 344 Jericho Turnpike presently has 1500 square feet of showroom space on its first level. There will be 500-800 square feet of additional showroom space on the upstairs level (98). [***] "Def. Ex." denotes reference to the Exhibits introduced into evidence by the defendant during the Hearing before this Court. [3] Mr. Zegarek introduced evidence tending to prove that, during Mr. Karlin's period of employment at Honda of Mineola, a minimal number of shipping claims actually were submitted to American Honda and that this number fell far below the plaintiff's norm (888). Through its interpretation of the plaintiff's monthly shipping claim statistics over an extended period of time, the defendant attempted to rebut Mr. Zegarek's testimony (Def. Ex. AB, AD, AE). [4] Although they have not been established to the satisfaction of this Court, several other claimed incidents of fraud allegedly committed by the plaintiff should be mentioned in passing. Several of these involve the plaintiff's alleged sale of motorcycles in crates, see Def. Ex. H, a practice prohibited by American Honda. Although evidence concerning such practices was not prominently displayed at the hearing before this Court, one such sale appears to have been the subject of Small Claims Court litigation (Def. Ex. V). 
[5] To corroborate Mr. Keller's testimony, the defendant introduced Def. Ex. AH consisting of documents containing customer complaints concerning Honda of Mineola. Of the 46 such documents found within Def. Ex. AH, however, a number of these documents were duplicates. [6] These statistics reflect this Court's plenary review of the 9 deficiency reports. [7] Mr. Karlin testified that the information contained within these reports was accurate (501). [8] Mr. Zegarek testified that at the December 13, 1978 meeting, he informed defendant's counsel that for the plaintiff to remove accessories from its motorcycles would be a costly and detrimental way for Honda of Mineola to merchandise its accessory lines (486). [9] The plaintiff also has based its second claim on 15 U.S.C. § 1222 (1976) which provides: An automobile dealer may bring suit against any automobile manufacturer engaged in commerce, in any district court of the United States in the district in which said manufacturer resides, or is found, or has an agent, without respect to the amount in controversy, and shall recover the damages by him sustained and the cost of suit by reason of the failure of said automobile manufacturer from and after August 8, 1956 to act in good faith in performing or complying with any of the terms or provisions of the franchise, or in terminating, canceling, or not renewing the franchise with said dealer: Provided, That in any such suit the manufacturer shall not be barred from asserting in defense of any such action the failure of the dealer to act in good faith. This section, however, has been held not to apply to motorcycle dealers. Small Arms Co., Inc. v. The Brooklyn Cycle, Inc., 408 F.Supp. 707 (E.D.N.Y.1976). As a result, this Court will not employ it in the disposition of the instant motion. [9a] "F" denotes reference to the Court's Findings of Fact in this matter. [10] Despite the importance of Mr. Karlin's testimony, Mr. Karlin's appearance as a witness for the defendant came as a complete surprise to plaintiff's counsel (497). [11] The defendant has attempted to corroborate Mr. Karlin's testimony with affidavits of other employees of the plaintiff. Since the deponents of these affidavits were not subject to cross examination, the Court has exercised its discretion so as not to consider these affidavits. [12] See note 4 supra. [13] This is evidenced by the passage of laws designed to impede the plaintiff's business (F. 65). Some of these laws have been found to be outside the jurisdiction of the Village (F. 67). [14] Plaintiff's counsel's cross examination of Mr. Keller, a witness called by the defendant, substantially questioned the basis for Mr. Keller's conclusion that the plaintiff's deficiency record placed it as the worst in the Honda chain (F. 79). [15] Mr. Zegarek's son did not testify before this Court; he did, however, submit to a deposition at which a representative of the defendant was present (Pl. Ex. 34).
'''
Created on 20 Feb 2014

@author: siva

Expects one command-line argument: the path to a folder of per-domain schema
files. Records each entity's type, then prints the entities ordered by type
priority, each followed by the relation lines listed under it.
'''
import os
import sys

schema_folder = sys.argv[1]

types = {}   # entity -> entity type ("main", "mediator", "foreign", ...)
schema = {}  # entity -> set of relation lines listed under that entity

for domain in os.listdir(schema_folder):
    with open(os.path.join(schema_folder, domain)) as fpt:
        entity = ""
        for line in fpt:
            line = line.rstrip()
            # Skip blank lines and comments.
            if line == '' or line[0] == '#':
                continue
            if line[0] != '\t':
                # Unindented line: "<entity>\t<entity_type>".
                entity, entity_type = line.split('\t')
                if "foreign" not in entity_type:
                    # Entity type is main/mediator belonging to the domain.
                    types[entity] = entity_type
                elif entity not in types:
                    # Record a foreign type only if no main type was seen yet.
                    types[entity] = entity_type
            else:
                # Indented line: a relation belonging to the current entity.
                if entity not in schema:
                    schema[entity] = set()
                schema[entity].add(line)

key_priority = {"main": 0, "mediator": 1, "foreign": 2,
                "foreign_mediator": 3, "main_extended": 4}

# Print each entity and its type, ordered by type priority, followed by the
# relations recorded for that entity (blank line between entities).
for entity, entity_type in sorted(types.items(), key=lambda x: key_priority[x[1]]):
    print("%s\t%s" % (entity, entity_type))
    if entity not in schema:
        print()
        continue
    for relation in schema[entity]:
        print(relation)
    print()
UK PM Theresa May fired defence secretary Gavin Williamson over the Huawei leak, following an inquiry into a leak from a top-level National Security Council meeting. The inquiry followed reports of a plan to allow Huawei limited access to help build the UK's new 5G network. The PM lost confidence in his ability to serve. Williamson denied leaking the information. He had held the position since 2017, and Penny Mordaunt will now take on the role.
But John Seddon presents the issue in such a dismissive way that rather than enlightening the reader about the conundrum, he just breezes past it as if it didn’t exist at all – and also leaves you wondering what role in his world there is for a member of the public who wants a GP service that works for them, which to their mind includes choice over when to see the GP. Is the idea of people wanting choice over when they see their GP so risible and trivial that it deserves such dismissive treatment? Or take another example: “The assumption (for which there is no empirical evidence) is that people have different skill sets”. Really? No-one in the public services varies in what skills they have? And there I was thinking that my IT skills are much higher than my plumbing ones, with the dripping pipe being a good piece of evidence. Mind you, I am ex-public sector. There is a good point buried in here, about the way in which public service jobs too often are very specialised. Public sector workers are expected to specialise in very narrow tasks rather than to have the broader problem-solving skills which would better reflect the messy actual demands on public services. Yet although the point about needing greater and broader skills at the front end of public services is returned to elsewhere, the generality expressed with utter confidence that people don’t have different skill sets obscures rather than helps such a discussion.
What say should the public get in public services?
Moreover, there’s an illiberal thread running through the book, as it’s not just the idea that people might want choice that it dismisses. Also in the long list of ideas breezily dismissed, for example, is the idea that the public should be asked what matters to them when it comes to setting targets or objectives for public services. “Such surveys can only yield unreliable data and invalid conclusions,” says Seddon in his sweeping dismissal of the idea that a public service perhaps should in part worry about what service the public wants. In doing so, he again leaves behind an important debate. In this case, two of the crime-related ideas he derides are that the public is concerned about fear of crime – and so tackling that might be advisable – and also that the public likes to see a police presence, and therefore that might be something to try to balance with other calls on police time. Of course, dealing with fear of crime and putting police where the public can see them may well detract from using police resources to stop crime or catch criminals. But the questions about how you reconcile those competing demands do not get a look-in, as John Seddon instead discards as utterly flawed the idea that the public’s views on priorities matter or that fear of crime might be a significant problem.
Failure demand
Which is all very odd, not only in its own terms, but also when you turn to other parts of the book, where he argues eloquently and convincingly that the best way to understand public services and to improve them is to focus on the overall experience of individuals and how they get treated by different parts of the system. That leads to the very useful insight of ‘failure demand’, namely how much of the work done by public services is caused by the failure to deal with an issue properly at an earlier stage. As a result, apparently efficient services are really nothing of the sort.
A call centre that deals with a large volume of calls, at a low cost per call, may look a success – until you then realise how many of the calls are generated by the failure of an earlier call to resolve an issue. Repeatedly dealing quickly with progress chasing calls isn’t a sign of efficiency; it’s a sign of failure that costs more than a longer call which results in an issue being sorted first time round. Hence the point mentioned above about needing broader problem solving skills at the front line of public services rather than niche specialisms which shuffle people around repeatedly without anyone quite getting to grips with the underlying issues.
When is a target not a target?
That in turn leads to a useful discussion about the problem with traditional targets in the public sector. The distinction between a target (bad) and an outcome measure (good) can appear to a novice rather like a medieval theological debate at times. Whilst Seddon and his supporters are often very critical of targets and dismissive, verging on rude, about the main proponents of targets, their own preferred approaches still involve turning things into numbers where the numbers moving in one direction is bad and in another is good. For example, when I heard John Seddon speak about his approach, he started off with an example of a much improved local council housing service – and he led with two numbers to illustrate how improved it was, namely lower costs and shorter waiting times. Yet on the same occasion he was also very hostile to the idea that numerical targets are useful for improving public services. The book sets out an approach for using numbers that help you understand what is really going on in a system and which leave people in the public service free to work out the best way of providing a service, rather than micro-managing their choice for them. But ultimately the book didn’t persuade me that in practice – especially given the political and media scrutiny and pressure around public services – such measures wouldn’t end up being that different from a good target. They would certainly be better than a bad target, but if the measure tells you something useful about what is happening in a public service, and people are keen to see the public service improve, then it ends up morphing into a target, even if only at the behest of media stories covering the public service. Indeed, Seddon quotes W Edwards Deming approvingly saying, “A system must have an aim. Without an aim, there is no system”. But don’t tell him that sounds to you rather like saying you should have a target… There is plenty of value in this book then, such as the importance of integrating policy making with administration so that the policies that are set are capable of sensible administration (echoing the point made in Conundrum), even if the author’s sometimes rather implausible sweeping claims and regular dismissiveness about almost everyone else often obscure rather than illuminate.
This review is a little unfair, so I write this to respond to two criticisms and offer Mark a challenge. Firstly: “The assumption (for which there is no empirical evidence) is that people have different skill sets”. This is taken entirely out of context. I was describing how HMRC was assessing managers as being suited to ‘policy work’ versus ‘people management’, evidencing how far people will go in pursuing the wrong problem. There is no empirical justification for this distinction. It was a specific, not general, argument about skill sets. Secondly: Choice.
I have read David Boyle’s report. It was an attempt to seek ways to extend the idea of choice; the report didn’t question whether choice improved public services. The evidence for choice improving public services simply doesn’t exist. What research there is, is contrary to the view that choice improves services. See for examples here and here. It is more accurate to see David Boyle’s examples of public services failing to meet the needs of users as just that, failures, rather than as an argument for ‘choice’ as a solution. I gave Mark a pre-publication copy of our report commissioned by Locality, which illustrates why public services fail to meet users’ needs and charts a way of working that not only meets people’s needs but dramatically reduces costs. It will be launched in March. My challenge to Mark: Where is the evidence for choice improving public services?
Teslagrad Slated For a PS Vita Release This Summer
For anyone who’s followed my continued coverage of Rain Games’ recently released Teslagrad, you may have noticed that I occasionally dream of a PS Vita release that would allow me to take the gorgeous puzzle platformer with me on the go. My prayers have been answered, because today a PS Vita port has been announced for this summer. Teslagrad was a fantastic experience — so much so that I gave the PC version a 9.5 out of ten in my review — one that stars a young boy who finds himself in Tesla Tower, a steampunk-inspired vision of old Europe. Teslagrad has more than 100 hand-drawn environments that players can explore as they like, as long as they have the proper tools, which include control over magnetism and electricity. Between its visual storytelling, its soundtrack, its complex puzzles and its old school boss fights, the game is one I’d highly recommend. On the PS Vita, according to an interview with the game’s lead programmer and “code teslamancer” Fredrik Ludvigsen on the PlayStation Blog, the game made the transition quite smoothly, with memory usage cut down and the touch screen emulating a mouse pointer for menu selection. The buttons will work just as well as a joystick or gamepad does on the computer. One thing that they’re working on is reducing background loading times, which are more visible on consoles than on PC due to the system architecture. So optimization tests are being run now to make the experience as smooth as it was on the PC, where there were no loading times in the entire game from start to finish. On the Vita, though, Unity makes the transition well for textures and animations, as well as for memory. Mostly they just have to “shrink” everything, which will be easier on the Vita, with its 512MB of memory, than on the PS3 with its 256MB. The game was previously released for PC, Mac and Linux via Steam, Desura and other digital distributors for $9.99/£6.99/€8.99. While the upcoming releases have no set price point or exclusive content announced yet, you can at least expect the PS3 and Wii U versions during the spring of this year, and the PS Vita version during the summer.
Q: Does the default copy constructor in C++ do a shallow or a deep copy?
Does the default copy constructor do a shallow or a deep copy in C++? I am confused about whether it does a shallow or a deep copy: suppose v1 = {1, 2, 3} and I do v2 = v1; if I then set v2[0] = 1, the change is not reflected in v1, yet I have heard the default copy is shallow. Can anybody please explain?
A: It doesn't do either. It does a memberwise copy, i.e. it copies all the members of the class using their copy constructors. If those members have copy constructors that do a deep copy then you'll get a deep copy; if they do a shallow copy then you'll get a shallow copy; or they could do something else entirely. Deep copy and shallow copy are not C++ concepts; instead, C++ lets you do a deep or a shallow copy as you prefer.
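A minimal sketch illustrating the memberwise-copy point (the Shallow and Deep types below are hypothetical examples, not taken from the question): a raw-pointer member is copied as a pointer value, so both objects share the same underlying int, whereas a std::vector member copies its elements, which is why modifying v2 in the question did not affect v1.

#include <iostream>
#include <vector>

// Hypothetical types used only to show what the compiler-generated
// (default) copy constructor does member by member.
struct Shallow {
    int* data;           // raw pointer: only the pointer value is copied,
                         // so copies share the same underlying int
};

struct Deep {
    std::vector<int> v;  // std::vector's copy constructor copies the elements
};

int main() {
    int x = 1;
    Shallow s1{&x};
    Shallow s2 = s1;      // memberwise copy: s2.data == s1.data
    *s2.data = 42;        // also visible through s1.data

    Deep d1{{1, 2, 3}};
    Deep d2 = d1;         // memberwise copy: d2.v is an independent copy
    d2.v[0] = 99;         // d1.v[0] is still 1

    std::cout << *s1.data << ' ' << d1.v[0] << '\n';  // prints "42 1"
    return 0;
}

So whether the result looks "shallow" or "deep" depends entirely on how each member's own copy constructor behaves, which matches the behaviour observed with v2 = v1 above.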
Arc discharge-mediated disassembly of viral particles in water. In this study, we investigated the inactivation effects on murine norovirus (MNV-1), with and without purification, in water using a submerged arc-discharge plasma reactor (underwater arc), which produced a shockwave, UV light, reactive oxygen species and reactive nitrogen species. Underwater arc treatments of 3 and 6 Hz at 12 kV resulted in 2.6- and 4.2-log reductions in the virus titer of non-purified MNV-1 after 1 min of treatment, respectively. The reduction of purified MNV-1 was higher than that of non-purified MNV-1 after underwater arc treatment for all applied conditions (12 or 15 kV and 3 or 6 Hz). One of the viral capsid proteins (VP1) was not detectable after underwater arc treatment when its integrity was assessed by western blot analysis. Transmission electron microscopy analysis also revealed that MNV-1 particles were completely disassembled by the treatment. This study demonstrates that underwater arc treatment, which was capable of disintegrating the MNV-1 virion structure and the viral capsid protein, can be an effective disinfection process for the inactivation of water-borne noroviruses.
Progesterone enhances L-dopa-stimulated dopamine release from the caudate nucleus of freely behaving ovariectomized-estrogen-primed rats. In the present experiment we examined the effect of progesterone upon dopamine (DA) release induced by a direct infusion of unlabeled L-dihydroxyphenylalanine (L-DOPA) into the caudate nucleus of freely behaving rats. Ovariectomized rats were implanted with a push-pull cannula directed at the caudate nucleus and subjected to perfusion under 3 different hormonal conditions: (1) following 4 days of treatment with estradiol benzoate (EB), (2) following 4 days of treatment with estradiol benzoate plus progesterone at 4-6 h prior to perfusion (EB + P-4-6 h) and (3) following 4 days of treatment with estradiol benzoate plus progesterone at 28 h prior to perfusion (EB + P-28 h). During each perfusion session and under each of the 3 hormonal treatment conditions, L-DOPA was infused through the push side of the cannula. Three increasing doses of L-DOPA (10^-6, 10^-5 and 10^-4 M) were infused with a 45-75 min interval between infusions. Regardless of hormonal treatment condition, a clear dose-response increase in DA and 3,4-dihydroxyphenylacetic acid (DOPAC), but not 5-HIAA, output was observed in response to the increasing doses of L-DOPA infusion. For each of the 3 doses of L-DOPA, maximal DA output was observed for animals tested under the EB + P-4-6 h hormonal condition, with statistically significant differences in the areas under the L-DOPA-stimulated DA response curves obtained following the 10^-6 and 10^-5 M doses of L-DOPA infusion. (ABSTRACT TRUNCATED AT 250 WORDS)
The Manitou Springs High School (Colo.) Pirates hosted St. Mary's on Tuesday -- but before Manitou's eventual 5-2 win, the game had to take a brief recess due to some surprise visitors. The two teams were playing in an annual rivalry game when a couple of deer wandered onto the field. The game was delayed for a full 10 minutes while school administrators, and even Manitou Springs shortstop, Joey Allen, attempted to shoo away the animals. The deer eventually exited the field, but it appeared this is something the school is used to. According to KRDO, back in October of 2015, a deer interrupted a Mustangs football game, but the crowd, and the deer itself, seemed to love the attention. You should always embrace wildlife -- they just want to play.
Jessica Kleinschmidt/MLB/CUT4
Changing The Game: Mobile Moves Behavior I recently spent time at the Mobile Insider Summit in Lake Tahoe listening to and speaking with mobile marketers who are themselves surprised at the tectonic shift of user behaviors to mobile. Google, Yahoo, Travelocity, Pandora, OfficeMax, and many other brands were there, and almost everyone reported the sometimes-shocking acceleration of mobile use in their categories. For instance, 48% of Pandora users now access the music streaming service exclusively through mobile devices, either smartphones or tablets. At Hotels.com, 10% of bookings are now being made on mobile devices. According to Travelocity's VP of Global Product Marketing Beth Murphy, mobile access is changing buyer behavior in the category. Fully functional mobile apps and sites now allow users to book later and to research away from the Web itself. Mobile is going to reshape the purchase funnel in many categories. Traditionally in digital, it is always a bad idea to bank on the prospect of changing consumer behavior. After all, the reason behavioral targeting can work so well is that past actions are reliable predictors of future ones. We are creatures of habit, and only when a new medium becomes ritualized in our society is a critical mass of users reliably available to media and marketers. Arguably, the Internet itself, first envisioned by mass media as a new content-consumption platform, was most profoundly embraced by users as a communications platform. Email has always been the killer app here, and few in 1997 would have supposed social networking would be the killer app a decade later. More often than not, new technologies do not change societies; instead, societies impose on the technologies patterns of use that combine perennial needs with some new habits. Ask any spouse; you just can't bank on people changing for you. However, mobile, in both smartphone and tablet formats, appears to be that rare instance where a gadget really is fundamentally altering behaviors. To put some real data points behind the anecdotes above, Chadwick Martin Bailey just released a new report on how smartphones and tablets are changing consumers' entertainment behaviors. For instance, 89% of respondents said that in the last year they have used maps and directions content on other media less because they are consulting their smartphones and tablets instead. For watching movies, 79% say they use other platforms less. At the Tahoe Summit, Vivaki/Starcom MediaVest's Innovation Director Tracey Scheppach said flat-out that the tablet/iPad was a game-changer for media consumption, on a level she had never seen before in her years covering emerging platforms. The CMB study may bear this out. iPad owners report that they are substituting tablet viewing of TV and movie media for other touchpoints. For instance, 45% of those who are substituting the tablet for another device are watching movies on laptops less often, and 34% report going to the movies less often. In this slice, women tend to use the tablet as a substitute for movie theaters and TV more than men do, while men are more likely to be replacing laptops and game machines. It is not a zero-sum game, however. Overall media consumption is going up as a result of the additional touchpoints. But there are some patterns of behavior that media mobility will affect. The portability of mapping and directions, for instance, alters both online and offline research and purchase patterns.
With 80% of all mobile device owners using them for mapping or directions in the last year, 67% say they are not using the Internet as often to look up and print these items. And in a shift that actually could affect a retail segment's bottom line, 60% say they stop less often for directions at places like gas stations because of mobilized directions/maps. Of course, some old stereotypical behaviors will die hard. According to CMB, 66% of men are substituting mobile mapping for stopping to ask directions, while 54% of women are doing the same. What did we tell you? Technology never really trumps nature.

Steve - I agree the shift to mobile in digital marketing has occurred VERY fast, but I am not sure I agree with "Traditionally in digital, it is always a bad idea to bank on the prospect of changing consumer behavior." The researchers at Goldman even suggested that the disruptive impact of mobile devices would create the largest breakdown of capital, and similarly the greatest destination for investment, of any technology to come before, and that it would happen faster than anyone anticipated.

Steve Smith is the Editorial Director, Events, at MediaPost, where he oversees all OMMA and Insider Summit event content. He is also the longtime Mobile Insider/MoBlog columnist for Mobile Marketing Daily. A recovering academic who taught media studies at Brown and the University of Virginia, he spent the last decade as a digital media critic for numerous publications and as a digital strategy consultant. He also writes for Media Industry Newsletter and eContent magazine.
QV.1 QV.1 is a 40-storey modernist skyscraper in Perth, Western Australia. Completed in 1991, the building is the fourth-tallest building in Perth, after Central Park, Brookfield Place and 108 St Georges Terrace. The project was designed by the architectural firm Harry Seidler & Associates and has won numerous awards for its innovative design and energy efficiency.

Site and construction history
The property, which fronts St Georges Terrace, Hay Street and a whole block on Milligan Street, was home to various buildings, including two 11-storey buildings and, at the corner of Hay and Milligan Streets, the first Fast Eddy's burger bar. Planning for the redevelopment began in the second half of the 1980s, with the design done by Harry Seidler & Associates. The site fell partially within the boundaries of the statutory Parliamentary Precinct, which limited skyscraper heights near Parliament House. The Environmental Committee of the Royal Australian Institute of Architects (WA Branch) recommended that the requirements for the precinct be amended to allow the development to take place. The tower was named "Q.V.1" after the Latin phrase Quo vadis (meaning "where are you going?"). With the plans finalised and approved, the site was purchased in 1989 for $30 million by a joint venture between Barrack Properties (50% share), Kajima Corporation (30%) and Interstruct (20%). The purchase also included a site across Hay Street, which would be turned into a 4.5-storey car park for the development. The owners of the Fast Eddy's restaurant had wanted to incorporate it into the new development; however, they accepted a $5.2 million offer from the developers and instead moved the restaurant to the corner of Murray and Milligan Streets, where it remains today. Financing for the tower had been made possible by a put option granted by BT Property Trust and the New South Wales State Superannuation Board, whereby, for an estimated fee of $20 million, they agreed to buy the tower upon completion for $340 million if the option was exercised. However, during construction of the tower, Perth property prices suffered a major collapse as demand for office space slumped. In August 1991, just six weeks out from completion, the building did not have a single tenant and was regarded as one of the great white elephants of the Australian property scene. The owners exercised the option in 1991 upon completion of the project, handing joint ownership to the New South Wales Superannuation Board and BT Property Trust.

Post-completion
After QV.1's opening in 1991, Perth's office vacancy rate hit a high of 73.6% in 1993. However, by 1996 the tower was fully leased, and in June 1998 it remained the only premium-grade office tower in the city to be fully leased. When the tower was completed, some suggested it was too far west in the central business district. However, the securing of WAPET (now Chevron Australia) as a tenant in QV.1 was regarded as a turning point for the precinct and helped to establish the west end of the CBD as a resources-sector precinct. For many years the roof of the building has been used as a base from which to launch fireworks shells in the city's annual Lotterywest Skyworks fireworks display on Australia Day. Also, following the death of the tower's architect Harry Seidler on 9 March 2006, a powerful light was temporarily installed on the roof of QV.1 to shine a beam into the sky as a memorial.
BT Property Trust sold its half-stake in the building in 1998 to corporate stablemate BT Office Trust for $130.6 million. In 2003 that half-stake was acquired by Investa Property Group, which in 2006 valued QV.1 at $400 million. The other 50% is owned by Eureka Funds Management.

Design
According to architect Harry Seidler, one of the architectural objectives in the design of QV.1 was to minimise the impact of the tower when viewed from Parliament House, and this was addressed by offering a narrow profile in that direction. This was necessary to secure government approval for the construction of the tower within the Parliamentary Precinct. To this end, none of the building's facades point directly towards the Parliament building. On the northern side of the site is a two-level retail plaza featuring an artificial waterfall and pond. Another of the design briefs identified by Harry Seidler was that the building should employ passive design in order to minimise energy costs. This is achieved through the use of tinted double-glazed windows, as well as the installation of horizontal and vertical sun shades alongside the windows. The use of sun shades alone was estimated to reduce the building's cooling costs by $70,000 per year. The building also features separate air-conditioning units for each floor, so that energy does not need to be wasted cooling or heating unoccupied floors. This can lead to substantial energy savings because, in Perth's warm climate, cooling can account for 60 to 70% of total building energy consumption. The tower has a reinforced concrete core which bears lateral forces, including wind loading. The perimeter of the building features regularly spaced reinforced concrete support columns, and there are no internal columns within the floors. The perimeter columns and the core support clear-span post-tensioned beams, with concrete slabs spanning between them. These beams are terminated slightly short so that they can be used as mechanical ducts. Some floors feature landscaped balcony gardens on their south faces, and the top floors feature two tiers of luxurious penthouse offices with landscaped terraces. The main entrance to QV.1 from St Georges Terrace features a set of stone-clad hyperboloid supports that carry the loads of the two perimeter columns which terminate above them on the third floor. The lowest two office floors are mezzanines, so that the lobby has an imposingly high ceiling. The St Georges Terrace entrance is also protected from the elements by a flowing suspended glass canopy. The building is clad with polished granite. The modernist building was criticised for being "Perth's most ugly building" and "a giant Lego block", but architect Harry Seidler described QV.1 as "the best building he had ever built".
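As a rough, back-of-the-envelope illustration of why shading and per-floor conditioning matter in a climate where cooling dominates the energy bill (all figures below are assumptions for illustration only, not data about QV.1):

```python
# Hypothetical illustration: when cooling is ~65% of total energy spend,
# even a modest cut in cooling load is a noticeable share of the whole bill.
total_energy_cost = 1_000_000   # assumed annual energy cost, dollars
cooling_share = 0.65            # mid-point of the 60-70% range cited above
cooling_cut = 0.10              # assumed 10% reduction from shading and per-floor zoning

savings = total_energy_cost * cooling_share * cooling_cut
print(f"Estimated annual saving: ${savings:,.0f}")  # -> $65,000
```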
Awards
Awards won by QV.1 include:
1992 Royal Australian Institute of Architects (WA) Architecture Design Award – Commercial buildings over $200 million
1992 Royal Australian Institute of Architects (WA) Commendation – Civic Design Award for Commercial Buildings
1992 Royal Australian Institute of Architects (National) – Best design for a commercial building (over $200 million)
1992 Master Builders Association of Australia – Best workmanship for a building (over $200 million)
1999 Master Builders Association of Australia – National Energy Efficiency Award for Commercial Buildings (joint winner with Stadium Australia)

See also
List of tallest buildings in Perth
List of tallest buildings in Australia

External links
QV.1 Official site
Emporis page on QV.1
SkyscraperPage page on QV.1
State Library of Western Australia – photograph of the building under construction
rc. 10/143 Calculate prob of picking 1 x and 2 w when three letters picked without replacement from {a: 4, w: 5, x: 1, q: 1, i: 1, e: 3}. 2/91 Calculate prob of picking 1 j and 1 e when two letters picked without replacement from {a: 2, j: 1, x: 1, e: 1}. 1/10 Two letters picked without replacement from {u: 1, a: 1, h: 2, t: 4, d: 3}. What is prob of picking 1 t and 1 a? 4/55 Four letters picked without replacement from ssksskskskssksssp. Give prob of picking 3 s and 1 k. 165/476 Four letters picked without replacement from jjioppu. What is prob of picking 1 i and 3 u? 0 Two letters picked without replacement from {w: 1, i: 2, a: 2, j: 1}. What is prob of picking 1 a and 1 i? 4/15 Two letters picked without replacement from iffxiiflxxfxxxfff. What is prob of picking 2 f? 21/136 What is prob of picking 1 p and 1 t when two letters picked without replacement from {r: 4, t: 2, c: 7, p: 3, x: 1}? 3/68 Three letters picked without replacement from {y: 1, b: 3, a: 1}. What is prob of picking 1 y and 2 b? 3/10 Four letters picked without replacement from gngnnggggngggn. What is prob of picking 1 n and 3 g? 60/143 What is prob of picking 3 s when three letters picked without replacement from {f: 2, u: 1, t: 3, s: 10}? 3/14 Three letters picked without replacement from {z: 2, y: 4}. What is prob of picking 2 y and 1 z? 3/5 Calculate prob of picking 3 i when three letters picked without replacement from igvvvivvvii. 4/165 Two letters picked without replacement from zbaxxzazabzzxzzaz. What is prob of picking 1 b and 1 z? 2/17 Two letters picked without replacement from fjmjl. What is prob of picking 1 m and 1 l? 1/10 Calculate prob of picking 1 s and 1 g when two letters picked without replacement from uuuuugwsuuwwjusd. 1/60 Four letters picked without replacement from {n: 9, w: 10}. Give prob of picking 1 w and 3 n. 70/323 What is prob of picking 1 e and 1 a when two letters picked without replacement from {e: 1, o: 3, a: 1, z: 1, r: 1}? 1/21 Three letters picked without replacement from {n: 4, d: 6, j: 1, a: 2}. Give prob of picking 2 n and 1 d. 18/143 Two letters picked without replacement from {i: 1, l: 1, q: 2, h: 2}. Give prob of picking 1 i and 1 l. 1/15 What is prob of picking 1 b and 1 v when two letters picked without replacement from asvbbwwbscw? 3/55 Two letters picked without replacement from asupe. Give prob of picking 1 a and 1 u. 1/10 Calculate prob of picking 1 k and 1 a when two letters picked without replacement from kkjjjjkkja. 4/45 Two letters picked without replacement from vovoovovoovv. Give prob of picking 2 o. 5/22 Calculate prob of picking 3 p and 1 o when four letters picked without replacement from {p: 5, o: 6, r: 2}. 12/143 Four letters picked without replacement from {k: 6, c: 12}. Give prob of picking 4 c. 11/68 Two letters picked without replacement from rbzbrbpztrr. Give prob of picking 1 p and 1 b. 3/55 What is prob of picking 1 h and 1 t when two letters picked without replacement from {t: 1, s: 2, h: 4, i: 1, r: 2}? 4/45 Three letters picked without replacement from {n: 7, o: 2, r: 1, e: 1, f: 1, y: 8}. Give prob of picking 1 n, 1 r, and 1 f. 7/1140 What is prob of picking 1 f and 1 c when two letters picked without replacement from icddfc? 2/15 Calculate prob of picking 2 a, 1 v, and 1 n when four letters picked without replacement from vvpvvnpappvppnvppppa. 4/1615 Calculate prob of picking 2 r when two letters picked without replacement from {r: 3, j: 9}. 1/22 Four letters picked without replacement from vcvscvvccmv. Give prob of picking 1 m and 3 c. 
2/165 What is prob of picking 3 i when three letters picked without replacement from qiqqxqqixiqqqiiiqqq? 20/969 Two letters picked without replacement from qggflfvs. What is prob of picking 1 q and 1 s? 1/28 Calculate prob of picking 1 t and 1 a when two letters picked without replacement from ssataastaatasa. 3/13 Calculate prob of picking 1 u, 1 c, and 1 q when three letters picked without replacement from qrcnryuq. 1/28 What is prob of picking 1 z, 1 i, and 2 c when four letters picked without replacement from {i: 2, f: 1, b: 3, z: 3, c: 2, y: 1}? 2/165 Two letters picked without replacement from {x: 1, n: 2, o: 1, i: 1}. What is prob of picking 1 x and 1 n? 1/5 Four letters picked without replacement from {s: 7, w: 6}. What is prob of picking 4 s? 7/143 What is prob of picking 1 z and 1 h when two letters picked without replacement from {c: 2, e: 1, h: 1, j: 2, z: 6}? 1/11 Two letters picked without replacement from okuououok. Give prob of picking 2 k. 1/36 What is prob of picking 1 n and 1 m when two letters picked without replacement from {n: 7, a: 9, m: 3}? 7/57 What is prob of picking 1 x and 2 u when three letters picked without replacement from {m: 2, x: 2, u: 2}? 1/10 Two letters picked without replacement from kdr. Give prob of picking 1 d and 1 k. 1/3 Three letters picked without replacement from stvvtvtvtvststvvtv. Give prob of picking 2 s and 1 v. 1/34 Two letters picked without replacement from nnnnnnnnnnnnnnnqnn. What is prob of picking 1 n and 1 q? 1/9 Three letters picked without replacement from {z: 1, d: 2, i: 9, q: 2}. Give prob of picking 1 z and 2 d. 1/364 Four letters picked without replacement from addll. Give prob of picking 2 d and 2 l. 1/5 Four letters picked without replacement from {h: 12, i: 7}. Give prob of picking 4 i. 35/3876 What is prob of picking 1 m and 1 d when two letters picked without replacement from {r: 1, m: 6, e: 7, o: 3, d: 2}? 4/57 What is prob of picking 1 l, 1 b, and 2 n when four letters picked without replacement from {b: 1, q: 1, d: 2, l: 1, i: 1, n: 2}? 1/70 Two letters picked without replacement from vbdvvbddvtdfdvddd. What is prob of picking 1 b and 1 d? 2/17 Two letters picked without replacement from qkppkukqytkt. What is prob of picking 1 q and 1 k? 4/33 Calculate prob of picking 1 f, 1 j, 1 k, and 1 e when four letters picked without replacement from {e: 2, j: 4, f: 2, k: 3, b: 2, i: 1}. 48/1001 Two letters picked without replacement from {h: 2, u: 1, b: 1, o: 2, j: 1}. What is prob of picking 2 h? 1/21 What is prob of picking 1 y and 1 h when two letters picked without replacement from yehqqhhhyqyeqddhh? 9/68 Calculate prob of picking 2 u when two letters picked without replacement from {u: 3, c: 1, y: 5}. 1/12 Three letters picked without replacement from {e: 12, x: 7}. Give prob of picking 3 e. 220/969 Four letters picked without replacement from {f: 2, p: 2, r: 4, m: 2}. Give prob of picking 3 r and 1 f. 4/105 What is prob of picking 2 j when two letters picked without replacement from jjjrrr? 1/5 Four letters picked without replacement from {r: 1, p: 2, g: 1, e: 2, x: 1}. Give prob of picking 1 x, 1 g, and 2 e. 1/35 Three letters picked without replacement from gyyvgyvyy. What is prob of picking 1 v and 2 y? 5/21 What is prob of picking 3 c and 1 e when four letters picked without replacement from ecccecccccccccec? 33/70 Calculate prob of picking 1 a and 1 x when two letters picked without replacement from xaxxaanxonaxxnaox. 35/136 Two letters picked without replacement from {a: 4, t: 1, v: 2}. What is prob of picking 2 v? 
1/21 Two letters picked without replacement from vnnyynmssmvkvv. What is prob of picking 2 m? 1/91 Four letters picked without replacement from {h: 15, i: 5}. Give prob of picking 2 i and 2 h. 70/323 Two letters picked without replacement from agggggaggggggaaaggg. Give prob of picking 1 g and 1 a. 70/171 Three letters picked without replacement from xtxytxxktltyyxy. What is prob of picking 1 k, 1 x, and 1 y? 4/91 Two letters picked without replacement from nennnneefxnnfnunnfh. What is prob of picking 1 h and 1 f? 1/57 Three letters picked without replacement from {s: 1, o: 3, f: 3, g: 1, y: 3, t: 4}. What is prob of picking 1 f, 1 o, and 1 s? 9/455 Two letters picked without replacement from llaaaallljlll. Give prob of picking 1 a and 1 j. 2/39 What is prob of picking 1 f, 1 m, and 2 v when four letters picked without replacement from {m: 1, v: 2, y: 1, f: 13}? 13/2380 What is prob of picking 2 o when two letters picked without replacement from {i: 5, o: 5}? 2/9 Three letters picked without replacement from gglgllllgllllg. What is prob of picking 3 g? 5/182 Two letters picked withou
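Each of the problems above is an instance of the (multivariate) hypergeometric distribution: the probability is a product of binomial coefficients counting the ways to choose the requested letters, divided by the number of ways to pick that many letters from the whole bag. A minimal sketch of that calculation (the helper name and example call are mine, but the example reproduces one of the answers above):

```python
from fractions import Fraction
from math import comb

def pick_prob(bag: dict, want: dict) -> Fraction:
    """Probability of drawing exactly the counts in `want` when
    sum(want.values()) letters are picked without replacement from `bag`."""
    total_letters = sum(bag.values())
    picked = sum(want.values())
    ways = 1
    for letter, count in want.items():
        ways *= comb(bag.get(letter, 0), count)
    return Fraction(ways, comb(total_letters, picked))

# e.g. prob of picking 1 t and 1 a from {u: 1, a: 1, h: 2, t: 4, d: 3}
print(pick_prob({"u": 1, "a": 1, "h": 2, "t": 4, "d": 3}, {"t": 1, "a": 1}))  # 4/55
```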
Shingeki no Kyojin (Attack on Titan) Several hundred years ago, humans were nearly exterminated by titans. Titans are typically several stories tall, seem to have no intelligence, devour human beings and, worst of all, seem to do it for pleasure rather than as a food source. A small percentage of humanity survived by walling themselves in a city protected by extremely high walls, even taller than the biggest of titans. Flash forward to the present and the city has not seen a titan in over 100 years. Teenage boy Eren and his foster sister Mikasa witness something horrific as the city walls are destroyed by a super titan that appears out of thin air. As the smaller titans flood the city, the two kids watch in horror as their mother is eaten alive. Eren vows that he will murder every single titan and take revenge for all of mankind. Note: The last episode received a pre-airing in Tokyo at the Marunouchi Piccadilly 1 theater; the TV broadcast took place after midnight, a few hours later.
Shoebox – my virtual hand-drawn, hand-coded live band - michael_forrest https://medium.com/@michael.forrest.music/shoebox-my-virtual-hand-drawn-hand-coded-live-band-454368d0e66f
====== busymichael
This project is so impressive, and I'm so disappointed with HN's reaction. At this point there are only 7 comments and 42 points for the thread. Yet this is one of the most unique and creative pieces of coding that I've ever seen on here. I wish the community would take a second look at this. It needs more credit.
------ aantix
I love it. Fantastic work. I'd love to play around with it if you ever decide to open source it.
------ lmm
Are you aware of MikuMikuDance? There's a big community doing 3D-rendered music videos from drawn characters there, though with somewhat different emphasis.
~~~ michael_forrest
I wasn't - I'll have a look
------ 20after4
This is amazing, do you plan to release the engine for this? I'd love to play around with the code.
------ seertaak
Great job -- nice song, cool video, and thanks for the write-up!
------ mitchellshow
Very cool work. How would someone use this with their own song?
~~~ michael_forrest
Thanks. I guess there's scope for a web service where people could make their own videos by uploading their music and artwork, although that would be a pretty huge job. If there are any programmers who would want to do something with this, I might be open to putting aspects of it up on Github.
~~~ 20after4
I'm definitely interested in this.
------ SingletonIface
Nice work, both with the music and the video.