Race, Ethnicity and Politics in Three Peruvian Localities: An Analysis of the 2005 CRISE Perceptions Survey in Peru The purpose of this article is to present and discuss some of the findings of the Peruvian Perceptions Survey, which was conducted in 2005 for the research project Horizontal Inequalities and Conflict of the Centre for Research on Inequality, Human Security and Ethnicity (CRISE), Queen Elizabeth House, Oxford University. It analyzes relationships between ethnic identity, social exclusion, and political action in three different locations in Peru. The first section discusses the issue of measuring ethnic and racial categories in Peru from a quantitative perspective. The paper then presents some of the survey's results regarding respondents' ethnic and racial self-identification. Finally, it analyzes and outlines some conclusions about the relationships found in the survey between variables of ethnic or racial self-identification, perceptions of political and social exclusion, political activism, attitudes toward violence, and perceptions of the efficacy of political action.
Amazon will host Prime Day, a shopping event touted as having 'more deals than Black Friday,' on July 15 for members of its premium subscription service. Prime has been a huge success for the e-retailer, and subscribers buy more often and spend more on the Amazon website. Amazon is offering a global shopping event with “more deals than Black Friday” to commemorate its 20th birthday and entice consumers to sign up for its premium subscription service. Doing so is key for the e-retailer: Prime has been a huge success for Amazon, and it could be a key factor in securing consumers' loyalty as the e-commerce space becomes increasingly crowded with competitors. On July 15, Amazon will host Prime Day, an international daylong sale, the e-commerce company announced Sunday. The sale is exclusively available to Prime members in the US, United Kingdom, Spain, Japan, Italy, Germany, France, and Austria. Shoppers in the US will be able to find deals starting at midnight, with new offers appearing as often as every 10 minutes. Now in its 10th year, Prime, which costs $99 per year, has a range of perks, including free two-day shipping and unlimited streaming of TV shows, movies, and songs. Although Amazon has yet to disclose how many people participate in the service, CEO Jeff Bezos said in the company’s 2014 fourth-quarter financial statement that Amazon has “tens of millions” of subscribers. Worldwide paid membership grew 53 percent in 2013, with US membership growing 50 percent that year. In January 2015, Re/Code reported that a partner at Amazon said the company had around 60 million Prime members worldwide. In May, Consumer Intelligence Research Partners (CIRP) estimated that Amazon Prime had 41 million US members. But for Amazon, Prime doesn't just offer up millions of customers paying $99 per year for free shipping; it also offers up customers who spend more than their non-Prime counterparts. CIRP found that Prime members in the US spend on average about $1,100 per year, while non-members spend about $700 in that time frame, adding up to an extra $16.4 billion spent by Prime members per year. The extra spending could be attributed to the company’s "conversion rate" – how often a retailer gets a website visitor to become a paying customer. In June, website traffic measurement firm Millward Brown Digital found that when Amazon Prime members go to the site, they "convert" (buy something) 74 percent of the time, Internet Retailer reported. In comparison, non-Prime members convert 13 percent of the time. And it turns out that Prime members are very loyal: when they go to other online retailers, such as Walmart or Target, they convert just 6 percent of the time. In late April, Amazon reported its first-quarter sales were $22.72 billion, 15 percent higher than in 2014’s first quarter. The company estimates its second-quarter sales will be between $20.6 billion and $22.8 billion, which means sales would grow between 7 percent and 18 percent compared with last year’s second quarter. Amazon did not disclose how much of its sales comes from Prime members. But if CIRP’s estimates are within the ballpark of Amazon’s actual numbers, then it is a no-brainer for the online retailer to continue to expand Prime.
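As a rough cross-check of the aggregate figure, the back-of-the-envelope arithmetic below uses only the CIRP estimates quoted above (41 million US members, roughly $1,100 versus $700 in annual spending). The inputs are survey estimates cited in this article, not Amazon disclosures, so the result is indicative only.

    # Back-of-the-envelope estimate of extra US spending by Prime members,
    # based on the CIRP survey figures cited above (not Amazon's own numbers).
    us_prime_members = 41_000_000      # CIRP estimate, May 2015
    avg_spend_prime = 1_100            # approximate annual spend per Prime member (USD)
    avg_spend_non_prime = 700          # approximate annual spend per non-member (USD)

    extra_per_member = avg_spend_prime - avg_spend_non_prime
    total_extra_usd = us_prime_members * extra_per_member
    print(f"Extra spend per member: ${extra_per_member:,} per year")
    print(f"Aggregate extra spend: ${total_extra_usd / 1e9:.1f} billion per year")

Under these assumptions the aggregate comes out to roughly $16 billion a year; the exact figure moves with whichever membership estimate is plugged in.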
Besides the Prime Day sale, Amazon has extended its Prime Now service, where members in select US cities can order “essential” items on their mobile devices and have their orders delivered within two hours for $7.99. Last week, Amazon started to offer the service in certain parts of London, according to CNET.
Blind trust? The importance and interplay of parasocial relationships and advertising disclosures in explaining influencers' persuasive effects on their followers Abstract With the rise of social media, social media influencers (SMIs) have steadily gained importance as brand endorsers by establishing strong parasocial relationships (PSRs) with their followers. These PSRs, however, have been neglected in previous experimental studies on SMIs because those studies used non-followers as participants. This paper therefore analyzes the impact of SMIs on their respective followers and reports the results of two experiments. The first experiment (N=144) confirmed that follower status (non-followers/followers) affected the strength of PSRs, source credibility, and the evaluation of a sponsored Instagram post. The second experiment (N=157) additionally included the presence or absence of an advertising disclosure. Although there were no group differences in conceptual persuasion knowledge, the results indicated that followers, who had established a strong PSR with the SMI, reported lower evaluative persuasion knowledge. Furthermore, followers reported enhanced purchase intentions and brand evaluations, especially when the posts contained advertising disclosures.
Relationship between salivary IgA secretion and upper respiratory tract infection following a 160-km race. AIM The relationship between salivary IgA secretion rate and upper respiratory tract infection (URTI) was studied in 155 ultramarathoners (126 males, 29 females, mean age 46.5+/-0.7 y) who had qualified to run the 160-km 2003 Western States Endurance Run. METHODS Subjects provided saliva samples during registration, held the morning before the race, and within 5-10 minutes postrace (mean race time, 26.2+/-0.3 h). Unstimulated saliva was collected by expectoration for 4 minutes into 15-mL plastic, sterilized vials. Runners finishing the race and providing pre- and postrace saliva samples (n=106) turned in a health log specifying URTI episodes and severity of symptoms for the 2-week period following the race. RESULTS The total volume of saliva that the runners were able to expectorate during sample collection decreased 51% postrace compared to prerace values (P<0.001). Saliva protein concentration increased 20% (P<0.001) while the salivary IgA concentration decreased 10% (P<0.05). Salivary IgA secretion rate decreased 46% when comparing pre- to postrace values (P<0.001). Twenty-four percent of the runners finishing the race and providing salivary samples reported an URTI episode lasting 2 days or longer during the 2-week period following the race (mean number of days with symptoms, 5.4+/-0.6). The decrease in salivary IgA secretion rate (pre- to postrace) was 53% greater in the 25 runners reporting URTI (-355+/-45 microg/min) compared to the 81 runners not reporting URTI (-232+/-37 microg/min) (P=0.04). CONCLUSIONS In summary, nearly 1 in 4 runners reported an URTI episode during the 2-week period following a 160-km race, and the decrease in salivary IgA secretion rate was significantly greater in these runners compared to those not reporting URTI.
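The secretion-rate values reported above are derived quantities: the salivary IgA secretion rate is the IgA concentration multiplied by the saliva flow rate, where the flow rate is the collected volume divided by the 4-minute collection period. A minimal sketch of that calculation follows, using illustrative placeholder numbers rather than the study's raw data.

    # Illustrative calculation of a salivary IgA secretion rate.
    # The input values below are placeholders, not data from the study.
    collection_time_min = 4.0              # unstimulated saliva collected for 4 minutes
    saliva_volume_ml = 2.0                 # hypothetical collected volume (mL)
    iga_concentration_ug_per_ml = 60.0     # hypothetical IgA concentration (microg/mL)

    flow_rate_ml_per_min = saliva_volume_ml / collection_time_min
    secretion_rate_ug_per_min = iga_concentration_ug_per_ml * flow_rate_ml_per_min
    print(f"Flow rate: {flow_rate_ml_per_min:.2f} mL/min")
    print(f"IgA secretion rate: {secretion_rate_ug_per_min:.1f} microg/min")

Because the secretion rate depends on both terms, a postrace drop in collected volume and in IgA concentration compounds into a larger percentage fall in secretion rate than in concentration alone, as the results above show.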
In many situations, it is desirable to produce a rating stream concerning a media presentation. One example of such a presentation is a presidential debate. It is very useful for news organizations and other groups to have an understanding of how the public feels about the different events that occur during a debate. One way of obtaining such information is to show a presidential debate to a roomful of people. During the debate, the people in the room can be provided with a knob that they adjust to indicate how they feel about the different events as they occur. The collected information thus gives some indication of how the public feels about the debate. The public's approval or disapproval at different points in the debate can be newsworthy. A downside of this approach is that such systems tend not to provide an accurate representation of the opinion of the public at large, since the people in the room tend not to be a representative sample of the public. The members of the panel tend to be from a single area. Additionally, the sample size tends to be far too small to provide an accurate understanding of the public's approval or disapproval of the different events that occur during the presentation. It is therefore desirable to have an improved method of obtaining rating information concerning a media presentation that more accurately reflects the opinions of the public.
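As a concrete illustration of the "rating stream" idea described above, the sketch below aggregates time-stamped dial readings from a panel of viewers into an average approval value for each moment of the presentation. The data layout and field names are assumptions made for illustration; they are not part of any disclosed system.

    from collections import defaultdict

    # Hypothetical dial readings: (viewer_id, seconds_into_presentation, rating 0-100).
    readings = [
        ("viewer1", 0, 55), ("viewer2", 0, 60),
        ("viewer1", 5, 70), ("viewer2", 5, 40), ("viewer3", 5, 65),
    ]

    def rating_stream(samples):
        """Average the panel's ratings at each time offset into a rating stream."""
        buckets = defaultdict(list)
        for _viewer, t, rating in samples:
            buckets[t].append(rating)
        return {t: sum(r) / len(r) for t, r in sorted(buckets.items())}

    print(rating_stream(readings))  # {0: 57.5, 5: 58.33...}

The sampling problems noted above (a small, geographically narrow panel) are properties of who supplies the readings, not of the aggregation itself, which is why a larger and more representative set of inputs is the improvement being sought.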
Salivary electrolyte concentrations and electrical potential difference across the parotid salivary duct of anaesthetized sodium-replete sheep. Salivary flow rate and the concentrations of electrolytes in parotid saliva and arterial plasma from anaesthetized sodium-replete sheep were measured before, during and after ipsilateral intracarotid infusion of acetylcholine at 10 nmol min-1 to ascertain whether anaesthesia altered the relation between salivary flow and sodium concentration. The potential difference (PD) between the lumen of the parotid duct and the vascular system was also measured. Concentrations of salivary sodium and phosphate were decreased and potassium concentration and total CO2 content were increased when the rate of salivary flow was increased by acetylcholine infusion. Salivary chloride concentration was reduced in five experiments and increased in three experiments when flow rate was elevated. Thus the flow-composition relations of parotid saliva from anaesthetized sheep were essentially the same as those for saliva from conscious animals. The PD between the lumen of the parotid duct and blood at resting flow rates was 9.4 +/- 1.07 mV, lumen negative. At high flow rates, stimulated by acetylcholine infusion, the PD increased to 21.9 +/- 2.20 mV, lumen negative. This increase in PD of the duct epithelium appeared to depend on changes in the composition of saliva arriving at the site of potential measurement.
Chloroplast and mitochondrial DNA are paternally inherited in Sequoia sempervirens (D. Don) Endl. Restriction fragment length polymorphisms in controlled crosses were used to infer the mode of inheritance of chloroplast DNA and mitochondrial DNA in coast redwood (Sequoia sempervirens (D. Don) Endl.). Chloroplast DNA was paternally inherited, as is true for all other conifers studied thus far. Surprisingly, a restriction fragment length polymorphism detected by a mitochondrial probe was paternally inherited as well. This polymorphism could not be detected in hybridizations with chloroplast probes covering the entire chloroplast genome, thus providing evidence that the mitochondrial probe had not hybridized to chloroplast DNA on the blot. We conclude that mitochondrial DNA is paternally inherited in coast redwood. To our knowledge, paternal inheritance of mitochondrial DNA in sexual crosses of a multicellular eukaryotic organism has not been previously reported.
Ocular Trauma: 2 Years Retrospective Study in Sari, Iran Introduction: Ocular trauma is one of the most important causes of blindness and disability in developing countries, despite the fact that it is preventable in the majority of cases. Considering the importance of the topic and the fact that most such injuries are preventable, a better understanding of their etiology and epidemiology is of vital importance in planning for the reduction of their prevalence. The current study aimed to identify the prevalence and epidemiology of ocular trauma at the Boo-Ali-Sina University Hospital in the city of Sari (Northern Iran) between 2009 and 2010. Method: This is a retrospective study of the case notes of 178 patients who were admitted through the ophthalmology service to the above center. A proforma was designed, and the International Classification of Diseases (ICD-10) was used for classification. Results: During the study period, 178 patients with eye trauma were admitted to the hospital; of these, 135 (75.8%) were male and 43 (24.2%) were female. Thirty-five patients (19.7%) were aged 25-34 years, and 98 (55.1%) lived in villages. The injuries were most common during winter (55 cases; 30.9%). The most common diagnosis was open wound of the eyelid (ICD-10 code S01.1; 40 cases; 22.5%), and in the majority of cases the injury was unilateral, affecting the left eye (96 cases; 53.9%). The mode of trauma was contact with a blunt object (ICD-10 code Y29) in 22 cases (12.4%). Conclusions: Considering the prevalence of ocular trauma, appropriate education and the use of safety equipment are important measures to prevent these injuries. Our data show that the prevalence of injuries among young workers is high, which makes this age group a necessary target for such education. The education should also include the vulnerable populations at both extremes of age. INTRODUCTION Ocular trauma is one of the most important causes of blindness and disability in developing countries, despite the fact that it is preventable in the majority of cases. The injuries have major financial implications for patients and society and are considered an important cause of blindness worldwide. Corneal injury in particular can cause a permanent disability that cannot be corrected by glasses. Moreover, lens trauma can lead to cataract, which even with appropriate management can lead to disability, and injuries to the retina or optic nerve are irreversible. Trauma to the eye is extremely common, especially in developing countries such as Pakistan. About 5% of all ophthalmic admissions in the developed world result from ocular trauma, while in the developing world this figure is much higher. According to research undertaken in northeast Colombia, Kuwait, Turkey, Italy and Hong Kong, most pediatric ocular injuries occur in boys, and non-penetrating injuries outnumber penetrating injuries. In a study in rural areas of Tanzania, most ocular trauma was related to wood chips and was mainly observed in workers and farmers during their day-to-day work. Similar results have been demonstrated in studies undertaken in Australia and India. In Iran, few studies on the topic have been undertaken, and therefore insufficient information is available about the prevalence of these injuries.
Considering the importance of the topic and the fact that most such injuries are preventable, a better understanding of their etiology and epidemiology is of vital importance in planning for the reduction of their prevalence. The current study aimed to identify the prevalence and epidemiology of ocular trauma at the Boo-Ali-Sina University Hospital in the city of Sari between 2009 and 2010. MATERIAL AND METHODS This is a retrospective study of the case notes of 178 patients who were admitted through the ophthalmology emergency services of the above center between 2009 and 2010. A proforma was designed which contained the following parameters: ■ Demographic information; ■ Diagnosis and etiology of the injury. For the purpose of classification, the International Classification of Diseases (ICD-10) was used. Statistical analysis was undertaken using SPSS version 19. RESULTS The numbers of ophthalmology ward admissions at the Boo-Ali-Sina University Hospital in 2009 and 2010 were 1787 and 1563 patients, respectively. Of these, 178 patients were admitted due to ocular trauma, and all of them were included in the study. Demographic information: Out of 178 patients, 135 were men and 43 were women. This information is summarized in Table 1. Diagnosis and etiology of the injury: In 53.9% of cases (96 patients) the ocular trauma was left sided, in 38.8% (69 patients) it was right sided, and in 7.3% (10 patients) it was bilateral. Furthermore, blunt-object trauma (code Y29) was reported in 35.4% of patients, car accidents (code V49) in 31 cases (17.4%), and other sources of trauma in 20.2% of patients; the latter mainly comprised cow kicks, metal pieces, explosions, nail piercing, sickle contact, flower thorns, ball contact, iron filings, fists and intraocular foreign bodies (IOFB). Penetrating trauma was reported in 23 patients (12.9%) and blunt trauma in 63 cases (35.4%) (Table 2). The time from injury to referral to the center ranged from 20 minutes to 90 days, with a mode of 1 hour. The discharge outcome was documented as partially treated in 118 cases (66.3%). DISCUSSION The current study focused on identifying the epidemiology and etiology of significant eye injuries requiring admission at the Boo-Ali-Sina University Hospital in Sari, Iran between 2009 and 2010. The study shows that only one out of every nineteen patients referred to the above center required admission to the hospital. The highest prevalence of admissions was in the 25-34-year age group (19.7%). This is in concordance with data from the Mashhad University of Medical Sciences (Iran) ophthalmology center, which reported the maximum prevalence of referrals between the ages of 25 and 30 years. It is known that young men are more likely to sustain ocular trauma as a result of occupational hazards. Our study shows that the prevalence of injuries in patients younger than 10 years and in the elderly was 12.4% and 4%, respectively. As can be seen, children under the age of 10 years also have a significant rate of ocular injuries. The data suggest that other authors' advice on education and preventive measures in working environments and schools may reduce the prevalence of ocular injuries. The data also reveal a male to female ratio of 3 to 1, which is compatible with other similar studies.
Most probably the difference is due to occupational hazards and the more frequent involvement of males in violent activities. In terms of occupation, our data show that occupational injuries among factory and construction workers occur in the 25-34-year age group (a previous study reported that ocular injuries occurred in 28.8% of factory workers due to a lack of sufficient precautionary measures, mostly in the 20-40-year age group). The authors of that study divided ocular injuries into 5 categories: (1) foreign-body related; (2) blunt trauma; (3) chemical and electrical burns/injuries; (4) occupational trauma; and (5) any other trauma not mentioned in the previous groups. They divided the etiology into lack of equipment safety, faulty equipment, lack of precautionary measures, working in dark environments, lack of occupational education and miscellaneous factors (such as age). They proposed that education of workers, protective equipment, regular servicing of equipment, abstaining from overtime working and improvement of lighting in factories can reduce the prevalence of ocular injuries. In most studies, ocular trauma is reported to occur most often at work. According to the study of Poon et al. in Tanzania, there is a high rate of eye injuries in workers and farmers, who sustain these injuries while working. Studies in Singapore, Australia and India report similar findings. Considering the prevalence of ocular trauma, appropriate education and the use of safety equipment are important measures to prevent these injuries. Our data show that the prevalence of injuries among young workers is high, which makes this age group a necessary target for such education. The education should also include the vulnerable populations at both extremes of age. The study was performed retrospectively, and many of the desired pieces of information were not documented in the medical notes, which limits the range of conclusions that can be drawn from the data. This study was a state project in Iran and was supported by a grant from Mazandaran University of Medical Sciences.
Plasma plumes produced by laser ablation of Al with single and double pulse schemes We generated and characterized plasma with single and double picosecond laser pulses in order to study the plume dynamics and to control the plasma properties. The double-pulse scheme was found to be superior for the generation of a homogeneous plasma. The lateral expansion was prominent for irradiation schemes wherein the energy of the first pulse is lower than or equal to that of the second pulse. While the velocities of the fast and slow species were found to be nearly equal, the emission counts corresponding to slow species are larger for the single pulse compared to the double pulse. Introduction Laser produced plasmas (LPP) are of great importance in a variety of applications including EUV generation 1, high-order harmonic generation, attosecond pulse generation, wake-field acceleration 9, nanoparticle and nanocluster generation 10,11, etc. LPPs are highly transient, with plume parameters varying rapidly in time and space, and characterization of the plume is therefore necessary for the above mentioned applications. The nature of expansion of an LPP is essentially dependent on the number density and temperature of the plume, which depend on laser parameters such as wavelength, pulse duration, energy and spot size, and on the nature and pressure of the ambient gas along with material properties 12. Attaining a suitable number density at a specific high temperature is crucial for many applications, and it is challenging to achieve 13. Various irradiation schemes including single-pulse (SP) and double-pulse (DP) methods 14 were also employed previously to investigate parameters of plasmas. It has been reported that, while the temperature increases very slightly (usually < 10%), the line emission intensity from various species increases 15,16 for the optimum delay, ∼500-1000 picoseconds (ps), between pulses 17 in the DP scheme. However, it is theoretically predicted that the temperature of the plasma could be increased up to 3 times if the preformed plasma is irradiated with a delayed pulse at ≤ 200 ps, but this has not been experimentally demonstrated to date 18. There are studies on DP using laser pulses of different wavelengths where a dramatic increase in the emission intensity and plasma temperature has been observed when an infrared (IR) laser is used to irradiate the pre-formed plasma 19, 20. Stratis et al. (2001) 21 used the orthogonal geometry (i.e. the plasma plume formed by the first pulse interacts with the second beam passing perpendicular to the plume expansion direction) in the pre-ablation mode and found an increase in overall size and a change in the shape of the plume. Characterization of such plumes is generally carried out using techniques such as optical emission spectroscopy (OES) and time-of-flight (TOF) measurements to infer the spatio-temporal nature of the plume. Standard experimental techniques capable of uncovering the early expansion of the plasma plume are time-resolved shadowgraphy and schlieren photography 28. However, these techniques cannot be used to reconstruct the hydrodynamic expansion of the plume and its radiative characteristics. Therefore, time-resolved plume imaging using an intensified charge coupled device (ICCD) is a suitable choice to study the hydrodynamic expansion, along with time-resolved OES for spectral information. This work compares the plume dynamics of Al plasmas generated by single and double pulse schemes.
Experimental The plasma is generated using a 60 ps laser pulse at 800 nm from a mode-locked Ti:Sapphire laser (Odin II, Quantronix) focused to a spot size of ∼85 µm using a 500 mm plano-convex lens onto the surface of a 99.99% pure 50 mm × 50 mm × 3 mm Al target (ACI Alloys Inc, USA), which is kept in nitrogen at a pressure of ∼10⁻⁶ Torr. The laser operates at a repetition rate of 1 kHz, and the experiment is performed for a predefined number of irradiations on the target by positioning a fast, synchronized mechanical shutter in the beam path. The target is translated by ∼200 µm after each measurement to avoid repeated irradiation of the pit formed on the surface by ablation from previous irradiations. Emissions from ions and neutrals in the range from 300 nm to 400 nm are recorded using a spectrometer (SP 2550, Princeton Instruments) equipped with a 13 µm × 13 µm pixel, 1024 × 1024 Gen II ICCD (PI-MAX 1024f, Princeton Instruments), with a spectral resolution of ∼0.02 nm. The layout of the experimental set up is shown in Fig. 1. Spectral lines in the emission spectra are compared with the standard NIST database 29, and emissions from neutral as well as ionized species are identified. Imaging of the plume is carried out by repeating the experiment for different gate delays/time delays (t_d) and for a given gate width/integration time (t_w) using the ICCD. The experiment is also repeated for two different geometries to explore the features of the expanding plasma, and we compare the same for the SP and DP schemes. The DP scheme is implemented using a combination of a polarizing cube beam splitter and a half-wave plate to split the energy used in the SP scheme into two pulses propagating along different paths. The second pulse travels a longer path, leading to an inter-pulse delay of ∼800 ps, and follows the path of the first pulse after the pulses are combined in a collinear geometry (back to back irradiation at the same position). Measurements are also repeated for various irradiation energies in order to find the variation in the nature of expansion and changes in the abundance of species. Results Time-resolved OES for various energies and for different positions along the expansion direction of the plume is carried out to understand the abundance of species in the plasma. Emission lines at 394 nm and 396 nm (Al I), 358 nm (Al II) and 447.9 nm, 451.2 nm and 452.8 nm (Al III) have been used in the present case for analysis. Emission from different species is found to be dependent on the irradiation energy. While only Al I emission is recorded for 100 µJ, emission from both Al I and Al II is visible up to 500 µJ. Al III emissions are visible along with Al I and Al II for all irradiation energies above 500 µJ, which is consistent with previous reports on the dependence of ion yield on laser fluence 30. It is also found that the emission from Al I saturates once emission from Al III appears in the OES. From OES measurements along the expansion direction for ∼600 µJ, the emission maximizes at ∼1 mm above the target surface for all species. It is found that the emission from Al III has a larger spatial extent compared to Al I and Al II, which can be attributed to the generation of faster species via non-thermal processes (see Fig. 2). Measurements of the plume dynamics have been carried out for various t_d up to 500 ns, starting from 40 ns, with 10% of t_d as t_w. This has been repeated for the energies shown in Fig. 3: a) 100 µJ, b) 200 µJ, c) 300 µJ, d) 400 µJ, e) 500 µJ and f) 600 µJ for SP.
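The line identification step described above amounts to matching measured peak wavelengths against a reference line list within the instrument's resolution. The sketch below shows the idea with a short, illustrative line list (values echo the wavelengths quoted in the text) and an assumed matching tolerance; it is not the NIST database itself.

    # Match measured emission peak positions (nm) to a small reference line list.
    # The reference entries and tolerance below are illustrative assumptions.
    reference_lines = {
        394.4: "Al I", 396.1: "Al I", 358.7: "Al II",
        447.9: "Al III", 451.2: "Al III", 452.8: "Al III",
    }

    def identify(peaks_nm, tolerance_nm=0.05):
        matches = []
        for peak in peaks_nm:
            for ref, species in reference_lines.items():
                if abs(peak - ref) <= tolerance_nm:
                    matches.append((peak, ref, species))
        return matches

    measured_peaks = [394.42, 396.12, 447.93]   # hypothetical peak positions
    for peak, ref, species in identify(measured_peaks):
        print(f"{peak:.2f} nm -> {species} (reference {ref:.2f} nm)")

With real spectra the peak positions would first be extracted by a peak-finding step, and a full line list would be used; the matching logic itself stays this simple.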
For 100 µJ, the plume is bright, with counts per pixel increasing initially with time and decreasing soon after, and is eventually diluted into the surroundings after it has reached a certain distance from the target surface (namely, the plume length), which is ∼4.3 mm in ∼110 ns. Also, there is only one peak/component observed for 100 µJ, with negligible expansion for the measurement parameters. The hottest spot, the point in the plume at which maximum emission occurs, is observed to be at the same spatial point, ∼1.5 mm above the target surface, from t_d = 40 ns until the plume gets diluted. On increasing the energy, the fast and slow components show a significant difference at later times. For all energies ≥ 200 µJ, the emission from slow species is larger for all measured t_d. Moreover, the emission count increases with energy for both fast and slow components, which is evident in the OES measurements (see Fig. 2). Also, an increase in the plume length has been observed when plumes corresponding to 100 µJ are compared with 200 µJ, as shown in Figs. 3a and 3b. The emission features for DP schemes for various energy combinations, namely a) 100 µJ-500 µJ, b) 200 µJ-400 µJ, c) 300 µJ-300 µJ, d) 400 µJ-200 µJ and e) 500 µJ-100 µJ, are shown in Fig. 4. The inter-pulse delay is fixed at ∼800 ps, since improvements in ion density have been reported for delays < 1 ns 17. The plume in the DP scheme is found to be entirely different from SP, not only in terms of the total emission counts but also in the geometrical shape of the expanding plume. In the case of SP, the plasma is more directional with relatively low lateral expansion, as indicated in Fig. 3, showing a trend similar to a femtosecond (fs) plasma 31. For the DP scheme, in contrast, the lateral expansion of the plume is evident, resembling a ns plasma expansion (as shown in Fig. 4 a-e) 31, though the process leading to the shape of the plasma plume may be different. The splitting of the plume into two components, the fast and slow components, is evident, but occurs a little later, i.e. at larger t_d, in all DP cases compared to all the SP schemes. The current observation implies that the fast emitting species are mostly either highly charged species or the result of recombination of the fastest species traveling at very high velocities. This is corroborated by the measured spectra from highly charged species at larger distances (see Fig. 2). Thus, it can be inferred that the fast components consist mostly of species of non-thermal origin whereas the slow component in general exhibits a thermal nature 32. Further, the emission counts are higher for cases a, b and c of Fig. 4 compared to all other DP combinations and SP schemes. The variation in the geometric shape of the plume for different irradiation schemes is also different. While the shape of the plume changes from cylindrical to more spherical with an increase in irradiation energy for SP, the DP scheme displays a better spherical shape for cases a, b and c, with an aspect ratio closer to 1, i.e. ∼1.4, than all the other cases presented here (see Figure 1 in the supplementary file for more details). From these observations, it is clear that the plume dynamics show distinctive features for SP and DP, more precisely in the geometric shape of the plume and the integrated emission intensities from the plasma.
The emission intensity profile is also estimated by averaging the signal over several pixel rows of the recorded images to obtain a complementary dataset that yields information similar to optical time-of-flight (OTOF) measurements, except that the current measurements provide a convolution of the time-of-flights of all species that move within the plume; a representative figure is shown in Fig. 5 (more detailed information is available in Figures 2 and 3 of the supplementary file). While the emission intensities of the fast species are higher compared to their slow counterparts in all DP cases, the emission intensity of the slow components is larger in the SP schemes. The velocities of the fast and slow components of the plasma calculated for the SP and DP cases at later times can be found in Tables 1 and 2, respectively. Figure 5. The emission intensity profiles are estimated by averaging the signal over several pixel rows of the recorded images to give a complementary dataset that yields information similar to optical time-of-flight (OTOF) measurements, for SP at 100 µJ and 500 µJ and DP at 500 µJ-100 µJ and 100 µJ-500 µJ. The emission counts are maximum for the 100 µJ-500 µJ combination. It is clear from Table 1 that the fast components in most cases are moving with nearly the same average velocity irrespective of the irradiation energy, except for 100 µJ (where no evidence of fast species is detected). However, the velocities of the slow component increase with an increase in energy for SP. Also, from the plume dimensions, the plume length is found to be the same for SP at 600 µJ and for all DP cases, although there is a definite change in plume width for each case. To understand this further, SP at 100 µJ (Fig. 3a) is compared with DP at 100 µJ-500 µJ (Fig. 4a). The plume is found to be concentrated along the plume axis at 100 µJ for all t_w, and such a plume would expand to ∼80 µm from the target surface (given the spot size of ∼85 µm), assuming a maximum initial velocity of ∼100 km/s, in 800 ps. The situation is similar when the first pulse irradiates the target in DP case a, producing a plasma plume. As soon as the second pulse arrives, the nature of expansion of the plume changes dramatically; a schematic is illustrated in Fig. 6. Fig. 6 (I) and (II) represent the irradiation of the target before and after the first pulse reaches the surface, respectively. A plasma is formed by the first pulse, and the delayed pulse may interact with this plasma via inverse bremsstrahlung (a process that depends on the number density and temperature of the plume). The cross section of this process would be low due to the lower number density in the present case, leading to ineffective shielding of the second pulse from reaching the target surface. This would invoke processes such as: 1) laser-plasma (Fig. 6 (III)), 2) laser-target (Fig. 6 (IV)) and 3) plasma-plasma interactions (Fig. 6 (V)). The second plasma, generated from the heated pit formed by the first pulse, would then experience the pressure of the dynamic plasma produced by the first laser pulse. This could be compared to a situation similar to the expansion of a plasma plume at moderate pressures of ∼1 Torr, except that the pressure exerted by the pre-formed plasma plume is dynamic and can have a multitude of interactions as compared to a static ambient pressure.
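The velocities quoted in Tables 1 and 2 follow from tracking how far the emission front (or the fast and slow intensity maxima) has moved at each gate delay. The sketch below shows that estimate with made-up front positions standing in for values read off the ICCD frames, together with the simple d = v t check for the 800 ps inter-pulse window mentioned above.

    # Estimate a plume-front velocity from front position vs. gate delay.
    # Positions below are illustrative stand-ins for values read from ICCD frames.
    delays_ns = [40.0, 60.0, 80.0, 100.0]      # gate delays t_d (ns)
    front_mm = [1.6, 2.4, 3.2, 4.0]            # hypothetical plume-front positions (mm)

    n = len(delays_ns)
    mean_t = sum(delays_ns) / n
    mean_d = sum(front_mm) / n
    slope_mm_per_ns = (
        sum((t - mean_t) * (d - mean_d) for t, d in zip(delays_ns, front_mm))
        / sum((t - mean_t) ** 2 for t in delays_ns)
    )
    velocity_km_s = slope_mm_per_ns * 1000.0   # 1 mm/ns = 1000 km/s
    print(f"Front velocity: {velocity_km_s:.0f} km/s")

    # Cross-check of the expansion quoted above: at ~100 km/s the front travels
    # d = v * t = 1e5 m/s * 800e-12 s = 8e-5 m, i.e. about 80 micrometres, in 800 ps.
    print(f"Expansion in 800 ps at 100 km/s: {1e5 * 800e-12 * 1e6:.0f} um")

With real data the fast and slow maxima would each be tracked separately, giving the two velocity entries per energy reported in the tables.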
Hence, the two plasmas thus produced can interact with each other, producing shock waves and other possible collisional interactions, which make the expansion relatively complex. This results in a plasma plume with larger radial expansion than in the SP cases due to enhanced collisions and energy exchange processes. The ablation from the target surface for cases a to c in DP is larger due to the increased ablation efficiency from a heated molten surface. Therefore it can be concluded that the lateral expansion is prominent for irradiation schemes in which the first pulse energy is lower than or equal to that of the second pulse. In general, due to the interaction of neutral and charged species in the plasma produced by the first pulse with fast electrons produced by the second plasma, it is very likely that highly charged species are produced in the DP case, similar to the situation at larger irradiation energies in the SP scheme. Figure 6. Schematic for the generation of plasma in the DP geometry. Here the first pulse produces a plasma (II) and the second pulse, delayed by 800 ps, then interacts with the plasma produced by the first laser pulse via mechanisms such as (III) laser-plasma, (IV) laser-target and (V) plasma-plasma interactions. In addition to the above DP scheme, irradiation at two spatial points (DP2) separated by 700 µm on the target was also performed and showed distinctive features compared to SP and the above mentioned collinear DP cases (called DP1 hereafter). Two pulses of equal energy (400 µJ/pulse) with zero delay between pulses are used for DP2. Though an adiabatic expansion of the plasma with the existence of two peaks (fast and slow) is evident in all cases, the plume expansion is found to show considerable variation in its shape for DP2 due to the interaction of the two plasmas formed upon irradiation at two different spatial points on the target surface, forming colliding plasmas. The plume expansion is found to be confined along the plume axis, leading to a more cylindrical plasma plume that resembles the expansion features of a fs plasma plume (see Figure 4 in the supplementary file for more details). The slow peaks are broadened spatio-temporally, leaving the possibility of nanoparticle generation at a later stage. A more detailed study would be necessary in this regard, as this method can be used to create more uniform thin films since the plume is more cylindrical compared to a fs plasma. Discussion A picosecond laser plasma has been generated and characterized to investigate the dynamics of the plume so as to make it suitable for various applications in which the plume shape as well as the abundance of certain species are crucial. From these measurements it is demonstrated that the features of the plume are governed and modified by the double pulse geometry, such that the homogeneity of the plume can be enhanced, i.e. for t_d ≤ 60 ns from 100 µJ-500 µJ to 300 µJ-300 µJ, at least in the earlier stages of expansion before plume splitting due to fast and slow species occurs. A detailed analysis of laser-plasma, laser-target and plasma-plasma interactions for the double pulse geometry is performed to understand the structural modifications of the plume. Optical emission spectroscopy facilitated the understanding of the abundance of species both spatially and temporally for given energies, while imaging with an ICCD was used to investigate the hydrodynamics of the plasma plume.
From these measurements it is clear that the structure of the plasma plume is modified by employing a double pulse geometry, which has been analyzed carefully to understand the lateral expansion of the plume. While the DP2 scheme is predicted to favor the generation of nanoparticles at a later stage, the plume in the DP1 scheme exhibits symmetry in expansion, showing a homogeneity useful for applications such as high-order harmonic generation that demand phase matching. Methods A plasma is generated using 60 ps laser pulses at 800 nm from a mode-locked Ti:Sapphire laser (Odin II, Quantronix) focused onto a solid Al target, which is kept in a nitrogen ambient at a pressure of ∼10⁻⁶ Torr. Optical emission spectra and plasma plume dynamics have been recorded using a high resolution spectrometer equipped with a Gen II ICCD (PI-MAX 1024f, Princeton Instruments). Imaging of the plume is carried out by repeating the experiment for different gate delays/time delays and for a given gate width/integration time. The experiment is also repeated for two different geometries, single pulse and double pulse, to explore and compare the features of an expanding plasma plume.
Jumping into one of the fiercest tech industry battles in recent memory, the U.S. Supreme Court on Monday agreed to hear the five-year-old patent fight between Apple and Samsung — raising the stakes in a case that has centered on claims the South Korean tech giant copied the iPhone to build its own smartphone empire. In a brief order, the justices announced they will consider a key part of Samsung’s appeal of its loss to Apple, a 2012 verdict that eventually forced the maker of Galaxy smartphones and other popular devices to pay hundreds of millions of dollars in damages. The Supreme Court is likely to hear arguments in the case during the next term that starts in October. The justices, in deciding to take the case at last Friday’s closed door conference, agreed to hear one of two questions posed by Samsung’s appeal: how damages should be assessed for patent violations when the technology involved is just one of many ingredients that go into a device like an iPhone. The Supreme Court indicated it would not be reviewing a second issue raised by Samsung involving the law surrounding design patents, which were in play in the trial because of findings that the iPhone’s basic look and feel had been duplicated. For Apple, the high court’s decision solidifies a finding that Samsung copied some iPhone technology, but threatens to further stagger the Cupertino company’s relentless legal campaign to prove Samsung mimicked Apple’s smartphones and tablets. The appeals courts have repeatedly stripped away many of Apple’s early triumphs in the federal court fight, and losing in the Supreme Court on the damages issue would essentially force Apple to send money back to Samsung. “Apple’s win (in the first trial) may be secure, but the amount of damages it is ultimately awarded is likely to be significantly reduced,” said Brian Love, a Santa Clara University law professor. Tech giants such as Google, Facebook and Hewlett Packard Enterprise had urged the Supreme Court to take up Samsung’s appeal of its patent loss to Apple over the copying of iPhone technology, warning that the outcome against Samsung “will lead to absurd results and have a devastating impact on companies” because of the implications of how patent law is applied to technology products such as smartphones. Legal scholars also had pushed the Supreme Court to tackle the damages question. The Washington, D.C.-based U.S. Federal Circuit Court of Appeals last year rejected Samsung’s arguments in a ruling largely backing Apple — leaving the Supreme Court as the only legal option left for Samsung to try to overturn the adverse jury verdict. Samsung maintains that a three-judge Federal Circuit panel erred when it left intact a jury’s 2012 verdict that the South Korean company’s smartphones and tablets infringed Apple’s design patents. That part of the verdict — which has been pared from an original $1 billion judgment — accounts for the $548 million in damages Samsung still had to pay Apple from their first trial. U.S. District Judge Lucy Koh rebuffed Samsung’s effort to stall paying Apple until the Supreme Court appeal was resolved, forcing Samsung to provide the money to Apple in December. Koh is scheduled to hold yet another retrial on remaining damages issues on March 28, with perhaps hundreds of millions of dollars still at stake. But that trial may now be scratched with the Supreme Court set to address legal uncertainty surrounding how to assess damages in the evolving smartphone industry. 
Koh on Monday indicated she will quickly consider Samsung’s request to put that trial on hold while the Supreme Court hears the broader appeal. In the Supreme Court appeal, Samsung suggested it would be repaid the money it already has sent to Apple if it prevails on the damages question. That question could guide the tech industry on the value of design patents for products such as the iPhone, which Samsung maintains involved such basic features as rounded corners and colorful icons that they were irrelevant to consumer choice and profits. Samsung warned the Supreme Court that companies such as Apple should not be entitled to a “windfall” for basic patent designs, but Apple has countered that its iPhone features were vital to establishing its pre-eminence in the marketplace. Other tech leaders have largely sided with Samsung on the issue thus far, expressing concern about being unfairly punished by being forced to relinquish profits for one design in products that may have dozens of patented ingredients. Samsung appealed a San Jose jury’s August 2012 verdict that it violated Apple’s patent or trademark rights in 23 products, such as the Galaxy S2 smartphone, as well as originally about $930 million in damages awarded to the iPhone maker. The case, known as “Apple I,” was the first of two trials between the feuding tech titans. Another federal jury later found Samsung copied iPhone technology in more recent products but awarded $120 million in damages, a fraction of what Apple sought. A federal appeals court in February overturned the verdict in that second trial, handing Samsung its most important legal win to date. Apple is considering whether to appeal that ruling.
Having Our Say: The Delany Sisters' First 100 Years New York Times article On September 22, 1991, an article written by Amy Hill Hearth ("Two 'Maiden Ladies' With Century-Old Stories to Tell") was published in The New York Times, introducing the then-unknown Delany sisters to a large audience. Sarah "Sadie" L. Delany and A. Elizabeth "Bessie" Delany were two civil rights pioneers who were born to a former slave, and had many stories to share about their lives and experiences. Among those who read Hearth's story was a New York book publisher who asked her to write a full-length book on the sisters. Hearth and the sisters agreed to collaborate, working closely for two years to create the book. Book Having Our Say: The Delany Sisters' First 100 Years was published by Kodansha America in New York in September 1993, and was on the New York Times bestseller lists for 105 weeks. The book documented the oral history of the Delany sisters and was compiled by the same New York Times reporter who wrote the original article, Amy Hill Hearth. Book synopsis Having Our Say presents an historically accurate, nonfiction account of the trials and tribulations the Delany sisters faced during their century of life. The book offers positive images and details of African-American (they preferred "colored") life in the 1890s. The book chronicles the story of their well-lived lives with wit and wisdom. It begins with an idyllic childhood in North Carolina. The sisters had a unique and privileged upbringing. They were raised on the campus of St. Augustine's School (now St. Augustine's University) in Raleigh, North Carolina. Their father, the Rev. Henry B. Delany, was the Vice-Principal of the school. He was born into slavery, but eventually became the first African-American Episcopal Bishop elected in the United States. Their mother, Nanny Logan, was a teacher and administrator. Her parents were a free African American woman and a white Virginia farmer. The legislation of Jim Crow laws eventually prompted the Delany sisters to move to Harlem. Sadie arrived in New York in 1916, while Bessie relocated two years later. The sisters were successful career women and civil rights pioneers in their own right. They survived encounters with racism and sexism in different ways, with the support of each other and their family. The sisters set their sights high, with both earning advanced college degrees at a time when this was very rare for women, especially women of color. Both were successful in their professions from the 1920s until retirement. Sadie attended Pratt Institute, then transferred to Columbia University, where she earned a bachelor's degree in education in 1920, followed by a master's in education in 1925. She was the first African-American permitted to teach Domestic Science at the high school level in the New York City public school system. She retired in 1960. Bessie was a 1923 graduate of Columbia University's School of Dental and Oral Surgery. She was the second black woman licensed to practice dentistry in the state of New York. She retired in 1956. The Delany sisters lived together in Harlem, New York for many years, eventually moved to the Bronx while it was still rural, and finally moved to Mt. Vernon, New York, where they bought a house with a garden on a quiet street. Neither ever married, and the two lived together all of their lives. The reason we've lived this long is because we never married. We never had husbands to worry us to death!
—Bessie Delany in Having Our Say Legacy In all editions combined, the book has sold more than five million copies, according to Hearth. The book went on to inspire several adaptations, including a Broadway play in 1995 and a CBS television film in 1999. The book has been translated into six languages. In 1995, the book was recognized as one of the "Best Books of 1994" by the American Library Association. The book was also presented with the Christopher Award for Literature and an American Booksellers Book of the Year (ABBY) Honor Award. Follow-up books After the publication of the book, the Delany sisters received numerous letters from people seeking advice, life direction, and encouragement. Raised with Southern charm, the sisters believed they should answer each and every letter. Etiquette gave way to practicality and the sisters, again with Hearth, wrote The Delany Sisters' Book of Everyday Wisdom. Published in 1994, the book offered recipes, as well as photos of the sisters doing yoga. After Bessie's death in 1995, Sadie and Hearth wrote a third book called On My Own At 107: Reflections on Life Without Bessie. The book follows Sadie through the first year after Bessie's death. It includes watercolor illustrations, by Brian M. Kotzky, of Bessie's favorite flowers from her garden. The book was a national bestseller. In 2003, Hearth wrote The Delany Sisters Reach High, an illustrated children's biography of the Delany sisters, focusing on their childhood. Published by Abingdon Press, the book is illustrated by the award-winning artist Tim Ladwig. It is part of the educational curriculum targeted at first- through third-grade reading classes. Broadway play In 1995, Emily Mann, artistic director of the McCarter Theatre in Princeton, New Jersey, adapted Hearth's book for the stage. The play adaptation, Having Our Say, debuted on April 6, 1995, at the Booth Theatre on Broadway in New York City, and later toured the United States. Delany novella In 1995 the Delany sisters' nephew, novelist and critic Samuel R. Delany, published a novella, "Atlantis: Model 1924" (in Atlantis: Three Tales), which includes characters based on the two sisters and lightly fictionalized family stories about them not included in Having Our Say. CBS telefilm In 1999, Emily Mann, who had previously adapted the book for the Broadway stage, wrote a screenplay for CBS television. The executive producers included Camille O. Cosby, Jeffrey S. Grant, and Judith R. James. The telefilm starred Ruby Dee as Bessie Delany, Diahann Carroll as Sadie Delany, and Amy Madigan as Amy Hill Hearth. The film first aired on CBS on April 18, 1999, just three months after Sadie died. In 2000, the film was honored with the Peabody Award for Excellence in Television and the Christopher Award for Outstanding TV and Cable Programming.
The consumer price index (CPI) inflation rate fell to a 10-month low of 3.69 per cent in August as food prices declined at a steeper rate in urban areas compared to July and remained almost flat in rural areas. However, fuel inflation rose in August due to higher international crude prices. Food items play a key role in CPI inflation as they carry over 45 per cent of the weight in the index. The cut in goods and services tax (GST) rates on over 100 items from July 27 also pulled down inflation. Core inflation also declined across the board, to 5.9 per cent in August from 6.3 per cent in July. However, fuel and light inflation rose to 8.47 per cent in August from 7.96 per cent in the previous month. The base effect led to a fall in housing inflation to a nine-month low of 7.59 per cent in August. Going forward, economists said the looming impact of revised minimum support prices (MSPs), the surge in crude oil prices and the sharp weakening of the rupee against the dollar would push up inflation in the second half, a risk that may weigh on the minds of the Reserve Bank of India (RBI) in its policy review next month. “On balance, the scales appear tipped towards a third consecutive rate hike in the October 2018 policy review, along with a change in stance to withdrawal of accommodation, unless crude oil prices and the rupee record an appreciable reversal in the intervening period. However, the decision to hike the repo rate in October 2018 is unlikely to be unanimous,” said Aditi Nayar, principal economist at ICRA. However, Devendra Pant, chief economist at India Ratings, said that robust GDP growth in the first quarter coupled with a declining inflation trajectory should make the RBI optimistic about the economy and hold the policy rate in the forthcoming review, in view of the two back-to-back hikes. He also believed, however, that there may possibly be another rate hike this financial year. Food inflation declined to 0.29 per cent in August from 1.30 per cent in the previous month. Food prices continued to fall in urban areas, where deflation deepened to 1.21 per cent in August from 0.36 per cent in July. In rural areas, food inflation also declined substantially, to 1.22 per cent from 2.18 per cent over the same period. Food prices represent another problem in the Indian economy — rural distress. The government’s decision on procurement prices may ease rural distress a bit. Prices of vegetables, pulses and sugar remained in deflation in August.
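Because food and beverages carry over 45 per cent of the CPI basket, even a modest swing in food prices moves the headline number appreciably. The stylised weighting arithmetic below illustrates the mechanics; the weights are approximate and the component inflation rates are hypothetical, not the official data behind the August print.

    # Stylised contribution-to-headline-CPI calculation (illustrative numbers only).
    components = {
        "food_and_beverages": (0.46, 0.3),   # (basket weight, year-on-year inflation %)
        "fuel_and_light":     (0.07, 8.5),
        "housing":            (0.10, 7.6),
        "everything_else":    (0.37, 5.0),
    }

    headline = sum(weight * rate for weight, rate in components.values())
    for name, (weight, rate) in components.items():
        print(f"{name:<20} contributes {weight * rate:.2f} percentage points")
    print(f"Approximate headline CPI inflation: {headline:.2f}%")

The same arithmetic explains why near-flat food prices can drag the headline rate well below core inflation, as happened in August.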
Spectral Statistics Beyond Random Matrix Theory Using a nonperturbative approach we examine the large frequency asymptotics of the two-point level density correlator in weakly disordered metallic grains. This allows us to study the behavior of the two-level structure factor close to the Heisenberg time. We find that the singularities (present for random matrix ensembles) are washed out in a grain with a finite conductance. The results are nonuniversal (they depend on the shape of the grain and on its conductance), though they suggest a generalization for any system with finite Heisenberg time. A great variety of physical systems are known to exhibit quantum chaos. Common examples are atomic nuclei, Rydberg atoms in a strong magnetic field, electrons in disordered metals, etc. Chaotic behavior manifests itself in the energy level statistics. It was a remarkable discovery of Wigner and Dyson that these statistics in a particular system can be approximated by those of an ensemble of random matrices (RM). Here we consider deviations from RM theory, taking an ensemble of weakly disordered metallic grains with a finite conductance g as an example. The results seem to be extendible to general chaotic systems. There are two characteristic energy scales associated with a particular system: a classical one, E_c, and a quantum one. The quantum energy scale is the mean level spacing ∆. In a chaotic billiard, for example, E_c is set by the frequency of the shortest periodic orbit. Well developed chaotic behavior can take place only if E_c ≫ ∆. In a disordered metallic grain the classical energy is the Thouless energy E_c = D/L², where D is the diffusion constant and L is the system size. For a weakly disordered grain the two scales are separated by the dimensionless conductance g = E_c/∆ ≫ 1. For frequencies ω ≪ E_c the behavior of the system becomes universal (independent of the particular parameters of the system). In this regime, in the zeroth approximation, the level statistics depend only on the symmetry of the system and are described by one of the RM ensembles: unitary, orthogonal or symplectic. One of the conventional statistical spectral characteristics is the two-point level density correlator K(ω, x), where Ĥ is the Hamiltonian of the system, V̂ is a perturbation, x is the dimensionless perturbation strength, and ν(ε, x) = Tr δ(ε − Ĥ − xV̂) is the x-dependent density of states at energy ε. It is convenient to introduce the dimensionless frequency s = ω/∆ and the dimensionless two-level correlator R(s, x) = ∆² K(ω, x). Dyson determined R(s, x = 0) for RM. For example, R(s, 0) in the unitary case equals δ(s) − sin²(πs)/(πs)² and is plotted in the inset of Fig. 1. Perhaps the most striking signature of the Wigner-Dyson statistics is the rigidity of the energy spectrum. Among the major consequences of this phenomenon are: a) the probability to find two levels separated by ω ≪ ∆ vanishes as ω → 0; b) the level number variance in an energy strip of width N∆ is proportional to ln N rather than N; and c) the oscillations in the correlator R(s, 0) decay only algebraically. In the two-level structure factor S(τ, x) = ∫ ds exp(iτs) R(s, x), with the integral running over all s, the reduced fluctuations of the level number manifest themselves in the vanishing of S(τ, 0) at τ = 0, and the algebraic decay of the oscillations in R(s, 0) leads to a singularity in S(τ, 0) at the Heisenberg time τ = 2π. In the unitary case, for example, S(τ, 0) grows linearly in |τ| up to the Heisenberg time and is constant beyond it. At τ ≪ 2π this Dyson result was obtained by Berry for a generic chaotic system by use of a semiclassical approximation.
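For orientation, the standard x = 0 random-matrix expressions behind these statements can be written out explicitly. The normalization below (s in units of the mean level spacing, τ conjugate to s so that the Heisenberg time sits at τ = 2π) is inferred from the surrounding text; these are the textbook unitary-ensemble results, not a reproduction of the paper's own equations.

    R(s, 0) = δ(s) − sin²(πs)/(πs)²                          (unitary ensemble)
    S(τ, 0) = ∫ ds e^{iτs} R(s, 0) = |τ|/(2π)   for |τ| ≤ 2π
    S(τ, 0) = 1                                 for |τ| > 2π

In this form the two features discussed in the text are explicit: S(τ, 0) vanishes linearly at τ = 0, and its first derivative jumps at the Heisenberg time τ = 2π, which is exactly the singularity that a finite conductance is shown to smooth out.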
To the best of our knowledge nobody has succeeded in analyzing the behavior of S(τ, 0) around τ = 2π using this formalism. Wigner-Dyson statistics become exact in the limit g = E_c/∆ → ∞. We consider corrections to these statistics for finite g. One of the better understood systems in this respect is a weakly disordered metallic grain. For frequencies much smaller than E_c the statistics are close to the universal ones, the corrections being small as (s/g)². At s ≫ 1 the monotonic part of R(s, x) can be obtained perturbatively as R_p(s, x), expressed through the eigenvalues (in units of ∆) of the diffusion equation in the grain, with a coefficient equal to 2 for the unitary ensemble and 1 for the orthogonal and symplectic ensembles. At this point we can define the conductance g in terms of λ_1, the smallest nonzero eigenvalue. Perturbation theory allows one to determine S(τ, 0) at small times τ ≪ 1. Since the oscillatory part of R(s, x) is non-analytic in 1/s it cannot be obtained perturbatively. In this Letter we obtain the leading s ≫ 1 asymptotics of R(s, x) retaining the oscillatory terms, and monitor how the singularity in S(τ, 0) at the Heisenberg time is modified by the finite conductance g. We make use of a nonperturbative approach that is valid for an arbitrary relation between s and g. The oscillatory part R_osc(s, x) ≡ R(s, x) − R_p(s, x) for the unitary (u), orthogonal (o) and symplectic (s) cases is given by Eqs. (5a)-(5c), where y² = s² + x⁴ and P(s, x) is the spectral determinant of the diffusion operator. Note that R_p(s, x) is expressed through the Green function of this operator. Thus, regardless of the spectrum, R_p(s, x) and R_osc(s, x) are related. It follows that R_osc(s, x), together with P(s, x), decays exponentially at s ≫ g. As a result, the singularity in S(τ, 0) at the Heisenberg time is washed out: S(τ, 0) becomes analytic around τ = 2π. The scale of the smoothening of the singularity is 1/E_c (see Fig. 1). At 1 ≪ s ≪ g the sum of the perturbative and oscillatory contributions gives the leading high-frequency asymptotics of the universal result; for s ≫ g it coincides with the perturbative result R_p(s, x). In a closed d-dimensional cubic sample (diffusion equation with Dirichlet boundary conditions) the eigenvalues are λ_n = gπ²n², where n = (n_1, ..., n_d) and the n_i are non-negative integers. For s ≫ g and d < 4 we obtain the asymptotics P(s, 0) → exp{−(s/g)^{d/2}} up to a numerical factor in the exponent. This result was shown previously to be valid even for s < 1. Thus, it is natural to assume that for the unitary ensemble the sum of Eq. (5a) and the perturbative contribution gives the correct g → ∞ asymptotics at arbitrary frequency. This is related to the absence of higher order corrections to the leading term of the perturbation theory, S(τ, 0) ∝ τ, in the unitary case. Now we sketch the derivation of our results. Consider a quantum particle moving in a random potential V(r). The perturbation acting on the system is a change in the potential, δV(r). Both V(r) and δV(r) are taken to be white-noise random potentials; here angular brackets denote ensemble averaging and ν is the density of states per unit volume. The dimensionless perturbation strength x² is assumed to be of order unity. We use the supersymmetric nonlinear σ-model introduced by Efetov, and follow his notation throughout. One can show that for the system under consideration the σ-model expression for K(ω, x) is given by Eqs. (9a) and (9b). The 8 × 8 supermatrix Q(r) obeys the nonlinear constraint Q(r)² = 1 and takes its values on a certain symmetric space H = G/K, where G and K are groups. The large frequency asymptotics of K(ω, x) can be obtained from Eq. (9a) by use of the stationary phase method.
The conventional perturbation theory corresponds to integrating over the small fluctuations of Q around, where the matrix P describes these small fluctuations. Q = is not the only stationary point on H. This fact to the best of our knowledge was not appreciated in the literature. The existence of other stationary points makes the basis for our main results. It is possible to parameterize fluctuations around a point Q 0 in the form Q = Q 0 (1 + iP 0 )(1−iP 0 ) −1. Expanding the Free Energy F J in Eq. (9b) in P 0 we would obtain the stationarity condition ∂F J /∂P 0 = 0. This route however is inconvenient because the parametrization of P 0 will depend on Q 0. Instead we perform a global coordinate transformation on H that maps Q 0 to, Q 0 → T −1 0 Q 0 T 0 =. We note that the matrices and −k belong to H, and the corresponding terms in Eq. (9b) can be viewed as symmetry breaking sources. This transformation changes the sources, but allows us to keep the parametrization of Eq. and preserves the invariant measure. Introducing the notation Q = T −1 0 T 0 and Q k = T −1 0 kT 0 we rewrite Eq. as The stationarity condition ∂F (Q )/∂P | P =0 = 0 implies that all the elements of Q in the AR and RA blocks should vanish (this can be seen from Eq. ). Here we discuss in detail only the calculation for the unitary ensemble. The calculation for the other cases proceeds analogously, and we just point out the important differences from the unitary case. Consider now the unitary case. The 4 4 supermatrices B andB in Eq. are given by where a, b and their conjugates are ordinary variables, and 1, 2 and their conjugates are grassmann variables. The only matrix besides that satisfies the stationarity condition is Q = = −k. In this case Q k = −. All other matrices from H contain nonzero elements in the AR and RA blocks. Both stationary points contribute substantially to K(, x). Consider the contribution of Q = to K(, x) first. We substitute Q = −k and Q k = − into Eq. and expand both the Free Energy F (Q ) and the pre-exponent to the second order in B andB. Expanding B( r) in the eigenfunctions of the diffusion operator ( r) as B( r) = ( r)B, and introducing E = is + + x 2 we arrive at the following expression for the dimensionless density-density correlator: We have to keep the perturbation strength x 2 finite to avoid the divergence of the integral over a 0 caused by the presence of the infinitesimal imaginary part in. For the non parametric case we should take the x 2 → 0 limit after the integral in Eq. is evaluated. Since the Free Energy in Eq. contains no Grassmann variables in the zero mode they have to come from the pre-exponent. Therefore out of the whole square of the sum in the pre-exponent only the terms containing all four zero mode Grassmann variables contribute. In these terms the prefactor does not contain any variables from non-zero modes. Thus, the evaluation of the Gaussian integrals over non-zero modes yields the superdeterminant of the quadratic form in the exponent. Supersymmetry around is broken by s, therefore this superdeterminant differs from unity and is given by P (s, x) of Eq.. The correctly ordered integration measure for Grassmann variables is d 1 d * 1 d * 2 d 2. Evaluating the integral we arrive at Eq. (5a). In quasi-1D for closed boundary conditions and x = 0 the spectral determinant P (s, 0) can be evaluated exactly, and from Eq. (5a) we obtain For Q = the same procedure as used above leads to Eq., which coincides with the result of Ref.. 
The behavior of S(, 0) at = 0 and = 2 is associated respectively with R p (s, 0) (Eq. ) and R u osc (s, 0) (Eq. (5a)). In other words the singularity at the Heisenberg time is determined by the contribution to R(s, 0) from. It is clear that the cusp in S(, 0) at = 2 will be rounded off because R u osc (s, 0) decays exponentially at large s. The scale of the smoothening is of order 1/g. The Fourier transform of Eq. ( see Fig. 1 ) is Even though S u 1D (2 + t, 0) appears to be a function of |t|, it is regular at t = 0. We can also estimate S u (2, 0) in any dimension. It is proportional to 1/g of Eq. and is given by Consider now T-invariant systems. For the orthogonal ensemble there are still only two stationary points on H: and = −k. To determine the contribution of the-point we use the formula Eq. with Q = and Q k = − and Efetov's parametrization for the perturbation theory. The calculations are analogous to those for the unitary ensemble and lead to Eq. (5b). The contribution of Q = gives Eq.. At = 2 the third derivative of S(, 0) for the orthogonal ensemble has a jump. This singularity also disappears at finite g. In the symplectic case there are three types of stationary points which correspond to singularities in the structure factor S(, 0) at = 0,, 2. The = 2 singularity corresponds to Q = = −k, and its contribution to R(s, x), given by the second term in Eq. (5c), is exactly the same as R o osc (s, x). The stationary point Q = corresponds to the = 0 singularity in S(, 0) and leads to Eq.. The = singularity corresponds to a degenerate manifold of matrices Q on H Q = diag( m, 1 1 2, − m, −11 2 ), Q k = −kQ, where 1 1 2 is a 2 2 unit matrix, m = m x x + m y y, m 2 = 1 and x,y are Pauli matrices in the time-reversal block. The calculation proceeds as before and leads to the first term in Eq. (5c). In quasi-1D we can obtain the leading contribution to the structure factor S(, 0) The result is plotted in Fig. 2. In all dimensions the logarithmic divergence in the zero mode result is now cut off by finite g, and S s (, 0) ∝ ln g. In conclusion we mention several points about our results. 1) Equation describes the deviation of the level statistics of a weakly disordered chaotic grain from the universal ones. This deviation is controlled by the diffusion operator. This operator is purely classical. It seems plausible that the nonuniversal part of spectral statistics of any chaotic system can be expressed through a spectral determinant of some classical system-specific operator. If so, the relation Eq. should be universally correct! 2) The formalism used here should be applicable even to the systems weakly coupled to the outside world (say through tunnel contacts). As long as the level broadening ( = ℜ + i) is smaller than ∆x 2 the integration over the zero mode variables in Eq. is convergent. The integral over the other modes is always convergent provided < E c. Thus, the presence of a perturbation can effectively "close" a weakly coupled system. Under these conditions Eq. remains valid after the substitution cos(2s) → exp(−2/∆) cos(2s) and x 2 → x 2 − /∆. 3) The classification of physical systems into the three universality classes (unitary, orthogonal and symplectic) is, of course, an oversimplification. In practice there is always a time scale which determines the crossover from one ensemble to another. For example if a system is subjected to a magnetic field for very short times it will still effectively remain orthogonal. On the other hand, the long time behavior will be unitary. 
The characteristic time is set by the strength of the magnetic field. For a disordered metallic grain in a magnetic field this characteristic time is given by l 2 H /D. For frequencies larger than D/l 2 H the system effectively becomes orthogonal. This implies that even if we neglect the spatially nonuniform fluctuations of the Q-matrix the cusp in S(, 0) at = 2 will be washed out on the scale of ∆l 2 H /D ( although there will still remain a jump in the third derivative of S(, 0) ). For the system to behave as unitary for frequencies of order E c the magnetic length l H has to be shorter than the size of the system. Spin-orbit interaction that causes the orthogonal-to-symplectic crossover can be considered in a similar way. 4) The rounding off of the singularity in S(2, 0) is also present in the random matrix model with preferred basis. Note that our results differ from those in Ref. substantially. This means that finite g is not equivalent to finite temperature for the corresponding Calogero-Sutherland model. We are grateful to D. E. Khmel'nitskii, B. D. Simons and N. Taniguchi for numerous discussions throughout the course of this work.
Long-term bull Jeremy Siegel, who correctly predicted 2017's historic rally, isn't convinced strong earnings will push stocks back into record territory. The Wharton finance professor sees anemic returns compared with last year. "It's going to be a flat to slightly upward tilting year as good earnings collide with what I think will be higher interest rates both by the Fed and in the Treasury market," he said Monday on CNBC's "Trading Nation." "This market is going to struggle this year." Siegel predicts the Federal Reserve's plan to shrink billions of dollars of Treasury holdings will put pressure on the market. It comes as the Fed continues its interest rate-hike policy. According to the latest CNBC Fed Survey, Wall Street is expecting three rate hikes this year. "Three-point-two-five percent on the 10-Year [Treasury] will give stocks a pause in 2018," Siegel said. Current rates hovered around 2.82 percent. His best-case scenario is a stock-market gain of no more than 10 percent. Last year, the Dow surged 25 percent. "These gains that people are talking about — 10 to 15 percent a year this year and maybe next year, I just don't think they're going to be realized," Siegel said. With just under 10 percent of S&P 500 companies reporting first quarter earnings, 71 percent of the reports have come in above estimates. As of Monday's close, actual reported earnings per share are up 33 percent versus first quarter 2017. Despite the encouraging numbers, Siegel believes this year's benefits from the corporate tax cut will wane. "Firms are actually going to lose depreciation deductions in future years. So, it's going to be great in 2018," he said. "2019 — you're going to have to have a growing economy to generate earnings gains. It's not going to be anywhere near as easy as it was this year." But Siegel wants to make one thing clear: He's not a bear. "I'm not predicting a bear market. Valuations are still very attractive for long-term investors. We're selling around 18 times this year's earnings," Siegel said. "I wouldn't sell out there." --CNBC's Juan Aruego contributed to this article.
Asymmetric transformations of acyloxyphenyl ketones by enzyme-metal multicatalysis. A multipathway process comprising several enzyme- and metal-catalyzed reactions has been explored for the asymmetric transformations of acyloxyphenyl ketones to optically active hydroxyphenyl alcohols in their ester forms. The process comprises nine component reactions in three pathways, all of which take place through the catalytic action of only two catalysts, a lipase and a ruthenium complex. The synthetic reactions were carried out on 0.2-0.6 mmol scales for eight different substrates under an atmosphere of hydrogen (1 atm) in toluene at 70 degrees C for 3 days. In most cases, the yields were high (92-96%) and the optical purities were excellent (96-98% ee). This work thus demonstrates that enzyme-metal multicatalysis has great potential as a new methodology for asymmetric transformations.
This historic landing of a spacecraft on a comet on Wednesday turned out to be not one but three landings as the craft hopped across the surface. Because of the failure of a thruster that was to press it against the comet’s surface after touching down, the European Space Agency’s Philae lander, part of the $1.75 billion Rosetta mission, bounded up more than half a mile before falling to the surface of Comet 67P/Churyumov-Gerasimenko again nearly two hours later, more than half a mile away. That is a considerable distance across a comet that is only 2.5 miles wide. Philae then bounced again, less high, and ended up with only two of its three legs on the surface, tipped against a boulder, a wall of rock or perhaps the side of a hole. “We are almost vertical, one foot probably in the open air — open space. I’m sorry, there is no air around,” Jean-Pierre Bibring, the lead lander scientist, said at a news conference on Thursday.
Can Deep Clinical Models Handle Real-World Domain Shifts? The hypothesis that computational models can be reliable enough to be adopted in prognosis and patient care is revolutionizing healthcare. Deep learning, in particular, has been a game changer in building predictive models, thereby leading to community-wide data curation efforts. However, due to the inherent variabilities in population characteristics and biological systems, these models are often biased to the training datasets. This can be limiting when models are deployed in new environments, particularly when there are systematic domain shifts not known a priori. In this paper, we formalize these challenges by emulating a large class of domain shifts that can occur in clinical settings, and argue that evaluating the behavior of predictive models in light of those shifts is an effective way of quantifying the reliability of clinical models. More specifically, we develop an approach for building challenging scenarios, based on analysis of \textit{disease landscapes}, and utilize unsupervised domain adaptation to compensate for the domain shifts. Using the openly available MIMIC-III EHR dataset for phenotyping, we generate a large class of scenarios and evaluate the ability of deep clinical models in those cases. For the first time, our work sheds light into data regimes where deep clinical models can fail to generalize, due to significant changes in the disease landscapes between the source and target landscapes. This study emphasizes the need for sophisticated evaluation mechanisms driven by real-world domain shifts to build effective AI solutions for healthcare. Introduction The role of automation in healthcare and medicine is steered by both the ever-growing need to leverage knowledge from large-scale, heterogeneous information systems, and the hypothesis that computational models can actually be reliable enough to be adopted in prognosis and patient care. Deep learning, in particular, has been a game changer in this context, and recent successes in predictions with both aggregated electronic health records (EHR) as well as raw clinical measurements (e.g. ECG, EEG) have definitely provided strong evidence to support this hypothesis. While designing computational models that can account for the complex biological phenomena and all possible interactions during disease evolution, e.g. cancer modeling (), is not possible yet, building predictive models for patient health is a significant step in the direction of personalized care. This has resulted in communitywide curation of datasets, often comprised of a concise set of predictor variables carefully selected from routine tests and patient meta data. However, a not-so-trivial challenge in making statistical models useful in practice is to suitably prepare and transform patient data from an unseen target domain, in accordance with the custom data used for training. Furthermore, there is a fundamental trade-off in predictive modeling with healthcare data. While the complexity of the space of disease conditions naturally demands the expansion of the number of predictor variables, this often makes it extremely challenging to identify reliable correlations in the data, thus rendering the models highly specialized to the training datasets. In other words, despite the availability of large-sized datasets used for building models, we are still operating in the small data regime, wherein the curated data is not fully representative of what the model might encounter when deployed. 
Furthermore, existing deep learning solutions rarely provide meaningful uncertainties, thus making it difficult to distinguish between noisy correlations picked up by the model and a systematic shift in the data characteristics in a certain operating region. In this paper, we formalize these challenges by emulating a large class of domain shifts that can occur in clinical settings, and argue that evaluating the behavior of predictive models in light of those shifts is an effective way of quantifying the effectiveness of models. Dealing with complex domain shifts is essential for adopting machine learning models in practice, and in the recent years, this problem has been well studied for even cases where the target domain does not have access to labeled data. Referred to as unsupervised domain adaptation, this class of techniques attempts to align the distributions of the source and target domains such that knowledge from labeled source domain data can be maximally utilized to design a model for the target domain. More specifically, adversarial training based approaches, e.g. domain adversarial neural networks (DANN) () and virtual adversarial domain adaptation (VADA) (), have been highly effective in domain shifts in computer vision. In this paper, for the first time, we study the effectiveness of clinical models through unsupervised adaptation to realworld domain shifts. To this end, we use the openly available MIMIC-III EHR dataset () for phenotyping, to construct several scenarios (source-target pairs) that represent real-world domain shifts. For example, one possible scenario can constitute biases in the age distribution of patients in source and target domains. Subsequently, we build deep ResNet models, following their recent success in many clinical problems (), along with unsupervised domain adaptation to build meaningful predictive models for the target domain. Using rigorous empirical studies, we obtain several key inferences. In many domain shifts, a pre-trained model from the source domain generalizes poorly to the target domain. In cases where the domain shift constituted a systematic shift in distribution of the predictor variables, for example train on patients with a combination of disease conditions and generalize to patients who present only a subset of those diseases, unsupervised domain adaptation produces highly effective predictive models. However, we find crucial cases where the domain shift is challenging for the source models to generalize, since the correlations picked up by the model do not apply to a large population of unobserved data, thus evidencing the smalldata learning in clinical settings. In summary, this paper emphasizes the need for domain-shift based model evaluation to build reliable predictive models. Deep Diagnostic Models with Clinical Time-Series This section discusses challenges in building data-driven solutions for healthcare, current art in deep learning based clinical modeling, mainly using EHR data, as well as an overview of evaluation strategies used in practice. Predictive Modeling: Clinical time-series modeling broadly focuses on problems such as anomaly detection, tracking progression of specific diseases or detecting groups of diseases by formulating them as binary classification tasks. Modeling multi-variate clinical time-series is often constrained by inherent data-related challenges such as long range temporal dependencies, missing measurements, and varying sampling rates. 
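For concreteness, both DANN and VADA mentioned above hinge on an adversarial feature-alignment term that is commonly implemented with a gradient-reversal trick; a minimal PyTorch sketch is shown below. The layer sizes, the flat 76-dimensional input, and the single-logit heads are placeholders for illustration and are not the 1-D convolutional architecture used later in this paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam on the way
    back, so the feature extractor learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Placeholder components (dimensions are illustrative, not the paper's model).
feature_extractor = nn.Sequential(nn.Linear(76, 128), nn.ReLU(), nn.Linear(128, 64))
label_head = nn.Linear(64, 1)     # binary phenotype logit
domain_head = nn.Linear(64, 1)    # source-vs-target logit

def dann_loss(x_src, y_src, x_tgt, lam=1.0):
    bce = nn.BCEWithLogitsLoss()
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
    task = bce(label_head(f_src).squeeze(1), y_src)                      # labeled source only
    d_logit = domain_head(grad_reverse(torch.cat([f_src, f_tgt]), lam)).squeeze(1)
    d_label = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))])
    return task + bce(d_logit, d_label)                                  # adversarial alignment
```

VADA augments an objective of this type with a conditional-entropy term on the unlabeled target predictions and a virtual-adversarial smoothness penalty, as described in the Methods section further below.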
Deep learning provides a data-driven approach to exploiting relationships between a large number of input measurements while producing robust diagnostic predictions (). In particular, Recurrent Neural Networks (RNN) have become a de-facto solution for dealing with sequential data in tasks such as intensive care unit (ICU) readmission (). Differentiable Neural Computers (DNC) have been used for treatment recommendations, wherein very long-term dependencies need to be modeled. In addition, prior knowledge from medical ontologies can be utilized to further improve representation learning. Despite the availability of a wide variety of solutions, deep Residual Networks with 1-D convolutions (ResNet-1D) have empirically proven to be a popular choice for modeling sequence data (Bai, Kolter, and Koltun 2018). In this work, all domain adaptation studies are carried out using this architecture. Model Evaluation: Selecting unbiased and meaningful metrics is crucial for the validation of diagnostic models; however, there is no consensus on the optimal metric for all applications. In addition, the problem formulation greatly impacts model evaluation and, in turn, its clinical utility (). However, such aggregate metrics do not reveal how models behave under systematic shifts in the population characteristics, which are more severe in the clinical domain than in most other applications. Consequently, rigorous evaluation of models under constrained scenarios is necessary to understand model generalizability, especially given that it is challenging to obtain representative data in healthcare. Proposed Work: Gaining insights into disease progression and the relationships between different disease conditions is a treatment imperative. The key to ensuring computational models are truly effective in diagnosis is to incorporate the ability to understand dependencies between diseases, which we refer to as the disease landscape. For example, using domain experts to group diseases based on their chances of co-occurring in patients could be a potential approach towards organizing the space of diseases. However, such domain-knowledge-dependent, rule-based systems designed by medical experts fail to provide a scalable and generalizable solution. Interestingly, the disease landscape can vary significantly across different parts of the population, thus rendering pre-trained predictive models ineffective when deployed in a new environment. Hence, we propose to explicitly design challenging domain shifts, driven by changes in disease landscapes across data subsets constructed using real-world constraints, in order to rigorously evaluate the broad applicability of modeling solutions. To the best of our knowledge, this is the first effort to study model performance using the structure of the outcome space. Dataset Description All experiments for the proposed study are carried out using the MIMIC-III dataset (), which is the largest publicly available database of de-identified EHR that includes clinical time-series collected from patients in the ICU. We used version 1.4 of the database, which includes a variety of data types such as diagnostic codes, survival rates, length of stay, and more, all acquired from 46,520 subjects over a period of about ten years. More recently, this dataset has been benchmarked for different predictive tasks, resulting in a cohort of 33,798 unique patients with a total of 42,276 hospital admissions and ICU stays ().
For our study, we focus on the task of acute care phenotyping that involves retrospectively predicting the likely disease conditions for each patient given all their ICU stay information. Preparation of such large amounts of diverse content involves organizing each patient's data into episodes containing both time-series events as well as episode-level outcomes like diagnosis, mortality, age etc. Here, the timeseries data includes a total of 17 measurements such as capillary refill rate, systolic, diastolic and mean blood pressure, glucose, heart rate, oxygen saturation, temperature, height, weight, pH, and Glascow Coma Scale (GCS) parameters like eye opening, motor and verbal response. Phenotyping is typically a multi-class problem as each patient is diagnosed with several disease conditions. However, in this work we design a number scenarios for representing real-world domain shifts by posing binary classification problems to detect the presence or absence of subsets of related conditions from the disease landscape. Each scenario is comprised of a pair of source and target domains that are chosen based on a scenario-specific constraint (e.g. gender distribution). The dataset contains 25 disease categories, 12 of which are critical such as respiratory/renal failure, 8 are chronic conditions such as diabetes, atherosclerosis, with the remaining 5 being 'mixed' conditions such as liver infections and complications during surgery. We construct disease landscapes on data subsets from scenario-specific constraints, and accordingly build the source and target domains. The list of domain-shifts considered and the corre-sponding dataset sizes are shown in Table 1 Scenario Design Building machine learning solutions targeted to the healthcare domain presents a huge opportunity towards digitizing medicine. Accordingly, the performance of novel deep learning based clinical models is becoming increasingly effective in disease diagnosis and treatment planning. Though these models are heavily data-driven, rarely do we study how these solutions actually adapt to domain-shifts in terms of population-specific biases and the factors influencing dependencies between disease conditions. As a result, these highly parameterized models can be easily biased towards the dominant population. In this section, we provide details about a multitude of such scenarios that are motivated by the implicit biases and imbalances present in clinical data. We study changes in the disease landscape across the shifts in population characteristics, based on attributes of clinical time-series datasets, and create scenarios to study the adaptability of clinical models. Note that, in our study, we assume that we have access to labeled data only in the source domain, and the data available in the target domain is unlabeled. A primary contribution of this work is the careful design of datasets that encompass a variety of domain shifts often faced in clinical workflows. Approach: As mentioned in the previous section, our scenarios are based on posing phenotyping as a binary classification problem, i.e., detection of presence or absence of a selected set of disease conditions. Given that it is likely that each patient can be associated with multiple diseases, we first construct the disease landscape within a constrained population, as determined by the scenario. 
To this end, we utilize information sieve (Steeg and Galstyan 2016), a recent information-theoretic technique for identifying latent factors in data that maximally describe the total correlation between variables. Denoting a set of multivariate random variables, dependencies between the variables can be quantified using the multivariate mutual information, also known as total correlation (TC), which is defined as where H(X i ) denotes the marginal entropy. This quantity is non-negative and zero only when all the X i 's are independent. Further, denoting the latent source of all dependence in X by Y, we obtain T C(X|Y ) = 0. Therefore, we solve the problem of searching for a feature Y that minimizes T C(X|Y ). Equivalently, we can define the reduction in T C after conditioning on Y as T C(X; Y ) = T C(X) − T C(X|Y ). In our context, X corresponds to the set of disease outcomes, represented as binary variables. The optimization begins with X, constructs Y 0 to maximize T C(X; Y 0 ). Subsequently, it computes the remainder information not explained by Y 0, and learns another feature, Y 1, that infers additional dependence structure. This procedure is repeated for k layers until the entirety of multivariate mutual information in the data is explained. This hierarchical decomposition represents the disease landscape, revealing the complex dependencies between outcome variables. For example, the force-based layout in Figure 1 provides a holistic view of the landscape for the entire MIMIC-III dataset used in our experiments. Here, each circle corresponds to a latent factor and their sizes indicate the amount of total correlation explained by that factor, and the thickness of edges reveal contributions of the diseases to that factor. Such an organization of the disease conditions enables us to identify groups of related diseases, and by selecting non-overlapping groups, we can create scenarios where there is systematic shift between source and target. For example, one can construct a scenario where the goal is to detect diseases in cluster 2, while patients with disease conditions from cluster 0 are observed only in the target domain. Patients who presented disease conditions that are uncorrelated to cluster 0 are considered as negatives. This analysis can be carried out for every scenario, by splitting the population based on specific attributes (see Table 1) that are all potential sources of domain-shift between source and target. In the rest of this section, we describe the scenarios in detail. (i) Unobserved Diseases: In order to accurately detect if a data sample is diseased or not, in an ideal case the training data must consist of enough examples of every possible disease. Nonetheless, models are trained to confidently detect a subset of diseases present in an available dataset. In a situation where such a model has to be deployed in a new environment, say at a different hospital, it is likely that the population contains disease conditions or their variants previously unseen by the model. To study the generalization of models to such unobserved conditions, we create two scenarios, each with a different set of diseases in the source domain, while the target domain was fixed to be source conditions plus a set of new diseases not observed in the source. The Resp-to-CardiacRenal scenario is made up of source data containing diseases from cluster 1 in Figure 1. 
Whereas the corresponding target data includes source diseases in addition to conditions from cluster 0, which make up the set of new unseen diseases. Correspondingly, the Cerebro-to-CardiacRenal scenario has source data with diseases pertinent to cluster 2, and target data to include this source disease list and the same unseen diseases from cluster 0 as mentioned in the Resp-to-CardiacRenal scenario. (ii) Age Bias: Due to the 'small data' situation in clinical ML, curated datasets can be imbalanced based on population characteristics. In this scenario, we create a dataset biased by patient age and formulate two scenarios. One referred Older-to-Younger where the source data comprises of all patients who are 60 years and above, and the target data includes remaining patients who are below 60 years of age. While, the second scenario named Younger-to-Older represents the reverse condition with source containing patients younger than 60. Once the population is divided into two groups based on the chosen age criteria, we construct the corresponding disease landscapes and pick a cluster of dis-eases in the source landscape for detection. However, due to the domain shift, the landscape can change significantly in the target. Hence, in the target domain new correlations that emerge with respect to the source diseases are included to the list of diseases to be detected. (iii) Gender Bias: Similar to the previous case, in this scenario we explore the domain shift caused by biases in gender. We create two scenarios, Male-to-Female, i.e. source data consists only of patients who identify as male and target has only patients who are female. While, the second scenario Female-to-Male is comprised of female-only source data and male-only target data. (iv) Racial Bias: An imbalance in the distribution of patients' ethnicities is another likely cause of a domain shift. To model such a shift, we create a scenario White-to-Minority that contains prevalent racial groups such as white American, Russian, and European, that we deem to be majority, in the source data, while constructing the target data comprising of patients belonging to minority racial groups like Hispanic, South America, African, Asian, Portuguese, and others marked as unknown in the MIMIC-III dataset. (v) Disease Subsets: In general, there exists differences in diagnostic procedures for detection of various diseases. As a common practice, lab tests for some two diseases could be conducted together based on the likelihood of them cooccurring in an ICU situation, giving rise to a dataset containing patients with both diseases. In practice, these models would be required to detect the presence or absence of one of the diseases explicitly. Therefore, we design a scenario called Dual-to-Single, where we create a dataset to test the ability of models trained on patients who are associated with two disease conditions simultaneously to a target domain containing patients who have only either of the two. Here, we divide patients into four groups, namely: those that have only cardiac diseases, only renal diseases, neither cardiac or renal diseases, and, both cardiac and renal diseases. We do not construct the landscapes for this case, instead, patients that have neither cardiac or renal related diseases are treated as class 0, while the rest are considered as class 1. 
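A minimal pandas sketch of how the Dual-to-Single grouping just described might be materialized from an episode-level table; the indicator column names and the handling of the "neither" negatives are illustrative assumptions, not the paper's exact preprocessing.

```python
import pandas as pd

# Hypothetical episode-level table with per-condition indicator columns;
# the column names are placeholders, not the MIMIC-III benchmark schema.
episodes = pd.read_csv("episodes.csv")
cardiac = episodes["cardiac"] == 1
renal = episodes["renal"] == 1

group = pd.Series("neither", index=episodes.index)
group[cardiac & ~renal] = "cardiac_only"
group[~cardiac & renal] = "renal_only"
group[cardiac & renal] = "both"

# Class 0 = neither condition, class 1 = everything else, as described above.
episodes["label"] = (group != "neither").astype(int)

# Dual-to-Single: positives in the source present both conditions, positives in
# the target present exactly one.  (How the "neither" negatives are split between
# the two domains is an assumption made here purely for illustration.)
source = episodes[group.isin(["both", "neither"])]
target = episodes[group.isin(["cardiac_only", "renal_only", "neither"])]
print(source["label"].value_counts(), target["label"].value_counts(), sep="\n")
```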
(vi) Disease Supersets: Similar to the previous domain shift characterized by disease subsets, an alternate situation could occur with models having to adapt to predicting patients that have two diseases, while having been trained on patients who only had one disease. Such a scenario named Singleto-Dual is explored using a source dataset comprising of patients diagnosed as having either cardiac only or respiratory only disease, and a target dataset with patients that have both cardiac and respiratory-related diseases. (vii) Noisy Labels: Variabilities in diagnoses between different experts is common in clinical settings (Valizadegan, Nguyen, and Hauskrecht 2013). Consequently, it is important to understand the impact of those variabilities on behavior of the model, when adopted to a new environment. We extend the Resp-to-CardiacRenal scenario by adding uncertainties to the diagnostic labels in the source domain and study its impact on the target domain. In particular, we simulate two such scenarios by randomly flipping 10% and 20% of the labels in the source to create 10% Label Flips and 20% Label Flips scenarios. (viii) Sampling Rate Change: Another commonly observed issue with time-series data across sources is the variability in sampling rates, which affects amount of information captured and corresponding analysis results in a chosen window of time. The Sampling Rate Change scenario encapsulates such a domain shift, where we create a source-target pair by extending the Resp-to-CardiacRenal scenario to comprise of measurements collected at two different sampling rates. The source contains time-series sampled at 96 hours while the target is sampled at 48 hours. (ix) Missing Measurements: Clinical time-series is routinely plagued by missing measurements during deployment, often due to resource or time limitations, and can be typically handled by either imputing values or removing those measurements entirely from the dataset. We demonstrate this domain shift by extending the Dual-to-Single scenario, wherein we assume that the set of measurements, pH, Temperature, Height, Weight, and all Verbal Response GCS parameters, were not available in the target dataset. We impute the missing measurements with zero and this essentially reduces the dimensionality of valid measurements from 76 to 41 in the target set. (x) Adversarial Examples: Machine learning models including deep neural networks have been shown to be prone to adversarial perturbations, i.e. systematic changes in the measurements that can significantly alter the prediction. In this scenario, we test the robustness of VADA under adversarial perturbations. Specifically, we obtain adversarial samples from the source-only trained model, and simulate a realistic case where a deployed model is updated periodically using unsupervised domain adaptation. An adversary poisons the incoming unlabeled target data used for the next round of adaptation using the available source data and the current deployed model. In particular, we use the Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) to demonstrate the effect of poisoning the target data on the adaptation performance in the Older-to-Younger scenario described before. FGSM perturbs the input example using the sign of the loss function gradient as x adv = x + sign (∇ x L(x, y)). ( 2) The adversarial examples generated by FGSM lie at the surface of the ∞ ball of radius around the input x. 
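A minimal PyTorch sketch of the FGSM perturbation in Eq. (2), applied to a batch of clinical time-series. The model interface and the choice of ε are placeholders, and how the adversary obtains labels for the unlabeled target stream before attacking it is not fixed here.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x L(x, y)).
    The perturbed sample sits on the surface of the L-infinity ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x).squeeze(-1)                       # assumes a single-logit output
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# e.g. poisoning a batch before it is handed to the next adaptation round:
# x_poisoned = fgsm_perturb(deployed_model, x_batch, y_batch, eps=0.05)
```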
Methods In this section, we describe the predictive model used in our study and the techniques employed for performing domain adaptation. As described earlier, we assumed that we have access to only unlabeled data in the target, and hence we resort to unsupervised domain adaptation. Architecture: The ResNet-1D model used in our study is comprised of 7 residual blocks which transform the raw time-series input into feature representations. Each residual block is made up of two 1-D convolutional and batch normalization layers with a kernel size of 3 and about 64 or 256 filters, a dropout layer and a leaky-ReLU activation layer. Further, a 1-D max pooling layer is used after the third and seventh residual block to aggregate the features. In addition, In other words, we attempt to learn representations that are domain invariant but can preserve discriminative properties useful for the predictive modeling. Specifically, DANN uses an adversary or a discriminator D and matches the distribution of features for the source and target domains, where g is a measurable feature generator mapping and g#P denotes the push-forward of a distribution P. The overall objective of domain adaptation can be formulated as where h is the prediction function, L y (f ; P s ) := E x,y∼Ps is the cross-entropy loss for the labeled source examples (assuming f outputs the probabilities of classes and y is the label vector), L D (, ) is the loss term measuring the distance between the two distributions. This term results in minimizing the variational form of the Jensen-Shannon (JS) divergence between source and target feature distributions. While this formulation relies on the assumption that alignment of the domains will enable the use of h for both domains, there is evidence it can fail in practice (). Hence, more recently, Virtual Adversarial Domain Adaptation (VADA), which extends DANN by employing additional regularization in the form of the conditional entropy L ce (.;.). was proposed (). In particular, the regularization term can be defined as Minimizing this cost on the unlabeled target samples pushes the predictor boundaries away from the high density regions. In addition, VADA incorporates virtual adversarial training (VAT) to avoid unlabeled samples which can manifest in predictor model with high capacity. This is achieved using a cost function to ensure that predictions within a neighborhood in the target domain is smooth. Hence, the overall objective function of VADA can be obtained as: Results and Discussion In this section, we present empirical results from our study -performances of the ResNet-1D model trained on the source data when subjected to perform predictions on a variety of target scenarios listed in Table 1. In order to obtain a comprehensive evaluation of the models, we considered three summary metrics, namely weighted average AUROC, AUPRC and the Mathews Correlation Coefficient (MCC). While AUROC and AUPRC are commonly used in clinical model evaluation, MCC is considered to be a balanced measure for binary classification problems, even when the Table 2: Comparison of generalization performance obtained using models trained with only the labeled source data, and the models from unsupervised domain adaptation (VADA) with the target data as well. We report three metrics, AUROC, AUPRC and MCC, for each of the scenarios. where tp, tn, f p, f n denote the total number of true positives, true negatives, false positives and false negatives from the binary classification task. 
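For reference, the Matthews correlation coefficient that the trailing "where tp, tn, fp, fn ..." clause refers to is the standard one, MCC = (tp·tn − fp·fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)). A small self-contained check against scikit-learn (illustrative only, not the paper's evaluation code):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_from_counts(tp, tn, fp, fn):
    """Matthews correlation coefficient, in [-1, 1]:
       (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn))."""
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
tp = int(((y_true == 1) & (y_pred == 1)).sum()); tn = int(((y_true == 0) & (y_pred == 0)).sum())
fp = int(((y_true == 0) & (y_pred == 1)).sum()); fn = int(((y_true == 1) & (y_pred == 0)).sum())
print(mcc_from_counts(tp, tn, fp, fn), matthews_corrcoef(y_true, y_pred))  # the two should agree
```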
The appropriateness of a metric depends entirely on the requirements of an application; for example in some cases false negatives can be less detrimental than false positives, while in other cases reducing the number of false positives can be highly critical. Table 2 details the performance of both approaches for all the scenarios. The first and crucial observation is that, in most cases, unsupervised domain adaptation produces significant improvements to the generalization performance, and this corroborates the results reported in the literature for computer vision tasks. However, the improved generalization can still be insufficient for deploying the model in practice, thus emphasizing regimes of high uncertainty for the chosen model. Figure 2 illustrates a summary of our study, indicating which domain shifts could be handled by deep clinical models and which ones could not. More interestingly, this inference can change drastically depending on the metric of choice, which is a well-known issue in the healthcare domain. Note, we do not tune the hyper-parameters for training the VADA models for each scenario separately; instead we set the D, s and t parameters to 0.01, which worked reasonably well across all scenarios. AUPRC metric: With the standard AUPRC metric ( Figure 2(b)), we observe that the racial bias (White-to-Minority) was challenging to handle, while the gender bias (Male-to-Female), and age bias (Older-to-Younger) were partially difficult to generalize for the deep ResNet-1D model. Interest-ingly, the model generalizes exceedingly well with disease landscape changes in Younger-to-Older and Female-to-Male scenarios. Also, in practice, it is known that quality of labels are central to the success of predictive models. However, surprisingly, we observe that limited noise in the labels (10% Label Flips) results in improved generalization (compared to Resp-to-CardiacRenal). However, the performance degrades with the increase in the amount of noise (20% Label Flips). Another striking observation is the robustness of deep models to sampling rate changes, missing measurements, and inclusion of unobserved disease conditions in the target data. Finally, we expected the behavior of VADA to be affected by the presence of adversarial examples in the target dataset; but our results showed that presence of adversaries slightly improved the performance. MCC metric: Broadly, we observe that the AUROC and AUPRC demonstrate a similar trend in our case, since the positive and negative classes are well balanced. However, MCC tends to produce different insights, since both false positives and false negatives must be low simultaneously for the score to be high. As it can be seen from Figure 2(a), the model was the most effective in Single-to-Dual, where it could detect combinations of conditions. In contrast to the AUPRC metric, with MCC, we observe that the age bias can affect the model's generalization drastically, and the performance goes down for the Dual-to-Single scenario. However, there is agreement between the metrics in model's ability to handle sampling rate change and label noise. Finally, the deep model is surprisingly very effective (in terms of both f p and f n) with inclusion of unobserved diseases in the target domain, which is conventionally believed to be an extremely challenging task. In summary, the proposed approach of studying model adaptability with complex domain-shifts produces interest-
Lightlike limit of the boosted Kerr black holes in higher-dimensional spacetimes Deriving the gravitational field of the high-energy objects is an important theme because the black holes might be produced in particle collisions in the brane world scenario. In this paper, we develop a method for boosting the metric of the stationary objects and taking the lightlike limit in higher-dimensional spacetimes, using the Kerr black hole as one example. We analytically construct the metric of lightlike Kerr black holes and find that this method contains no ambiguity in contrast to the four-dimensional case. We discuss the possible implications of our results in the brane world context. In this section, we consider the lightlike limit of the Kerr black hole with one Kerr parameter a that is boosted in the parallel direction to the spin for total spacetime dimension number D ≥ 5. We introduce the Kerr-Schild coordinate (t,x 1,x 2,z, i ) in which the metric becomes dt +r r 2 + a 2 (x 1 dx 1 +x 2 dx 2 ) + r 2 + a 2 (x 1 dx 2 −x 2 dx 1 ) +z where i = 1,..., D − 4, and Y andr are defined via with X 2 =x 2 1 +x 2 2. The total mass (the Arnowitt-Deser-Misner mass) M and the angular momentum J are given by where G D and D−2 denote the gravitational constant of the D-dimensional spacetime and the (D − 2)-dimensional area of the unit sphere, respectively. The singularity is rotating in the (x 1,x 2 )-plane. In the boost, we fix the energy p = M and the Kerr parameter a. Here, is the usual factor given by ≡ 1/ √ 1 − v 2 with the velocity of the Kerr black hole v. For convenience, we introduce P =. Now we consider the Lorentz boost in thez direction: z = (−vt + z), where j = 1, 2. The boosted metric becomes ds 2 = ds 2 F + r 7−D r 4 + a 2 (z 2 + Y 2 ) dt 2 +z 2 dz 2 where ds 2 F = −dt 2 + j dx 2 j + dz 2 + i dy 2 i. In the limit v → 1, this is reduced to f ((z − vt), X, Y ) = Pr 7−D r 4 + a 2 (z 2 + Y 2 ) 1 +z The strategy for obtaining this limit is as follows. We find the primitive function such that Then the limit can be obtained by As we will see later, is reduced to The key point for writing down the analytic form of the lightlike metric is whether we can obtain this primitive function or not. Note that (X, Y ) comes from the even part of f with respect toz, and the odd part of f contributes only to((z − t), X, Y ). Because we can show that the primitive function of the odd part becomes a finite value independent of z in the limit v → 1 for all D ≥ 4, it does not affect the result. Hence we omit the term 2z/r in eq. hereafter. Now we construct the lightlike Kerr metric by finding the primitive function in the higher-dimensional cases. Using eq., we find the following relations: Using these relations, we derive where and are given by Now we introduce R =r 2. Becausez = 0 corresponds to R =, we deriv where I (D) Because taking the limit v → 1 for t = z corresponds to R → ∞ and each integral is finite at R = ∞ for D ≥ 5, we derive The function (X, Y ) is expressed by the elliptic integrals, although we do not show explicitly here because it is quite complicated. The primitive functions I (D) i (R) necessary for this calculation are given in Appendix A. In summary, the metric of the lightlike Kerr black hole has the following form: The spacetime is flat except at z = t. The delta function indicates that the two coordinates in the regions z > t and z < t are discontinuously connected and there is the distributional Riemann tensor at z = t, which we call the gravitational shock wave. 
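Schematically, and with overall factors and sign conventions not fixed here, a metric of the type summarized above takes the standard shock-wave (Aichelburg–Sexl-like) form

    ds^2 = ds_F^2 + \Psi(X, Y)\, \delta(z - t)\, (dz - dt)^2,

i.e. the flat metric ds_F^2 everywhere except on the null plane z = t, where a delta-supported pp-wave term appears whose transverse profile Ψ(X, Y) is the potential discussed next.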
The function (X, Y ) characterizes the structure of this shock wave and we call it the potential of the shock wave hereafter. Before looking at the behavior of the potential for D ≥ 5, we would like to see what occurs in our method in the D = 4 case. In this case, we should be careful in the following two points. First, there is no y i direction and we need not consider I and I 2 are given by and we find On the other hand, adopting the other choice in whichz = 0 is equivalent to R = 0, I The unit of X and Y is a, and the unit of is P |a| D−4. In the case of D = 5, diverges at Y = 0, X = ±a, which indicates that the singularity is a one-dimensional ring. In other cases, diverges at Y = 0, −|a| < X < |a|. This implies that the singularity is a two-dimensional disk. which lead to The latter result is defined only in the region |X| < |a|. Because the former result has the same value at |X| = |a| as that of eq., lim v→1 f can be extended to the region |X| > |a| using eq.. Since we can eliminate the 1/|z − t| term using the proper coordinate transformation, we find that this result is the same as the previous result of using the relation P = 2G 4 p. At this point, we should point out the ambiguity of this method for D = 4. For example, we can consider the following form instead of eq. : with arbitrary function g(X). This leads to instead of eq.. This ambiguity is the reason why the previous authors have developed the various methods other than just boosting the metric. In fact, there would not exist the method to determine this factor g(X) rigorously without checking the consistency with the boosted energy-momentum tensor or the boosted Riemann tensor. Hence the above calculation is just the formal derivation of the lightlike Kerr black hole metric. However, we would like to point out that there is no ambiguity for the higher-dimensional cases, because the primitive function is finite in the limit v → 1. Hence in the higher-dimensional cases, it is sufficient to consider the boost of the metric and we need not calculate the energy-momentum tensor or the Riemann tensor of this system. Now we discuss the properties of the potential (X, Y ) in the higher-dimensional cases. Figure 1 shows the behavior of (X, Y ) for D = 5,..., 8. In the D = 5 case, the potential logarithmically diverges at X = ±a. Hence the shock wave contains a one-dimensional ring-shaped singularity. For D ≥ 6, the potential diverges at Y = 0, −|a| ≤ X ≤ |a|. Hence the singularity is a two-dimensional disk in these cases. This is interpreted as follows: For D = 5, the black hole horizon does not exist in the resulting spacetime because we fixed a in the boosting. The singularity becomes naked and the potential diverges at the singularity. While for D ≥ 6, the horizon exists for arbitrary M and a. Hence the resulting gravitational shock wave contains an extremely oblate horizon, on which the potential diverges. The function behaves like log Y for D = 6 and 1/Y D−6 for D ≥ 7. One can understand this potential behavior using the Newtonian analogy. The Newtonian potential of n-dimensional matter distribution in N -dimensional space is given by ∼ 1/r N −n−2 for N − n > 2 and ∼ − log r for N − n = 2. Because the shock wave is (D − 2) dimensional, the behavior of the potential and the dimensionality of the singularity have the same relation as the Newtonian gravity. 
Hence, we see that these results are consistent with our intuition and we expect that also in general cases we can guess the behavior of (X, Y ) to a certain extent, even if the calculation of the primitive function of f cannot be proceeded. III. LIGHTLIKE BOOST IN THE TRANSVERSE DIRECTION OF THE SPIN In this section, we consider the lightlike limit of the boosted Kerr black hole in the transverse direction to the spin. In this case, we use a coordinate system (t,x,,z i ) in which the metric of the Kerr black hole becomes where i = 1,..., D − 3, and Z andr are defined by The singularity is rotating in the (x,)-plane and we will consider the boost in thex direction. Again we fix the energy p = M and the Kerr parameter a in the boost and use the notation P =. By the Lorentz transformation the metric becomes where ds 2 F = −dt 2 + dx 2 + dy 2 + i dz 2 i. In the limit v → 1, this is reduced to where we have omitted the odd terms concerningx with the same reason as the parallel case. The method for obtaining this limit is the same as the parallel case. We find the primitive function and take the limit as follows: Again is reduced to and we obtain Now we find the primitive function. Using eq., we find the following relations with which we derive where and are given by Becausex = 0 corresponds to R ≡r 2 =, we obtai where Because taking the limit v → 1 for t = z corresponds to R → ∞ and each integral is finite at R = ∞ for D ≥ 5, we find The function can be expressed in terms of the elementary functions for even D and in terms of the elliptic integrals for odd D. The formulas necessary for this calculation are given in Appendix B. Now we consider the D = 4 case before looking at the behavior of. In this case, I 1 is given by and we show I Although we should calculate two cases ay < 0 and ay > 0 separately because I 2 (∞) and I 3 (∞) contain |ay|, they give the same result: This coincides with the previous results of. Again we should mention that there remains the ambiguity in the form of eq., because we can substitue the arbitrary function g(y, z) as follows: Hence we are required to check the consistency with the boosted energy-momentum tensor or the boosted Riemann tennsor and thus the above derivation is just the formal one. For the higher-dimensional cases, there is no such ambiguity because all integrals, and are finite for R → ∞. Figure 2 shows the potential of the shock wave for D = 5,..., 8. For all D ≥ 5, the potential diverges at Z = 0, −a ≤ y ≤ a. Hence in the transverse case, the singularity has the shape of a one-dimensional segment for all D ≥ 5. This is interpreted as follows. In the boost, the ring singularity becomes naked for D = 5 and the horizon becomes extremely oblate for D ≥ 6. This singularity or the horizon is infinitely Lorentz-contracted in the direction of the motion. This leads to the segmental shape of the singularity in the gravitational shock wave. The behavior of the potential around the singularity is ∼ − log |Z| for D = 5, and ∼ 1/|Z| D−5 for D ≥ 6. Similarly to the parallel case, we see that the relation between the potential behavior near the singularity and the dimensionality of the singularity is the same as the Newtonian gravity. The behavior of the potential is not symmetric to the y = 0 plane. This is the manifestation of the fact that the lightlike Kerr black hole carries the angular momentum J = −2ap/(D − 2). 
The rotating singularity has the velocity in the direction of the boost at (y, Z) = (a, 0) and it has the velocity in the converse direction of the boost at (y, Z) = (−a, 0). Correspondingly, the divergence of the potential at (y, Z) = (a, 0) is stronger than the one at (y, Z) = (−a, 0). IV. SUMMARY AND DISCUSSION In this paper, we have developed the method for taking the lightlike limit of the boosted Kerr black holes in the higher-dimensional spacetimes. We analyzed two cases, the boosts in the parallel direction and in the transverse direction to the spin. Our method is quite simple: just boosting the metric without considering the energy-momentum tensor or the Riemann tensor. The problem is reduced to taking the limit v → 1 of the function f of eqs. and. We solved this problem by introducing the primitive function of f, taking the limit v → 1 of, and then differentiating it. Because the primitive function is finite for the higher-dimensional cases, there is no ambiguity which occurred in the four-dimensional case. Hence, we consider that the boost of the metric is the simplest and easiest method in the higher-dimensional cases. We discuss whether this method works for the general systems. Also in the general systems, the lightlike limit of the metric would have the form like eq. or eq.. The problem is whether we can construct the primitive function of f. If not, we cannot analytically derive the metric of the lightlike system. This is the limitation in our method. But even if we cannot find the primitive function of f, there remains the possibility that we can numerically construct the potential. Namely, if is reduced to the form like eqs.,, and, it is easy to calculate the potential at each point by numerical integrals. Hence, our analysis shows the direction of the calculation at which we should aim in general cases. We also would like to mention that we can guess the resulting potential to some extent even if we fail the above directions, because the results are quite consistent with our intuition. We can intuitively imagine the shape of the infinitely Lorentz-contracted source and guess the shape of the potential by the Newtonian analog. Now we discuss the possible implications of our results in the context of the brane world scenario. There are several discussions that the Kerr black hole might provide the gravitational field of the elementary particle with spin, especially motivated by the fact that the Kerr-Newman black hole has the same gyromagnetic ratio as that of the electron. We consider that this scenario is unlikely, because the electron in this model has the characteristic scale a, whose value is similar to its Compton wave length. Furthermore, we do not have the criterion for determining a in the case of a massless spinning particle. However, we can expect that the spin has an effect on the gravitational field of the elementary particle, although the spin angular momentum and the orbital angular momentum are different. This expectation comes from the fact that the sum of the spin and orbital angular momentum is the conserved quantity. It is natural that the conserved quantity affects the spacetime structure. This expectation is strengthened by the following observation. If the black hole forms in the head-on collision of spinning particles, the produced black hole would have the angular momentum that corresponds to the sum of two spins. 
In order for the black hole to have this angular momentum, the gravitational field of each particle should not be the same as that of the massless point particle given in. Because the lightlike Kerr black hole has a gravitational field quite different from that of the massless point particle, we can also expect that the spin of a particle has a large effect on its gravitational field. Hence it would affect the condition for black hole formation in high-energy particle collisions. To clarify this, an investigation of the Einstein-Dirac system would be useful. As a first step, a classical field treatment would be appropriate. The formalism of this system was recently developed by Finster, and Finster et al. constructed the spin singlet state of two fermions. The generalization to the spin eigenstate of one particle and its lightlike boost would be an interesting next problem. Now we turn to another possible implication, the relation between Kerr black holes and strings. In string theory, there are several discussions that some black holes should be regarded as elementary particles. Our formalism would also be applicable to these black hole solutions. Furthermore, Nishino, and Nishino and Rajpoot, discussed the possibility that the Kerr black hole might be a classical model of the gravitational field around a closed string, simply because the Kerr black hole has a distributional energy-momentum tensor on its singularity. Hence, our boosted Kerr solution might have an implication for black hole formation in the collision of high-energy strings with characteristic length scale a. Investigation of the properties of black hole formation in the collision of two lightlike Kerr strings requires the analysis of the apparent horizon, which we have not done. But using the fact that each incoming Kerr string has a singularity with the characteristic scale a, we can discuss the condition for black hole formation as follows. For some impact parameter b, we draw the horizon shape in the case of the two-point-particle collision which has been calculated in. Then we draw on it the shape of the singularity of the lightlike Kerr strings with Kerr parameter a. If the singularity crosses the horizon, the apparent horizon would not form for such b and a. Under this assumption, we can calculate the maximal impact parameter for black hole production as a function of a. Figure 3 shows the expected maximal impact parameter b_max for black hole production in the collision of two Kerr strings which are boosted in the direction of the spin. If the Kerr parameter a (i.e., the length of the string) is small compared to the gravitational radius r_h(2p) associated with the system energy, it does not affect the black hole formation. But b_max decreases as a increases, and becomes zero at a ∼ r_h(2p). We can draw a similar figure for the case of two lightlike Kerr strings which are boosted in the transverse direction to the spin. Hence, the condition for b_max = 0 is written as a ≳ r_h(2p). If the Planck energy is O(TeV) and the string length a is similar to the Planck length, the gravitational radius for the two-particle system becomes r_h(2p) ∼ a if the incoming energy is a few TeV. Hence the string length might have the effect of preventing black hole formation. We should mention that the above discussion is not rigorous, because we do not know the validity of the assumption we have used.
We can also have a totally different expectation, because in higher-dimensional spacetime an arbitrarily long black hole can form, as clarified by Ida and Nakao. This higher-dimensional effect might lead to the formation of a long apparent horizon in the high-energy string collision, depending on the shape of the two strings. If this is true, the cross section for black hole production might be enhanced. Furthermore, we can also expect the formation of a black ring, which is a higher-dimensional black hole whose horizon topology is S¹ × S^(D−3). (See for the five-dimensional solution.) If the shape of the two strings at the instant of collision is similar to a ring, a ring-shaped apparent horizon that surrounds the two strings might form. Although the author and Nambu discussed one possible method to produce black rings in particle systems in a previous paper, it required more than two particles. If a black ring forms due to the effect of the string length, it would be an even more interesting phenomenon. Clarifying the effect of the string length on black hole formation in high-energy string collisions would be an interesting problem to tackle as the next step.
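As a minimal illustration of the primitive-function trick used throughout this paper, consider a one-dimensional toy profile (this is a sketch of the strategy only, not the actual function f of the boosted Kerr metric). Let h(u) be an integrable profile with total integral A, and let H be its primitive. Rescaling the profile and integrating first gives
\[
H_\epsilon(u) \;=\; \int_{-\infty}^{u} \frac{1}{\epsilon}\, h\!\left(\frac{s}{\epsilon}\right) ds \;=\; H\!\left(\frac{u}{\epsilon}\right) \;\longrightarrow\; A\,\theta(u) \quad (\epsilon \to 0,\; u \neq 0), \qquad A = \int_{-\infty}^{\infty} h(s)\, ds,
\]
and differentiating the limiting step function then yields
\[
\frac{1}{\epsilon}\, h\!\left(\frac{u}{\epsilon}\right) \;\longrightarrow\; A\,\delta(u) \qquad \text{in the sense of distributions.}
\]
With \(\epsilon \sim \sqrt{1-v^2}\) and u the coordinate along the direction of motion measured from the null plane, this is the mechanism by which the Lorentz-contracted profile of the boosted metric concentrates onto that null plane as v → 1: integrate first (the primitive function), take the limit (a step function), and only then differentiate.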
Sexuality in Women of Childbearing Age Women of childbearing age have health-care needs related to sexuality. The health-care needs that are most obvious are the need for contraception and the need for the prevention and treatment of vaginal and sexually transmitted infections. Although providers may include questions related to sexual activity, sexual orientation, sexual practices, sexual satisfaction, and intimate partner violence on patient history forms, they often offer little discussion of issues related to sexuality unless the patient raises them. Women's sexuality is intensely personal and individual. Changes may occur in sexuality during pregnancy or as a response to infertility. These changes may be physical or emotional. During her prepregnancy and prenatal care, a woman may meet with a range of health-care providers, including childbirth educators, lactation consultants, nurses, midwives, and physicians. It is within the scope of practice of each of these clinicians to address sexuality concerns, validate women's feelings, and provide suggestions for modifications in sexual practices to meet women's needs for sexual expression within the range of activities that are safe and acceptable.
Experimental evaluation of achromatic phase shifters for mid-infrared starlight suppression. Phase shifters are a key component of nulling interferometry, one of the potential routes to enabling the measurement of faint exoplanet spectra. Here, three different achromatic phase shifters are evaluated experimentally in the mid-infrared, where such nulling interferometers may someday operate. The methods evaluated include the use of dispersive glasses, a through-focus field inversion, and field reversals on reflection from antisymmetric flat-mirror periscopes. All three approaches yielded deep, broadband, mid-infrared nulls, but the deepest broadband nulls were obtained with the periscope architecture. In the periscope system, average null depths of 4×10⁻⁵ were obtained with a 25% bandwidth, and 2×10⁻⁵ with a 20% bandwidth, at a central wavelength of 9.5 μm. The best short-term nulls at 20% bandwidth were approximately 9×10⁻⁶, in line with error budget predictions and the limits of the current generation of hardware.
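For context on the figures quoted above: the null depth is commonly defined as the ratio of the flux leaking through the destructive (nulled) output to the flux in the constructive output, and at leading order it is often budgeted in terms of small phase and amplitude mismatches between the two beams. The sketch below states these two commonly used relations; the 6 mrad figure is an illustrative assumption, not a number taken from the paper's actual error budget.

# Null depth, as commonly defined in nulling interferometry:
#     N = I_null / I_peak
# and a commonly used leading-order error budget for a two-beam nuller:
#     N ≈ (Δφ² + δa²) / 4
# where Δφ is the residual phase error (radians) and δa is the fractional
# amplitude mismatch between the beams.

def null_depth(i_null: float, i_peak: float) -> float:
    return i_null / i_peak

def null_depth_error_budget(phase_err_rad: float, frac_amp_mismatch: float) -> float:
    return (phase_err_rad ** 2 + frac_amp_mismatch ** 2) / 4.0

# Illustrative numbers only: ~6 mrad of residual phase error with perfectly
# matched amplitudes limits the null to about 9e-6, the order of the best
# short-term nulls quoted above.
print(null_depth_error_budget(6e-3, 0.0))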
Sayer Ji, Contributor Activist Post New research on the DNA-damaging effects of the popular herbicide known as Roundup® indicates that it can do significant harm to fish even after short-term, environmentally low-concentration exposures in the parts-per-billion range (μg/L).[i] Published this month (July, 2012) in the journal Environmental Monitoring and Assessment, the study set out to evaluate the genotoxic effects that the herbicide Roundup®, and its primary ingredient glyphosate, can have on a species of catfish known as Corydoras paleatus. When the fish were exposed to minute concentrations of Roundup (at a concentration of 6.67 μg/L, corresponding to 3.20 μg/L glyphosate) for 3, 6, and 9 days, the following effects were observed: "[T]he comet assay showed a high rate of DNA damage in group exposed to Roundup(®) for all treatment times, both for blood and hepatic cells." The researchers summarized their findings: We conclude that for the low concentration used in this research, the herbicide shows potential genotoxic effects. Future research will be important in evaluating the effects of this substance, whose presence in the environment is ever-increasing. First, it is very important to understand how low a μg/L, or parts-per-billion, concentration is. One way to visualize it is to think of one drop in one billion drops of water, or what amounts to one drop of water in an entire swimming pool.[ii] Owing to the fact that glyphosate is now a ubiquitous contaminant in our environment, having been found in most U.S. air and rain samples tested,[iii] as well as being measured beyond the limit of quantification (higher than 2.5 μg/L) in 41% of 140 groundwater samples collected from Catalonia, Spain, this new finding has profound implications for environmental and human health alike. With over 88,000 tons of the stuff used in the US in 2007, according to the USGS, and with an ever-expanding volume being applied to increasingly glyphosate-resistant GM crops, the problem of exposure is only going to get worse in the future – that is, if we continue to support the growth of glyphosate-dependent GM crop industries. Monsanto, the originator of glyphosate and its most popular branded formulation, Roundup®, once marketed its herbicide as "as safe as table salt," and claimed it was "highly biodegradable." These claims have now been disproved. Like Agent Orange, another Monsanto co-creation (along with Dow Chemical and several other government contractors), this herbicide exhibits a broad range of biocidal (life-killing) properties. (Note: the concept of a "weedkiller" is absurd. These chemicals do not selectively kill only targeted plants. Also, a "weed" is simply a plant whose virtues have not yet been discovered [Emerson], or which is encroaching on a plant we favor.) Some of the ways in which glyphosate formulations have been experimentally confirmed to disrupt, mutate, alter or kill are as follows:
In recent years, along with the discovery and development of organic semiconductor materials, organic thin film transistor devices, in which an organic material instead of an inorganic material is used for carrier transport, have been prepared, and the performance of such devices is gradually being improved. The fundamental structure and function of an Organic Thin Film Transistor (OTFT) are basically the same as those of a traditional Thin Film Transistor (TFT); the difference is that an organic semiconductor is used as the working substance. A traditional inorganic thin film transistor is a field effect transistor of the Metal Oxide Semiconductor (MOS) type, and its semiconductor material is usually inorganic silicon, whereas in an organic thin film transistor an organic semiconductor material replaces the inorganic semiconductor material of the MOS device. As compared with an existing amorphous silicon or polysilicon TFT, an OTFT has the following features. It has a low processing temperature, usually below 180° C., so that energy consumption is decreased significantly, and it is suitable for flexible substrates. In addition, it offers a greatly simplified technological process, substantially reduced cost, a wide range of material sources, and great development potential. OTFTs may find applications in many electronic products, such as active matrix displays, smart cards, labels for commodity pricing and inventory classification, large-area sensor arrays, etc. As illustrated in FIG. 1, the construction of an organic thin film transistor in the prior art includes a base substrate 10, a gate electrode 11 located on the base substrate 10, a gate insulating layer 12 located on the gate electrode 11, a source electrode 13 and a drain electrode 14 that are located on the gate insulating layer 12, and an organic semiconductor active layer 15 located on the source electrode 13 and the drain electrode 14. A photolithography process is usually used for formation of the organic semiconductor active layer 15 in the prior art. In the photolithography process, the channel of the organic semiconductor layer is mainly formed by patterning the organic semiconductor layer with an etching method. But when the channel of the organic semiconductor layer is formed by etching, the photoresist solvent applied to the etched layer will affect the channel, for example by dissolving its surface. In addition, the channel edges after etching may be affected by the etching media, resulting in, for example, edge over-etch, oxidation, or injection of ions from the sidewall. In general, such a processing method will degrade the performance of the channel and lead to an increased leakage current of the organic thin film transistor. In summary, when an organic semiconductor layer is patterned by an etching method in the prior art, the photoresist solvent for the etched layer will affect the organic semiconductor layer, leading to degradation in the performance of the organic thin film transistor device and a reduction in its service life.
Bioglass-Incorporated Methacrylated Gelatin Cryogel for Regeneration of Bone Defects Cryogels have recently gained interest in the field of tissue engineering as they inherently possess an interconnected macroporous structure. Considered to be suitable for scaffold cryogel fabrication, methacrylated gelatin (GelMA) is a modified form of gelatin valued for its ability to retain cell adhesion sites. Bioglass nanoparticles have also attracted attention in the field due to their osteoinductive and osteoconductive behavior. Here, we prepare methacrylated gelatin cryogels with varying concentrations of bioglass nanoparticles to study their potential for bone regeneration. We demonstrate that an increase in bioglass concentration in the cryogel leads to improved mechanical properties and augmented osteogenic differentiation of mesenchymal cells during in vitro testing. Furthermore, in vivo testing in a mouse cranial defect model shows that the highest concentration of bioglass nanoparticles (2.5 w/w %) incorporated in GelMA cryogel induces the most bone formation compared to the other tested groups, as studied by micro-CT and histology. The in vitro and in vivo results highlight the potential of bioglass nanoparticles incorporated in GelMA cryogel for bone regeneration. Introduction Bone regeneration following surgical removal of bone and injuries from sports, trauma, and disease is a challenging obstacle in clinical treatment because it can easily change course and impair the patient's quality of life. To augment bone regeneration, autografts and allografts are widely used to treat bone defects, and they are often applied clinically via the Masquelet technique and the Reamer-Irrigator-Aspirator system, either independently or in combination. Although the bone grafts used during these procedures possess desirable attributes, such as molecular cues, osteogenic cells, and a niche for bone regrowth, retrieving these bone grafts still requires invasive and possibly fatal surgeries. Using allografts could compensate for morbidity, but there is the possibility of transferring diseases from the donors. In this aspect, application of synthetic biomaterial-based Synthesis of Methacrylated Gelatin GelMA was synthesized by first dissolving type A porcine skin gelatin (Sigma-Aldrich, St. Louis, MO, USA) in Dulbecco's phosphate buffered saline (DPBS; Sigma-Aldrich, St. Louis, MO, USA) at 60 °C for 1 h to reach a 10% (w/v) uniform solution. Methacrylic anhydride (Sigma-Aldrich, St. Louis, MO, USA) at 8% (v/v) was then added to the 10% (w/v) gelatin solution at a rate of 0.5 mL/min under stirred conditions and reacted for 3 h at 50 °C. The resulting solution was diluted 5-fold with warm Dulbecco's phosphate buffered saline (Gibco, Waltham, MA, USA) and dialyzed using a 12-14 kDa cutoff dialysis tube (Sigma-Aldrich, St. Louis, MO, USA) in water for 7 days at 40 °C. The dialyzed solution was placed at −80 °C for a day and lyophilized for another 7 days. After lyophilization, a white, foam-type sponge was collected and stored at −20 °C for further use. Preparation of Nanobioglass Nanobioglass (SiO2:CaO:P2O5 (mol) = 55:40:5) was prepared by the following protocol, with modifications. Briefly, 9.826 mL of tetraethyl orthosilicate (TEOS) diluted in 60 mL of EtOH was added to 120 mL of Ca(NO3)2·4H2O solution (270 mM). The pH of the solution was adjusted to 1-2 with citric acid, and the reaction mixture was stirred vigorously for 12 h. The homogeneous solution was then added dropwise to 1500 mL of ammonium dibasic phosphate solution (5.44 mM).
During this step, the pH of the solution was maintained at 11 using ammonium hydroxide. The mixture was stirred for 48 h and aged for 24 h, following which the precipitate was separated by centrifugation. The precipitate was washed in EtOH and three times in water. The precipitate was then suspended in 2% PEG-water solution and stirred for 30 min. It was freeze-dried for 48 h and then sintered at 700 °C for 3 h. The annealed bioglass was stored at room temperature until further use. To the GelMA or GelMA-bioglass precursor solution, 0.5% (w/v) of ammonium persulfate (Sigma-Aldrich, St. Louis, MO, USA) and 0.25% (v/v) of N,N,N′,N′-tetramethylethylenediamine (Sigma-Aldrich, St. Louis, MO, USA) were added to initiate polymerization. Then, the solutions were pipetted into 200 μL molds (cylindrical shape; diameter: 8 mm and height: 3 mm) and placed at −20 °C for 24 h to slow down the polymerization reaction while maximizing ice crystal formation for larger pores. After gelation, the ice fragments were removed via lyophilization, and cryogels were obtained. Cryogels were stored at −80 °C for further use. Swelling Ratio and Mechanical Property of Scaffold A swelling ratio test was performed to investigate the water retention of the cryogels according to bioglass concentration. For the swelling tests, the dry weight of each cryogel was measured and the cryogels were transferred to DPBS (Sigma-Aldrich, St. Louis, MO, USA) for a day to swell. After 24 h, the cryogels were collected from the DPBS, and the swollen weight of each cryogel was measured. The average swelling ratios of the cryogels were calculated based on the following equation, where W_s is the swollen weight of the cryogel and W_d is the dried weight of the cryogel.
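The swelling-ratio equation itself is missing from the text as extracted. The convention most commonly used for cryogels, and consistent with the symbols defined above, is (as an assumption)
\[
\text{Swelling ratio} \;=\; \frac{W_s - W_d}{W_d},
\]
i.e., the mass of absorbed buffer per unit dry mass; some authors instead report the simple ratio W_s/W_d. Either convention yields a dimensionless number of the order of the values reported later in the Results.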
Cryogels were swollen in DPBS for 24 h and tested for Young's modulus using a universal testing machine (EZ-SX, Shimadzu, Kyoto, Japan). Gels were compressed at a loading rate of 1 mm/min. The result was obtained from the stress-strain curve, and Young's modulus was calculated as the slope of the linearly increasing region of the stress-strain curve. The dataset was then analyzed using the equation E = σ/ε, where σ represents stress and ε represents strain. Degradation by Collagenase The degradation rates of 10% (w/v) GelMA cryogels, GelMA-0.5% (w/v) bioglass cryogels, GelMA-1.5% (w/v) bioglass cryogels, and GelMA-2.5% (w/v) bioglass cryogels were measured by placing them in 1 unit/mL collagenase II solution (Worthington Biochemical). Samples were incubated at 37 °C, and the old collagenase solution was replaced with fresh solution every day. At each time point, cryogels were removed from the collagenase solution, water on the surface of the cryogels was removed, and the swollen weights of the cryogels were measured. Ion Release Analysis Bioglass powder was autoclaved before measuring the ion release level to remove any contaminants. Then, GelMA cryogels, GelMA-0.5% bioglass cryogels, GelMA-1.5% bioglass cryogels, and GelMA-2.5% bioglass cryogels were fabricated (n = 3), placed in 24-well plates, and covered with 1 mL of simulated body fluid solution containing 58.43 g of NaCl, 2.77 g of CaCl2, and 1.39 g of NaH2PO4·H2O (all chemicals from Sigma-Aldrich, St. Louis, MO, USA) per 1 L of deionized water (Sigma-Aldrich, St. Louis, MO, USA), as prepared. The solution was collected at days 1, 3, 5, and 7 and centrifuged at 4000 rpm for 30 min. After collection, the solutions were filtered using syringe membranes with 200 nm pore size (Acrodisc®). The ion release rates of Ca2+, Si4+, and P3+ ions were measured with an inductively coupled plasma atomic emission spectrometer (ICP-AES, Optima 8300, PerkinElmer, Waltham, MA, USA). To measure the ion release rates of bioglass-embedded cryogels in deionized water, the same protocol was used. Scanning Electron Microscopy The internal structures of cryogels with different concentrations of bioglass were observed via scanning electron microscopy. Cross sections of the cryogels were fixed on mounts and coated with platinum at 20 mA for 100 s. Then, the internal structures were analyzed by field-emission scanning electron microscopy (FE-SEM; JSM-6701F, JEOL, Tokyo, Japan). Cell Viability Cell viability was observed via a Live/Dead viability kit (Invitrogen™, Waltham, MA, USA). 5 × 10⁵ hTMSCs were seeded per cryogel and incubated for 24 h. After 24 h, the cryogels were washed with DPBS three times, and 1 mL of DPBS containing 2 μL calcein AM and 1 μL ethidium homodimer-1 was added to each cryogel. The cryogels were incubated for 30 min and imaged by confocal microscopy (Confocal Laser Scanning Microscope, LSM 720, Carl Zeiss, Oberkochen, Germany). Real Time-PCR RNA was extracted from the cell-laden GelMA and GelMA-bioglass cryogels (n = 3) with Trizol (Life Technologies, Waltham, MA, USA). The concentration of the extracted RNA was measured with a NanoDrop spectrometer (ND-2000; NanoDrop Technologies), and the RNA was reverse-transcribed into cDNA using a TOPscript™ Reverse Transcriptase Kit (Enzynomics). Real-time PCR was performed using an ABI StepOnePlus™ real-time PCR system (Applied Biosystems).
cDNA samples were analyzed for the relative gene expression of OCN, collagen I, and Runx2, with GAPDH used as the housekeeping gene. Relative expression of the genes of interest was calculated using the 2^−ΔΔCt method. Primer sequences used in RT-PCR were: GAPDH (Forward Alizarin Red Staining After 21 days of differentiation, cells were washed with DPBS twice and fixed using 4% paraformaldehyde for 15 min at room temperature. Fixed cells were stained with 2% alizarin red staining solution for 20 min and washed three times with distilled water for 5 min each. Calvarial Defect Surgical Procedure All experiments were carried out in accordance with the Guide for the Care and Use of Laboratory Animals by Seoul National University (Approval No. SNU-141229-3-6). All operations were performed under Zoletil 50 (Virbac, Carros, France) and Rompun inj. (Bayer, Leverkusen, Germany) anesthesia to minimize animal suffering. Twelve female BALB/c mice (OrientBio Co., Seoul, Korea) were used for the calvarial defects. Mice were caged and handled in a sterile room at 22 °C and 50% humidity with 12 h light/dark cycles. Before the calvarial defect surgery, mice were anesthetized via intraperitoneal injection. Under anesthesia, an incision was made on the forehead, and a 4-mm-diameter calvarial defect was created using a trephine bur attached to a hand drilling machine. Cryogels were transplanted into the defect sites, and samples were collected eight weeks after transplantation. Microcomputed Tomography Analysis The defect sites of the mice were collected and fixed with 4% paraformaldehyde solution. Images of the surgical sites were obtained using a SkyScan 1172 at a source voltage of 59 kV, a source current of 167 μA, and an exposure time of 40 ms. The projected images were reconstructed into 3D images for further analysis using ReCon microCT software from SkyScan. Histological Analysis After fixing the defect area and surrounding tissue of the skulls in 4% paraformaldehyde solution for 24 h, the skulls were decalcified using 14% ethylenediaminetetraacetic acid (EDTA) at pH 7.4 for 4 days. Then, the skulls were embedded in paraffin and longitudinally sectioned at a thickness of 5 μm. Sectioned samples were deparaffinized using xylene solution and gradually washed with tap water. Samples were stained with H&E and Masson's trichrome (MTC) and analyzed using a light microscope (Olympus, Tokyo, Japan). Statistical Analysis Quantitative data in this paper are presented as mean ± standard deviation. Statistical significance was determined using one-way analysis of variance (ANOVA), with * p < 0.05, ** p < 0.001, and *** p < 0.0001. Synthesis of Methacrylated Gelatin and Bioglass Gelatin methacrylate was synthesized to provide cross-linking sites to conventional gelatin, because conventional gelatin requires glutaraldehyde, which has been reported to be cytotoxic, to form permanent chemical cross-links. Methacrylation of the gelatin was confirmed using 1H NMR, as the characteristic peaks of acrylic hydrogens were present at 5.3 and 5.5 ppm (Figure S1). Then, bioglass nanoparticles (BGN) were prepared by a sol-gel synthesis route (Figure 2a). The size of the bioglass was measured using scanning electron microscopy (SEM). As shown in Figure 2b, the synthesized bioglass particles were round, amorphous in morphology, and around 53.83 ± 13.01 nm in size. Fourier-transform infrared spectroscopy (FT-IR) was used to further analyze the structure of the bioglass (Figure 2c).
The FT-IR spectrum showed the characteristic bending vibrations of the phosphate group at 561 and 603 cm−1 and the Si-O-Si stretching band at 1080 cm−1. After synthesizing the BGN, their bioactivity post-implantation was tested. BGN were immersed for 14 days in simulated body fluid (SBF) solution and compared with as-prepared BGN by X-ray powder diffraction (XRD). As shown in Figure 2d, the same hydroxyapatite peaks as those noted by Constantz et al. and Ishikawa et al. were present in BGN soaked in SBF solution, confirming the bioactivity of the synthesized BGN. Characterization of Bioglass-Embedded Methacrylated Gelatin Cryogel GelMA cryogels were synthesized by chemically cross-linking the GelMA solution using ammonium persulfate and N,N,N′,N′-tetramethylethylenediamine with various concentrations of BGN (Figure 3a). Then, the swelling ratio of the BGN-embedded GelMA cryogels was measured. As shown in Figure 3b, a descending trend of the swelling ratio, from 8.30 to 6.37, was observed as the concentration of bioglass in the cryogels was increased. Furthermore, Young's modulus of all groups was measured by analyzing the stress vs. strain curves from the mechanical testing. As expected, Young's modulus of the cryogels increased, from 84.67 ± 16.17 to 178 ± 47.70 kPa, as the concentration of BGN was augmented (Figure 3c). The degradation rates of the cryogels were tested for their dependency on the bioglass concentration in the presence of collagenase II. Collagenase II is widely used to test the degradation times of GelMA-based cryogels. The degradation time was not significantly different between the control group and the experimental groups due to the relatively low concentration of BGN compared to that of GelMA (Figure S2). The remaining mass percentage after degrading the GelMA cryogel for 7 days was 49.80 ± 2.91%, while the remaining mass percentages of the cryogels with 0.5%, 1.5%, and 2.5% BGN were 66.33 ± 8.73%, 60.48 ± 2.69%, and 64.02 ± 6.11%, respectively.
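The moduli quoted above (84.67 ± 16.17 to 178 ± 47.70 kPa) come from the slope of the initial linear region of the compressive stress-strain curve, as described in the Methods. A minimal sketch of that calculation in Python, on hypothetical data (the array values and the 10% strain cutoff are illustrative assumptions, not measurements from this study):

import numpy as np

# Hypothetical compressive test data: strain (dimensionless) and stress (kPa).
strain = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.15, 0.20])
stress = np.array([0.0, 1.8, 3.7, 5.5, 7.3, 9.1, 15.0, 22.0])  # kPa

# Restrict the fit to the initial, approximately linear region
# (here assumed to be strain <= 0.10).
linear = strain <= 0.10

# Young's modulus E = dσ/dε: slope of a least-squares line through that region.
modulus_kpa, intercept = np.polyfit(strain[linear], stress[linear], 1)
print(f"Young's modulus ≈ {modulus_kpa:.0f} kPa")
# ≈ 91 kPa for this toy data, i.e., of the same order as the softest
# (GelMA-only) group reported above.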
Because the increase in Young's modulus seemed to be linked to the increase in the bioglass concentration incorporated into the cryogel, the internal structures of the cryogels were observed to ensure that the differences in porosity between the experimental and control groups were negligible. The internal structures of the BGN-embedded GelMA cryogels were observed using SEM (Figure 4a). Then, the SEM images of the lyophilized BGN-embedded GelMA cryogels were analyzed further with ImageJ for average pore area. As shown in Figure 4b, there was no significant difference among the groups with different concentrations of BGN. Since the BGN were physically mixed with the GelMA solution when the cryogels were fabricated, both the experimental groups and the control group possessed similar internal structures. Furthermore, the cell permeability of the scaffolds was examined by seeding green fluorescent protein (GFP)-tagged HeLa cells on GelMA and GelMA-bioglass cryogels and using confocal microscopy after 24 h (Figure 4c). Then, in order to quantify the cell permeability of the scaffolds, the number of cells was counted and, as expected, all cryogel groups had uniform cell permeability. Hydroxyapatite Formation in SBF Solution Various studies have confirmed the bioactivity of intact bioglass nanoparticles; yet, the bioactivity of cryogels in which BGN are embedded has not been emphasized. First, in order to measure the Si4+ and Ca2+ ion release from the bioglass-embedded cryogels, BGN-embedded cryogels were incubated in deionized water for 7 days and the release was measured using an inductively coupled plasma atomic emission spectrometer. As shown in Figure 5a, as the concentration of nanobioglass in the cryogel increased, the released amounts of Si4+ and Ca2+ ions increased as well. To measure the bioactivity, BGN-embedded cryogels were incubated in SBF solution for 7 days, and the phosphorus and silicon ion release kinetics were measured. With increasing incubation time, the amount of released silicon ions increased while the amount of released phosphorus ions decreased (Figure 5b).
After seven days of incubation, the internal structures of each group were observed through SEM for possible hydroxyapatite formation. As shown in the SEM images, we were able to find formed hydroxyapatite particles (Figure 6a). Furthermore, the calcium to phosphate ratio on the surfaces of the cryogels was calculated to be 1.63, which is very close to the textbook value of the calcium to phosphorus ratio of hydroxyapatite (Figure 6b). Then, the relationship between the formed hydroxyapatite and the concentration of BGN was analyzed with XRD. Figure 6c shows that the intensity of the hydroxyapatite peaks increased as the concentration of BGN increased. Among the groups, the 2.5% BGN-embedded cryogel resulted in the highest intensity. Figure 6. Bioactivity of bioglass. (a) SEM images of GB0.5%, GB1.5%, and GB2.5% cryogels after immersion in SBF solution for seven days. The black arrow indicates formed hydroxyapatite; scale bar = 1 μm; (b) EDS mapping of GB2.5%; (c) XRD of G, GB0.5%, GB1.5%, and GB2.5% cryogels. Cell Viability Analysis Cell viability tests were performed to investigate the cytotoxicity of the groups for further cellular responses. Human tonsil-derived mesenchymal stem cells (hTMSCs) were seeded on the BGN-embedded cryogels and, after 24 h of incubation, a live/dead viability test was performed. Fluorescence images of each group after incubating the hTMSCs on the cryogels for 24 h were obtained (Figure 7a). Then, the cell viability of each group was calculated based on the fluorescence images.
The percentage of cell viability decreased slightly as the concentration of bioglass increased; however, all groups showed over 90% cell viability (Figure 7b). As for the seeding efficiency of hTMSCs on GelMA and bioglass nanoparticle-incorporated cryogels, the percentage of seeding efficiency decreased as the concentration of bioglass increased (Figure 7c). This indicates that increasing the bioglass concentration led to fewer cells being seeded on the surface of the GelMA. However, all groups showed over 30% cell seeding efficiency. Enhanced Osteogenic Responses of hTMSCs on GelMA-Bioglass Cryogel After confirming hydroxyapatite formation and cytotoxicity, the osteogenic potential of the BGN-embedded cryogels was verified by seeding hTMSCs on top of the cryogels and culturing the cells in osteogenic medium for 7 and 14 days to measure relative osteogenic gene expression. On both day 7 and day 14, quantitative real-time PCR analysis of osteogenic markers confirmed that hTMSCs seeded on cryogels with higher concentrations of bioglass (1.5% and 2.5% BGN) showed augmented bone-related gene expression compared to the GelMA cryogel at day 14: 7.84-fold (1.5% BGN) and 13.13-fold (2.5% BGN) increases in osteocalcin, 13.56-fold (1.5% BGN) and 29.92-fold (2.5% BGN) increases in collagen I, and 6.47-fold (1.5% BGN) and 12.83-fold (2.5% BGN) increases in Runx2 (Figure 8). However, the difference in gene expression between the GelMA cryogel and the GelMA-0.5% bioglass cryogel was not significant on day 7 and day 14. Further investigating the cellular responses of hTMSCs, the cells were cultured on petri dishes in osteogenic medium with similar concentrations of bioglass for 21 days. Then, the amount of calcium deposition in each sample was measured using alizarin red staining (Figure S3). The result was similar to that of the quantitative real-time PCR: higher calcium deposition was observed as the bioglass dosage increased. In Vivo Bone Regeneration after Implanting Bioglass-Embedded GelMA Cryogel To examine the optimal concentration of BGN embedded in GelMA cryogels for effective bone regeneration, in vivo studies were conducted. GelMA-bioglass cryogels were implanted into the calvarial defect areas of balb/c mice. Then, bone regeneration of the defect areas was evaluated using microCT eight weeks after transplantation (Figure 9a). Corresponding to the in vitro cell studies, the bone volume/total volume (BV/TV) ratio of the cryogels containing the highest bioglass concentration was 3.89-fold higher compared to the control group (Figure 9b). In addition, cell penetration into the scaffold was evaluated via histology.
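The fold changes quoted above are the kind of values produced by the 2^−ΔΔCt calculation described in the Real-Time PCR methods. A minimal sketch of that arithmetic in Python (the Ct values below are made-up numbers for illustration, not data from this study):

# Relative expression by the 2^(-ΔΔCt) method:
#   ΔCt  = Ct(target) - Ct(housekeeping gene, here GAPDH)
#   ΔΔCt = ΔCt(treated sample) - ΔCt(control sample)
#   fold change = 2 ** (-ΔΔCt)
def fold_change(ct_target_treated, ct_gapdh_treated, ct_target_control, ct_gapdh_control):
    d_ct_treated = ct_target_treated - ct_gapdh_treated
    d_ct_control = ct_target_control - ct_gapdh_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Illustrative Ct values only: a target gene amplifying ~3.7 cycles earlier
# (relative to GAPDH) in the treated group corresponds to ~13-fold up-regulation.
print(fold_change(22.3, 17.0, 26.0, 17.0))  # ≈ 13.0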
Histological analysis of H&E staining and Masson's trichrome (MTC) staining showed that, as the concentration of bioglass increased, more regenerated bone tissue along with collagen was found in the defect area where the nanobioglass-embedded cryogels were implanted (Figure 9c). Histological analysis and microCT confirmed that the GelMA-2.5% bioglass cryogel enhanced bone regeneration the most compared to the other experimental groups.
Discussion Synthetic calcium phosphate-based materials have been widely studied for regenerating bone defect areas. Among them, bioglass is valued for its ability to enhance osteogenic differentiation by forming hydroxyapatite and releasing osteoinductive ions; various studies have demonstrated the invaluable effects of osteoinductive ions on the bone regeneration process. In addition to supplying calcium phosphate, providing an extracellular matrix-like environment enhances osteoregeneration during the bone healing process. Because it is capable of mimicking the natural ECM and has a porous structure, the cryogel is widely used in tissue engineering. From this idea, we embedded bioglass into GelMA cryogels in an attempt to provide an osteogenic-friendly environment (Figure 1). Furthermore, the macroporous structure of GelMA would provide a cancellous bone-like structure into which host stem cells can migrate. A closer look at the microstructure of the GelMA-based cryogel showed that the overall pore size was unaffected by the addition of bioglass (Figure 4a). However, physical adhesion between GelMA and bioglass may have contributed to the bioglass concentration-dependent reduction in the swelling ratio and increase in the mechanical properties of the cryogel. Pluharova et al. have previously demonstrated that an ion-dipole interaction persists between amide groups and calcium ions. Thus, this physical adhesion, in combination with ion-dipole interactions between calcium ions from the bioglass and amide groups in GelMA, may have contributed to the overall physical characteristics of the cryogels. Mineralization of inorganic minerals plays an important role in maintaining bioactivity in body fluids. The mechanism of the surface chemistry and bioactivity of bioglass powder in SBF solution has been widely studied. From the results obtained in this study by testing the bioactivity of the bioglass-embedded cryogels, the formation of hydroxyapatite was observed via XRD. However, an ion concentration threshold seemed to be present according to the XRD data. The amount of mineral deposited increased gradually with the concentration of bioglass up to 1.5%, with a sharp increase at 2.5% bioglass concentration (Figure 6c). This result suggests that a minimum amount of bioglass is required to trigger hydroxyapatite formation in the early period, since the rate of hydroxyapatite formation depends on the presence of Ca2+ and P3+ ions. This result is similar to the findings of Hench et al.,
as the authors discovered that the rate of hydroxyapatite formation depends on the concentration of bioglass. The ion release rates of Si4+, Ca2+, and P3+ ions from bioglass immersed in SBF solution revealed that Si4+ ions were released to form silanol groups, enhancing overall hydroxyapatite formation by forming apatite nuclei (Figure 5). After the apatite nuclei formed, saturation of Ca2+ ions was reached in the solution. Then, the hydroxyapatite layer was formed as the P3+ ions that had been released into the solution were recaptured and adsorbed onto the apatite nuclei. Thus, the concentration of P3+ ions decreased as time passed. Furthermore, the mineralized surfaces of the cryogels were examined via EDS mapping, and only the GelMA-2.5% bioglass group showed apatite with a Ca2+/P3+ ratio of 1.63; this is similar to the theoretical Ca2+/P3+ value of 1.67 (Figure 6b). This result suggests possible direct cryogel-bone bonding, which may increase in vivo bone generation. The osteogenic effects of bioglass-embedded GelMA cryogels on hTMSCs were measured using RT-PCR and alizarin red staining. In our studies, the osteogenic genes OCN, Runx2, and Col I were further up-regulated according to the concentration of bioglass on day 7 and day 14 (Figure 8), and a similar trend was observed when calcium deposition was measured after 21 days via alizarin red staining (Figure S3). Osteogenic differentiation of hTMSCs was enhanced by ion dissolution from bioglass in the osteogenic medium, since a similar trend in the Ca2+ and Si4+ ion release rates was observed according to the bioglass dosage embedded in the cryogel (Figures 5 and 8). Calcium ions released from the bioglass enhanced mineralization, guiding stem cells toward osteoblasts. Furthermore, S. Maeno et al. also reported that Ca2+ ions induce osteogenic differentiation in both monolayer and 3D culture. Our study confirmed that osteogenic differentiation of hTMSCs is dependent on the concentration of bioglass incorporated in the cryogel. The Si4+ ion, a bioglass component, is well known to promote bone formation and calcification in the early stages, though high concentrations of Si4+ ion increase cytotoxicity. In our studies, large amounts of silicon had not been incorporated into the bioglass; however, as expected, the cryogel with the highest concentration of bioglass showed the most cytotoxicity among the groups (Figure 7b). All groups showed over 90% cell viability, which suggests that although an increased concentration of Si4+ leads to higher cytotoxicity, the concentrations used in this study are safe to use. Based on the in vitro results, cryogels were implanted for eight weeks and examined to determine whether bone regeneration in a calvarial defect improved. As suggested by the in vitro results, bioglass helped bone regeneration of the defect area. According to the microCT data, cryogels with the highest concentration of bioglass nanoparticles showed the most bone healing compared to the other cryogel groups (Figure 9). Furthermore, a higher concentration of collagen was observed via Masson's trichrome and H&E staining in the highest bioglass concentration group. In conclusion, bioglass has an osteoinductive effect in the bone healing process. Conclusions From our study, we demonstrated that bioglass-incorporated GelMA cryogels effectively promote bone healing. The tested concentrations of bioglass-embedded cryogels showed enhanced mechanical strength without affecting porosity or cytotoxicity.
Furthermore, the cryogels not only induced osteogenic differentiation of hTMSCs but were also bioactive, forming hydroxyapatite on their surface. Thus, by manipulating the dose-dependent nature of bioglass, bioglass-incorporated cryogels could be a potential candidate for clinical application in bone tissue engineering.
Q: Are Written and Spoken English distinct languages? First of all, I am not a linguist, but I was thinking the other night that being literate was almost the same as being bilingual. My reasoning is that sign language is distinct from written and spoken English, and sign language is a "visual" language in the same way that written English is. Also, homonyms and homophones would be entirely analogous, if these are considered different languages. It seems to me that the only reason they could be considered the same language is that they follow each other so closely. That is, there are many one-to-one literal translations between them. This distinction is relevant in natural language processing contexts, because it would mean that "understanding" a language and "translating" between languages would be distinct problems. I believe that this could reduce the problem space substantially. Surely, I am not the first person to think this, but as I have no formal training on the subject, I'm not sure what the real story is. So, are written and spoken languages considered distinct? If not, why not? A: Sign languages and spoken languages are both real, natural languages, learnable and normally learned automatically by children -- without any special training -- if they are used in the speech community the children grow up in. Written languages, however, are always representations of some spoken language; one has to be specially trained in the technology of literacy. An analogy is horses vs cars -- horses are natural, cars are technological. Horses have drawbacks and limitations compared with cars as a means of transportation, but one big advantage of horses is that they can make more horses without a factory, or even any education. The fact that most people in a community are literate does not automatically result in the children of that community becoming literate, the way a common language generates new speakers (or signers). They have to go to school -- no school, no literacy. But they all speak or sign, whether they're literate or not. A: As far as I can surmise, written and spoken are not quite two distinct languages. A written language and a spoken language are, in general, two different media of communication. What keeps them from being distinct, however, is that a written language is a representation of a spoken language. Each word in a written language corresponds to a word in a spoken language. The first paragraph of the wiki article on written language states this in detail: A written language is the representation of a language by means of a writing system. Written language is an invention in that it must be taught to children; children will pick up spoken language (oral or sign) by exposure without being specifically taught. A written language exists only as a complement to a specific spoken language, and no natural language is purely written. However, extinct languages may be in effect purely written when only their writings survive. So in effect, a written and a spoken language are two separate representations of the same language. In theory a human language can exist with only one or the other -- the Iroquois languages are among the most famous languages which historically have no written form. But because of the one-to-one correlation of words, they are best thought of as a pair, in effect constituting what we call a single language. Sign language, on the other hand, does not have a one-to-one correlation to any spoken language.
Moreover, American and British Sign Languages are two entirely different languages, even though they are both spoken in English speaking countries. A sign language is neither a written nor an orally spoken language. Each one is a language of its own. A: Written English comes in many varieties. When people write like they speak, their written language is more or less a transcription of what they say orally. When they write the kind of formal expository prose that is usually required for PhD dissertations, they write a variety that normally follows rules that speakers don't follow when they speak. This frequently yields stuffy, stilted prose -- what's often derisively called academic prose, and represents the worst side of the language: verbose, boring, complex, pretentious, and just plain undecipherable (cf. postmodernist parlance) -- or clear, easy-to-grok prose. Formal written English is a separate language only in the sense that it must conform to rigid rules laid out in great and sometimes confusing and conflicting detail and arbitrary rules in style manuals -- and not all style manuals agree. All the grammar anxiety about language comes from confusing the spoken and written forms. Although they are essentially the same language, they are different varieties and require some different skills, and proficient speaking doesn't guarantee proficient formal writing (and vice versa). I remember reading somewhere that Thomas Jefferson wasn't a good speaker. He certainly was an excellent writer of formal expository prose, though. The language taught in textbooks is a written language and usually artificial. When I studied German, I was assured that what my textbooks taught me was something called "Schuldeutsch", a language found only in textbooks and not on the lips of native speakers of German. When I studied Japanese, I was assured that my textbooks were teaching me something called "Nihongo" and not the everyday language spoken by the Japanese in Japan. When I started teaching English as a foreign language, I decided that the textbooks I was using were teaching students an artificial variety of English -- and listening to the tapes and CDs that accompanied those texts confirmed how phony that variety of English was, so I started writing all my own materials using my idiolect and using "authentic" English as spoken and written by other native speakers (TV news reports mostly). The other answers provided here are excellent. I especially like John Lawler's sentence "... we might even think of English writing as corresponding to a dialect of English that has not been spoken anywhere since around 1500." That just about sums up the difference between spoken and written English. Most of us don't speak like we write or write like we speak. Speaking is natural and a naturally acquired skill; writing is an unnatural act and mechanical and "must be carefully taught".
Unravelling the complex interplay between drought and conflict

Climate change will likely exacerbate droughts, increase regional water demands and affect agricultural yields. In addition, projected population growth combined with a lack of 'good' governance is likely to amplify the negative impacts of droughts and crop failure in the future as agriculture increasingly expands onto marginal lands. There is global concern about these trends, because crop failure, droughts, increasing pressure on suitable agricultural land and rangeland for livestock, and changes in the quality of governance can also increase the risk of conflict and (organized) violence.

In this presentation we explore the strength and impact of the climate-conflict trap. We use historical drought simulations and future drought projections to study the link between conflict and drought. Conflict data are taken from the Uppsala Conflict Data Program and combined with hydrological simulations from the global hydrological model PCR-GLOBWB.

The results show that drought occurrence is expected to increase under all climate scenarios, with stronger impacts for the higher emission scenarios. At the global scale, on the other hand, conflicts are likely to decrease as increased economic wealth compensates for the increased climate vulnerability.

This work helps us to better understand the interplay between the natural hydrological system and society. Our broader aim is to start unravelling the complex dynamics between changes in drought, society and the risk of conflict, in order to better understand unsustainable and potentially devastating pathways for the coming decades.
Do Quality Improvement Collaboratives Improve Antimicrobial Prophylaxis in Surgical Patients? TO THE EDITOR: The conclusions stated in the abstract of the article by Kritchevsky and colleagues do not accurately represent the study's findings. In fact, there is substantial reason to conclude that the collaborative was of meaningful clinical value. For example, improvement was statistically significant in 4 of the 6 mean performance measures in the intervention group but in only 3 of 6 measures in the feedback-only (control) group. Moreover, improvement in the mean all-or-none measure was 15.6% greater in the intervention group than in the control group, which exceeds the 15% difference sought in the power calculation for the primary outcome. Although that particular observed difference was not statistically significant, the upper bound of the CI for the all-or-none measure does not exclude a between-group advantage as great as 49% for the intervention group. Although the article did not include the changes observed in each hospital, the wide CIs for many of the before and after measures (for example, 15.6 percentage points for the primary outcome in the control group, which is 20.8% of the baseline rate) indicate substantial heterogeneity of response to both interventions among individual hospitals, including quite large responses in some hospitals. Heterogeneity of response to either biological or social interventions is not simply noise; rather, it serves as the starting point for understanding the mechanisms involved in those responses. The real value of this study therefore seems to lie in the discovery of that heterogeneity and the opportunity for explaining it (why feedback alone or the feedback-plus-collaborative model was effective, for whom, and in what circumstances) as much as in the demonstration of whether, in the aggregate, each intervention worked. Numbers are not explanations, so it is fortunate that Kritchevsky and colleagues have pinpointed in the discussion several local and external factors that could explain the differential effect of the interventions within each study group (for example, diversity of staff training and experience, variation in total and type of staff participation, and variable involvement of institutional leadership). Unfortunately, however, the study was not designed to explain how the best individual hospital performers differed from the worst, and why. For example, we have no way of knowing whether the history of successful improvement efforts differed among participating hospitals, or exactly which staff participated in the collaborative and why, and how they were involved in the care processes being changed. Changing health care work is a complex, context-bound social process. Evaluation of any improvement intervention therefore requires clarity regarding the theory that underlies its selection; recognition of the contributions of individual stakeholders to its effects; awareness of the sequence and execution of its individual steps; recognition of the power relationships among people involved in its implementation; understanding of the multiple realities that affect its execution (timing, culture, resource allocation, staffing, and competing priorities); and awareness that most interventions start by being adapted to fit local circumstances and that the interventions and the contexts in which they are implemented evolve over time in response to the resulting changes.
For their findings to be meaningful, future studies of improvement interventions will need to include these elements in their study designs.
Sleep tight: a centralised system for melatonin prescribing. Objective To describe our experience of developing a centralised system for the prescription of melatonin in children. Methods Melatonin is used widely to help initiate sleep in children. A recent randomised controlled trial demonstrated that melatonin brought sleep onset forward by an average of 20 min and prolonged sleep by an average of over 40 min.1 The cost of paediatric melatonin prescriptions in our Trust from April to November 2010 was £231 236.38 (extrapolated annual spend £462 472.76). As part of our cost improvement processes, prescribing practices and charges were reviewed. The cost of melatonin to us from community prescriptions (FP10) varied widely depending on the individual pharmacy charge, the formulation prescribed and the brand of melatonin dispensed. There was no standard quality control process. Hospital pharmacists and clinicians produced a guideline for the prescription of melatonin in paediatric patients, rationalising the formulations and preparations available. Families were informed that melatonin would be prescribed only from the hospital pharmacy. Results Families have been very co-operative with the new procedures. Issues arose in the first few weeks due to a significant shortage of melatonin in stock relative to the amounts being prescribed. One batch of melatonin 3 mg MR capsules (imported from the USA) contained 5.6 mg of melatonin; this product is not required to meet pharmacopoeial standards and, unlike melatonin capsules supplied within the UK, does not have an upper limit on actual content. Following discussion, the decision was made to stop using this product. Patients are managed on the standard-release capsules and licensed 2 mg MR tablets (Circadin) alone. Following implementation of in-house melatonin dispensing, the total spend on melatonin for January to March 2011 was £52 506.64. The predicted annual expenditure is therefore £210 026.56 (a cost saving of £252 446.20 per year). Conclusion The centralised prescribing of an unlicensed medication has rationalised prescribing practice and ensured quality control procedures. It has allowed the development of a database allowing easier monitoring and audit of prescribing and is projected to result in significant cost savings. Learning points from the process include the importance of: accurate estimation of the likely monthly demand, perhaps by auditing the prescribing practice of all clinicians for one month prior to implementation in order to gain a baseline; clear identification of each stage in the prescription pathway, from the family requesting a prescription to collecting the medication; and proactive communication to all pharmacy staff about the reasons for change, so that families are given consistent information. This experience is being used to inform the centralisation of prescribing of other specials within our Trust and may be extrapolated to similar services within other Trusts.
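As a quick check on the figures quoted above, the arithmetic can be reproduced as follows. The annualisation factors (doubling the April-November 2010 figure and multiplying the January-March 2011 quarter by four) are inferred from the published numbers and are assumptions about how the extrapolation was done, not a method stated in the abstract.

```python
# Reproduce the cost arithmetic quoted in the abstract. The annualisation
# factors (x2 for the 2010 baseline figure, x4 for the Jan-Mar 2011 quarter)
# are inferred from the published numbers and are assumptions about how the
# authors extrapolated.

baseline_spend = 231_236.38      # April-November 2010 FP10 spend (GBP)
post_change_quarter = 52_506.64  # January-March 2011 in-house spend (GBP)

baseline_annual = baseline_spend * 2          # 462,472.76 as quoted
post_change_annual = post_change_quarter * 4  # 210,026.56 as quoted
annual_saving = baseline_annual - post_change_annual

print(f"Extrapolated baseline annual spend: {baseline_annual:,.2f}")
print(f"Predicted annual spend after centralisation: {post_change_annual:,.2f}")
print(f"Projected annual saving: {annual_saving:,.2f}")  # 252,446.20
```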
Static Analysis Framework Based on a Multi-Agent System. The methodology of static analysis has been advancing rapidly and consistently, and intelligent approaches are increasingly being applied to static analysis techniques for debugging and detailed program inspection. This paper introduces a novel framework for static analysis techniques based on a multi-agent system. The work investigates an innovative approach for running multiple static analysis methods intelligently in a distributed environment. To verify the proposed framework, we implement traditional static analysis methods as multi-agent services and chart a course towards autonomous and efficient static analysis techniques. Our approach exploits agent characteristics in a multi-agent context: the agent services communicate with one another, adjustments were made to the agent environment, various restrictions were applied, and the system's effects were measured. To demonstrate the respective methods' strengths and weaknesses, each system was analyzed using several performance metrics. The paper shows that applying multiple static analyses in this way enhances system reliability and scalability, and the framework provides server-side static analyses. Our solution achieves high performance within a reasonable time.
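The paper itself contains no code; purely as an illustration of the architecture it describes (independent analysis agents, each wrapping one static check, coordinated over a shared input), here is a minimal Python sketch. All names and the two toy checks are hypothetical and are not taken from the proposed framework.

```python
# Minimal sketch of a multi-agent static-analysis pipeline (hypothetical names,
# not the framework described in the paper). Each "agent" wraps one naive
# analysis, and a coordinator fans the same source text out to all of them.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    agent: str
    line: int
    message: str

def unused_import_agent(source: str) -> List[Finding]:
    # Naive check: flag imported module names that never reappear elsewhere.
    findings = []
    lines = source.splitlines()
    for i, line in enumerate(lines, start=1):
        if line.startswith("import "):
            name = line.split()[1].split(".")[0]
            rest = "\n".join(lines[:i - 1] + lines[i:])
            if name not in rest:
                findings.append(Finding("unused-import", i, f"'{name}' is never used"))
    return findings

def long_line_agent(source: str, limit: int = 99) -> List[Finding]:
    return [Finding("long-line", i, f"line exceeds {limit} characters")
            for i, line in enumerate(source.splitlines(), start=1) if len(line) > limit]

def run_agents(source: str, agents: List[Callable[[str], List[Finding]]]) -> List[Finding]:
    # The coordinator: run every agent concurrently and merge their reports.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(source), agents)
    return [finding for result in results for finding in result]

if __name__ == "__main__":
    code = "import os\nprint('hello world')\n"
    for finding in run_agents(code, [unused_import_agent, long_line_agent]):
        print(f"[{finding.agent}] line {finding.line}: {finding.message}")
```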
Riled by the Thursday upset across the Bay in which Stanford women's basketball (18-3, 8-2 Pac-12) lost to UC Berkeley (14-7, 5-5 Pac-12) by a single measly point, 81-80, the Cardinal brought their A-game this Saturday at Maples Pavilion, slaughtering the Golden Bears 75-50 after holding the lead for the entirety of the game. Sliding three spots in the rankings this week to No. 11, Stanford women's basketball (6-1, 0-0 Pac-12) dropped a close one, its first loss of the season, against the No. 24 Gonzaga Bulldogs (8-1, 0-0 WCC) by a score of 73-79 this past Sunday in Spokane, Washington. In day two of the Rainbow Wahine Showdown in Honolulu, Hawai'i, No. 8 Stanford women's basketball (5-0) improved their undefeated season record with a 71-49 victory over American University (3-2). Led by senior forward Alanna Smith's career-high 25-point performance, the Cardinal produced 26 points in the fourth quarter to outscore the Eagles. In a game reminiscent of last season's round-of-32 playoff game, No. 8 Stanford (4-0) defeated Florida Gulf Coast University (2-2) 88-65 on opening day of the Rainbow Wahine Showdown on Friday. Junior forward Nadia Fingall recorded a career-high 24 points as the Cardinal outlasted the Eagles. Winning their first three games for the first time in three years, the No. 7 Stanford Cardinal (3-0, 0-0 Pac-12) mercilessly crushed the University of San Francisco Dons (1-1, 0-0 West Coast Conference) by a stunning 34 points Thursday night at Maples Pavilion, ending the phenomenal game with a final score of 96-62. Coming off two stellar wins last week, the No. 7 ranked Stanford women's basketball team looks to extend their two-game win streak against the University of San Francisco tonight at 7 p.m. at Maples Pavilion. Coming off a season-opening win against UC Davis, No. 7 Stanford women's basketball (1-0) plays host to Idaho (1-0) on Sunday afternoon.
I was disappointed to read that, as with all other breakthroughs, this is not a “now” solution. The headline caught my eye because this is a close-to-home issue for me. When you lose a parent and a sibling to this disease, you are pretty much walking on thin ice in your senior years. I have looked for information on preventive care and supplements recently, but all of them have different ingredients. My guess is that maybe none of them really is the solution. This is not good news for people for whom the clock is ticking. A lot of valued and loved people have been lost to this disease. I'm sure there would never be a shortage of people willing to have the treatments available to prevent this destructive disease. I have never heard of any treatment other than the usual -- diet, no stress, adequate sleep, etc. -- so I don't know what they are treating Alzheimer's with at present. That's news to me.
Case-control study of reproductive risk factors for breast cancer - a pilot study. AIM: To study the effect of reproductive factors on breast cancer risk in Niger Delta women. SETTING: University of Benin Teaching Hospital and Central Hospital, Benin. DESIGN: Case-control study. SUBJECTS: Seventy-one women diagnosed with breast cancer and their age-matched controls. FINDINGS: Mean age of cases was 46.7 years; of controls, 45.7 years. Family history of breast cancer (RR 2.03, P 0.12, CI 1.71-2.40), parity less than three (RR 1.42, P 0.077, CI 1.032-1.964), cumulative breastfeeding history of less than two years (RR 1.28, P 0.21, CI 0.93-1.77) and early menarche, at less than 13 years (RR 1.29, P 0.18, CI 0.91-1.84), were associated with a higher risk of breast cancer. However, none of the factors attained statistical significance. CONCLUSION: Our pilot study indicates that family history of breast cancer, parity less than three, cumulative breastfeeding duration of less than two years and early menarche are associated with higher breast cancer risk in Niger Delta women.
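For readers unfamiliar with the statistics reported above, the sketch below shows the standard relative-risk calculation with a log-normal 95% confidence interval. The 2x2 counts are hypothetical, chosen only so the totals match the 71 cases and 71 controls; they are not the study's data, and the resulting RR is not one of the reported estimates.

```python
# Standard relative-risk (risk ratio) calculation with a 95% CI, shown on
# hypothetical counts -- illustrative only, NOT the study's data.
import math

# Hypothetical 2x2 table (exposure = family history of breast cancer)
a = 12   # exposed cases
b = 6    # exposed controls
c = 59   # unexposed cases
d = 65   # unexposed controls

risk_exposed = a / (a + b)      # proportion of cases among the exposed
risk_unexposed = c / (c + d)    # proportion of cases among the unexposed
rr = risk_exposed / risk_unexposed

# Log-normal approximation for the 95% confidence interval
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lower:.2f} to {upper:.2f})")
```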
Dietary dihydromyricetin supplementation enhances antioxidant capacity and improves lipid metabolism in finishing pigs. Chronic diseases have become a potential danger to human health and are of great concern. Given that pigs are a suitable animal model for human nutrition and metabolism because of their anatomical and physiological similarity to humans, this study used 24 castrated male Duroc Landrace Yorkshire (DLY) pigs as experimental subjects to explore the effects of dietary dihydromyricetin (DHM) supplementation on antioxidant capacity and lipid metabolism. The results showed that dietary supplementation with 300 and 500 mg DHM kg-1 diet increased the serum total superoxide dismutase (T-SOD) level, serum and liver reduced glutathione (GSH), the muscle catalase (CAT) level and the serum high-density lipoprotein cholesterol (HDL-C) level, and reduced the liver malondialdehyde (MDA) level and the muscle triglyceride (TG) level in finishing pigs. Western blot analysis showed that dietary DHM supplementation activated nuclear factor erythroid 2-related factor 2 (Nrf2) and AMP-activated protein kinase (AMPK)/acetyl-CoA carboxylase (ACC) signalling. Real-time quantitative PCR analysis showed that dietary DHM supplementation up-regulated the mRNA levels of lipolysis- and fatty acid oxidation-related genes and down-regulated the mRNA expression of lipogenesis-related genes in finishing pigs. Together, these results provide evidence that dietary DHM supplementation improved antioxidant capacity and lipid metabolism in finishing pigs.
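The relative mRNA levels mentioned above come from RT-qPCR; the abstract does not say how they were quantified, so the snippet below illustrates the commonly used 2^-ΔΔCt method as an assumption about the workflow. The Ct values are invented for illustration and do not come from the paper.

```python
# 2^-(delta-delta-Ct) relative expression, a common way to quantify RT-qPCR
# data such as the lipolysis/lipogenesis genes discussed above. The Ct values
# below are made up for illustration; the paper does not report raw Ct data.

def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in the treated group versus control,
    normalised to a reference (housekeeping) gene."""
    delta_ct_treat = ct_target_treat - ct_ref_treat
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_treat - delta_ct_ctrl
    return 2 ** (-delta_delta_ct)

# Hypothetical example: a fatty-acid-oxidation gene after DHM supplementation
fold = relative_expression(ct_target_treat=24.1, ct_ref_treat=18.0,
                           ct_target_ctrl=25.6, ct_ref_ctrl=18.2)
print(f"Fold change vs control: {fold:.2f}")  # >1 means up-regulated
```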
U.S. Pat. No. 8,079,547 to Rivault, et al., describes exemplary flotation systems especially useful for helicopters and other vessels. The flotation systems may include both floats and life rafts if desired. A single actuator may cause inflation of both a float and a raft; alternatively, separate activators may be employed. Noted as well in the Rivault patent is that “automatic or manual activators could be used for redundancy or back-up purposes.” See Rivault, col. 2, ll. 18-19. Commonly-owned U.S. Pat. No. 7,644,739 to Vezzosi, et al., discloses examples of actuators for inflatable structures. One type of existing main actuation system identified in the Vezzosi patent includes a container of pressurized gas and “a cable and pulley system routed through [an] aircraft.” See Vezzosi, col. 1, l. 30. According to the Vezzosi patent, “[w]hen a pull handle or similar device associated with the system is activated,” a valve opens and “the pressurized gas is discharged from the container and into the life raft causing its rapid inflation.” See id. at ll. 31-35. No discussion of using a pulley system to obtain mechanical advantage is included, however. Nevertheless, the contents of the Rivault and Vezzosi patents are incorporated herein in their entireties by this reference.
Oscillons and Quasi-breathers in the \phi^4 Klein-Gordon model Strong numerical evidence is presented for the existence of a continuous family of time-periodic solutions with ``weak'' spatial localization of the spherically symmetric non-linear Klein-Gordon equation in 3+1 dimensions. These solutions are ``weakly'' localized in space in that they have slowly decaying oscillatory tails and can be interpreted as localized standing waves (quasi-breathers). By a detailed analysis of long-lived metastable states (oscillons) formed during the time evolution it is demonstrated that the oscillon states can be quantitatively described by the weakly localized quasi-breathers. It is found that the quasi-breathers and their oscillon counterparts exist for a whole continuum of frequencies. I. INTRODUCTION Nonlinear wave equations (NLWE) lie at the heart of many fields in physics including hydrodynamics, classical integrable systems, Field Theories, etc. A prototype class of NLWE for a real scalar field φ in 1 + n dimensional space-time is defined by a real function F(φ), given for example as F(φ) = φ(φ² − 1) for the "canonical" φ⁴ model, and F(φ) = sin(φ) for the Sine-Gordon (SG) model. A particularly interesting class of solutions of NLWE is the class of nonsingular ones exhibiting spatial localization. Such spatially localized solutions have finite energy and can correspond to static particle-like objects or to various traveling waves. In Field Theory localized static solutions have been quite intensively studied in a great number of models in various space-time dimensions, see e.g. the recent book of Sutcliffe and Manton. Spatially localized solutions with a non-trivial time dependence (i.e. not simply in uniform motion) of NLWE are much harder to find. In fact the simple qualitative argument stating that "anything that can radiate does radiate" indicates that the time evolution of well localized Cauchy data leads generically either to a stable static solution plus radiation fields, or to the originally localized fields dispersing completely due to the continuous loss of energy through radiation. Since Derrick-type scaling arguments exclude the existence of localized static solutions of such scalar field theories in more than two spatial dimensions, for n > 2 one does not expect to find localized states at all at the end of a time evolution. Simplifying somewhat the above, one could say that in the absence of static localized solutions, localized initial data cannot stay forever localized. In fact for localized initial data there is a time scale, the crossing time τ_c (the time it takes for a wave propagating with characteristic speed to cross the localized region), and a priori one would expect the rapid dispersion of initially localized Cauchy data within a few units of τ_c. One of the rare examples of a time-periodic solution staying localized forever is the famous "breather" of the 1 + 1 dimensional Sine-Gordon (SG) model. In the 1 + 1 dimensional "canonical" (double well potential) φ⁴ theory the pioneering work, based on perturbation theory, by Dashen, Hasslacher and Neveu indicated the possible existence of breather-like solutions. A completely independent numerical study by Kudryavtsev has also indicated that suitable initial data evolve into breather-type states. These results stimulated a number of investigations about the possible existence of nonradiative solutions in the 1 + 1 dimensional φ⁴ model.
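A short aside making the mass scale explicit (a minimal derivation under the standard conventions assumed above, not a reproduction of the authors' equations; the threshold √2 quoted repeatedly later in the paper follows from it):

```latex
% Assumed conventions for the canonical double-well model: the vacua of
% F(\phi) = \phi(\phi^2 - 1) are \phi = \pm 1. Writing \phi = 1 + \psi and
% expanding,
\[
  F(1+\psi) = 2\psi + 3\psi^2 + \psi^3 \simeq 2\psi ,
\]
% so small perturbations about the vacuum obey a Klein--Gordon equation with
% mass^2 = 2. A time-periodic, spherically symmetric perturbation of the form
% \psi \sim e^{-\lambda r}\cos(\omega t)/r then requires
\[
  \lambda^2 = 2 - \omega^2 > 0 ,
\]
% i.e. exponential localization is possible only for \omega < \sqrt{2}, the
% continuum threshold referred to throughout the paper.
```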
After a long history Segur and Kruskal and Vuillermot have finally established that in spite of the above mentioned perturbative and numerical indications time-periodic spatially localized finite energy solutions (breathers) do not exist. Even if genuine localized breathers in the 1 + 1 dimensional 4 model are absent, in view of the perturbative and numerical evidence for the existence of long living states "close" to genuine breathers, it is a natural question how to describe them. Boyd has made a detailed study of time periodic solutions which are only weakly localized in space (i.e. the field possesses a slowly decreasing oscillatory tail) but as long as the amplitude of the oscillatory "wings" are small they still have a well defined core. Boyd has dubbed such solutions "nanopterons" (small wings), we refer to his book for a detailed review. Quite interestingly Bogoluvskii and Makhankov have found numerical evidence for the existence of spatially localized breather-type states in the spherically symmetric sector of 4 theory in 3+1 dimensions. These breather-like objects observed during the time evolution of some initial data are called nowadays "oscillons". Most of the observed oscillon states are unstable having only a finite lifetime. They lose their energy by radiating it (slowly) to infinity. More recent investigations started by Gleiser have revealed that oscillons do form in a fairly large class of scalar theories in various spatial dimensions via the collapse of field configurations (initial data) that interpolate between two vacuum states of a double well potential. Such a spherically symmetric configuration corresponds to a bubble, where the interpolating region is the bubble wall that separates the two vacuum states at some characteristic radius. These works have led to a renewed interest in the subject. It has been found that oscillons have extremely long lifetimes which is already quite remarkable and makes them of quite some interest. These long living oscillon states seem to occur in a rather generic way in various field theories involving scalar fields in even higher dimensional space-times and according to oscillons are also present in the nonabelian SU bosonic sector of the standard model of electroweak interactions at least for certain values of the pertinent couplings. Such oscillons might have important effects on the inflationary scenario as they could form in large numbers retaining a considerable amount of energy. In a recent study of a 1 + 1 dimensional scalar theory on an expanding background exhibited very long oscillon lifetimes, while in Ref., extremely long living oscillons have been exhibited in a 1 + 2 dimensional sine-Gordon model. The sophisticated numerical simulations by Honda and Choptuik of the Cauchy problem for spherically symmetric configurations in the 4 theory in 3 + 1 dimensions,, have revealed some interesting new features of the oscillons. In particular in Ref. it has been found that by a suitable fine-tuning of the initial data the lifetime of the oscillons could be increased seemingly indefinitely, and it has been conjectured that actually an infinitely long lived, i.e. non-radiative, spatially localized solution exists. Furthermore the existence of such a solution would even provide the explanation of the "raison d'tre" and of the observed genericness of long lived oscillons. The eventual existence of a non-radiative breather in this simple and "generic" 4 model in 3 + 1 dimensions would be clearly of quite some importance. 
It should be noted, that in quite a few spatially discrete models, localized time-periodic "discrete breathers" have been shown to exist and they are being intensively studied. One of the motivation of our work has been to clarify if non-radiative breathers indeed exist and to find them directly by studying time-periodic solutions of the NLWE. Our numerical results led us to conclude that no localized (with finite energy) time-periodic solutions exist in the 4 model. On the other hand this study has led us to understand the oscillon phenomenon better and we present a simple but quantitatively correct scenario explaining some important properties of the oscillons (such as their existence and their long lifetimes). Our scenario is based on the existence of a special class of time periodic solutions which are weakly localized in space. Such solutions (which have infinite energy) will be referred to as quasi-breathers (QB). In the present paper we make a detailed study of oscillons in the (already much studied) 4 model in 3 + 1 dimensions. Using a previously developed and well tested time evolution code where space is compactified, thereby avoiding the problem of artificial boundaries, we compute some long-time evolutions of Gaussian-type initial data. We observe that long living (6000 − 7000 in natural units) oscillon states are formed from generic initial data. These oscillons radiate slowly their energy and for short (as compared to their total lifetime) timescales they can be characterized by a typical frequency,. This frequency increases slowly during the lifetime of the oscillon and when reaches a critical value c ≈ 1.365 there is a rapid decay. By fine-tuning the initial data one can achieve that the oscillon state instead of rapidly decaying evolves into a near time-periodic state, whose frequency is nearly constant in time and 1.365 < < 1.412. The existence of such near-periodic states (referred to as resonant oscillons) has been already reported in Ref., and we also observe an increase of the lifetime of these states without any apparent limit by fine-tuning the parameters of the initial data to more and more significant digits. There are, however, also some discrepancies. We find that clearly distinct near-periodic states for various values of the pulsation frequency 1.365 < < 1.412 exist. According to our results there is little doubt that for any value of in this range a corresponding near-periodic oscillon state exists. Our data clearly shows that the near-periodic states also radiate, although very weakly. The radiation becomes weaker and weaker as → √ 2. On the other hand we have implemented a multi domain spectral method in order to find directly stationary, time periodic solutions of the NLWE and compare them to the long living oscillon states obtained from the time evolution. This makes it possible to attack the problem of finding directly the putative time periodic breather(s). (One could easily generalize our method for the quasi-periodic case.) We find that there is a large family of time periodic solutions which are only weakly localized in space, in that they have a well defined core, and an oscillatory tail decreasing as ∝ 1/r. We single out a special family among them by minimizing the amplitude of their oscillatory tail which definition comes close to minimizing the energy density of the oscillatory tail. These solutions are the closest to a breather and for that reason we call them quasi-breathers. 
They seem to exist for any frequency, 0 < < √ 2, although in this paper we exhibit QBs only with 1.30 < < √ 2. The amplitude of the oscillatory tail of the QBs becomes arbitrarily small as the frequency approaches the continuum threshold defined by the mass of the field, → √ 2. Our numerical evidence speaks clearly against the possible existence of a truly localized, breather-like solution periodic in time, for the frequency range 1.30 ≤ ≤ 1.412 contrary to the claims of Ref.. In view of these conflicting numerical findings it is now highly desirable to try to find an analytical proof or disproof of the existence of a localized non-radiative solution to settle this issue. We do not expect the situation being qualitatively different from the 1 + 1 dimensional case, and although the proofs of Refs. are not applicable for the 3 + 1 dimensional case, we see no reason that their negative conclusion would be altered. More importantly, we believe to have made a step towards understanding the mechanism behind the existence of such long-living oscillonic states without the need to invoke genuine breather solutions, which even if they would exist would be clearly non-generic, while the QBs seem to be generic. The total energy of the quasi-breathers is divergent, due to the lack of sufficient (exponential) spatial localization, hence they are not of direct physical relevance. Nevertheless a careful numerical study shows that the oscillons produced during the time evolution of some suitable Cauchy data are quantitatively very well described by the quasi-breathers. By comparing the Fourier decomposition of an oscillon state at some instant, t characterized by a frequency, (t), obtained during the time evolution with that of the corresponding QB, we have obtained convincing evidence that the localized part of the oscillon corresponds to the core of the QB of frequency (t). What is more, the oscillatory tail of the QB describes very well the standing wave part of the oscillon. Our oscillon scenario is based on this analysis and leads us to propose that any oscillon contains the core and a significant part of the oscillatory tail of the corresponding QB. The time evolution of an oscillon can be approximatively described as an adiabatic evolution through a sequence of QBs with a slowly changing frequency (t). This oscillon scenario is based on the existence and of the genericness of the QBs. This paper is organized as follows. In Sec. II we study the time evolution of localized, Gaussian-type initial data in 4 theory and investigate some important aspects of the oscillon solutions. In Sec. III we present an infinite system of coupled ordinary differential equations (ODE's) obtained by the Fourier-mode decomposition of the NLWE Eq. and discuss some of its properties. Section IV is devoted to the description of the spectral methods used to solve this system. In particular, we carefully deal with the asymptotic behavior of the Fourier-modes. Various convergence tests are exhibited. The quasi-breathers are discussed in Sec. V, the results on our oscillon scenario are discussed in Sec. VI and conclusions are drawn in Sec. VII. II. TIME EVOLUTION A. The nonlinear wave equation of the 4 theory We consider the following 4 theory in 1 + 3 dimensions whose action can be written as where is a real scalar field, ∂ t = ∂/∂t, ∂ i = ∂/∂x i and i = 1, 2, 3. 
In this paper we shall restrict ourselves to spherically symmetric field configurations, when the corresponding NLWE is given by The energy corresponding to the action can be written as where denotes the energy density It is easy to see that the finiteness of the total energy is guaranteed by → ±1 + O(r −3/2 ) as r → ∞. B. Numerical techniques We briefly outline here the main ideas for the implementation of our evolution code to solve numerically the Cauchy problem for Eq.. Assuming → 1 as r → ∞ we introduce the new field, as Then the NLWE Eq. takes the form: The next step is to compactify in the spacelike directions by a suitable coordinate transformation of r. This way we can guarantee that our computational grid, associated with a finite-difference scheme, covers the entire physical spacetime, at least in principle. Specifically, a new radial coordinate, R, is introduced in the following way: where is an arbitrary positive constant. In the new radial coordinate, R, the entire Minkowski spacetime is covered by the coordinate domain 0 ≤ R < 1 while spacelike infinity is represented by the 'hypersurface' R = 1. The R = const 'lines' represent world-lines of 'static observers', i.e. integral curves of the vector field (∂/∂t) a. In the compactified representation the field equation,, reads as where is given by We remark that the spacelike compactification used here is a simplified variant of the conformal transformation used in. There instead of the t = const. hypersurfaces the initial data are specified on hyperboloids. Furthermore Minkowski spacetime is compactified mapping null infinity to finite coordinate values. Since in the present case the scalar field,, is massive, i.e. never reaches null infinity the hyperboloidal compactification is not essential. In order to obtain a system of first order equations we introduce the independent variables t = ∂ t and R = ∂ R. Then equation can be rewritten as which together with the integrability condition ∂ tR = ∂ Rt and the defining equation ∂ t = t form a strongly hyperbolic system of first order differential equations for the three variables, t and R (see e.g. ). The initial value problem for such a first order system is known to be well-posed. Note that the relation ∂ R = R is preserved by the evolution equations, and therefore it corresponds to a constraint equation. In order to solve the initial value problem for equation we discretize the independent variables t and R. A simple uniform grid with steps ∆t and ∆R is introduced. Spatial derivatives are calculated by symmetric fourth order stencils. Time integration is done using the 'method of lines' in a fourth order Runge-Kutta scheme, following the recipes proposed by Gustafsson et al. A dissipative term proportional to the sixth derivative of the field is added in order to stabilize the evolution. Since this dissipative term is also chosen to be proportional to (∆R) 5, it does not reduce the order of the applied numerical method, in other words, its influence is decreased by the increase of the used resolution. The numerical methods and the actual numerical code we use for calculating time evolution in this paper are also based on those developed in. A few non-physical grid points are introduced for both negative radii R < 0 and for the region "beyond infinity" R > 1. Instead of calculating the time evolution of the R < 0 points, the symmetry property of about the origin R = 0 is used to set the function values at each time step. 
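As an illustration of the method-of-lines evolution described above, here is a minimal Python sketch of a spherically symmetric φ⁴ evolution. It is deliberately simplified relative to the authors' code, and the simplifications are assumptions rather than their implementation: no spatial compactification (a large uncompactified grid with a crude fixed outer boundary is used instead), second-order rather than fourth-order stencils, no dissipation term, and a generic Gaussian bubble profile of the kind standard in the oscillon literature; the values of the bubble parameters are illustrative only.

```python
# Minimal method-of-lines sketch of the spherically symmetric phi^4 evolution.
# Simplifications (assumptions, not the authors' scheme): no compactification,
# second-order stencils, no dissipation, fixed vacuum value at a far outer
# boundary so that reflections stay away from the core for the run shown.
import numpy as np

N, r_max = 4000, 200.0
dr = r_max / N
r = (np.arange(N) + 0.5) * dr          # staggered grid avoids r = 0
dt = 0.5 * dr                           # conservative Courant factor

def rhs(phi, pi):
    """Time derivatives of (phi, pi), with pi = d(phi)/dt."""
    # Ghost points: even reflection at the origin, vacuum phi = -1 outside.
    ext = np.concatenate(([phi[0]], phi, [-1.0]))
    lap = (ext[2:] - 2.0 * ext[1:-1] + ext[:-2]) / dr**2 \
        + (ext[2:] - ext[:-2]) / (r * dr)          # (2/r) d(phi)/dr term
    return pi, lap - (phi**3 - phi)                 # F(phi) = phi^3 - phi

def rk4_step(phi, pi):
    k1 = rhs(phi, pi)
    k2 = rhs(phi + 0.5 * dt * k1[0], pi + 0.5 * dt * k1[1])
    k3 = rhs(phi + 0.5 * dt * k2[0], pi + 0.5 * dt * k2[1])
    k4 = rhs(phi + dt * k3[0], pi + dt * k3[1])
    phi_new = phi + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    pi_new = pi + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return phi_new, pi_new

# Gaussian bubble initial data interpolating between phi_c and phi_inf = -1
# (illustrative parameter values, not taken from the paper)
phi_c, r0 = 1.0, 2.7
phi = -1.0 + (phi_c + 1.0) * np.exp(-(r / r0) ** 2)
pi = np.zeros_like(phi)

for step in range(int(200.0 / dt)):     # evolve to t = 200 as a demonstration
    phi, pi = rk4_step(phi, pi)
    if step % 200 == 0:
        print(f"t = {step * dt:7.2f}   phi(center) = {phi[0]: .4f}")
```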
Similarly,, being a massive field, decays exponentially towards infinity, consequently all the field values, t and R are set to zero for R ≥ 1 during the entire evolution. This takes care of the spacelike infinity = 0 in equation. Therefore it is possible to use symmetric stencils exclusively when calculating spatial derivatives. The grid point at the origin R = 0 needs special treatment since the last term as it stands on the right hand side of equation is apparently singular. However, this term, when evaluated in terms of the original (non-singular) field value have zero limit value at R = 0. Although compactifying in spatial direction restricts the coordinate R to a finite domain, grid points in our numerical representation get separated by larger and larger physical distances as we approach R = 1. This far region, where shells of outgoing radiation cannot be represented properly, moves out to higher and higher physical radii as ∆R decreases. Numerical simulations with increasing number of grid points demonstrate that wave packets of outgoing massive fields get absorbed in the transitional region without getting reflected back into the inner domain. In this way our numerical simulations still give a good description of the field behavior precisely in the central region for very long time periods. The simple but physically non-uniform grid together with the help of the dissipation term appears to absorb outgoing radiation in a similar way to the explicit adiabatic dumping term method applied by Gleiser and Sornborger. Furthermore, because of the very low inward light velocity in the asymptotic region R ≈ 1 the field behavior in the central region is correctly given for a long time period even after the appearance of numerical errors at R ≈ 1. Simulating time evolution of oscillons up to their typical maximal lifetime of t = 7000 using spatial resolution of 2 13 points took usually a week on personal computers. However, because of the need of several runs when fine-tuning parameters in the initial data, we mostly used typical resolutions of 2 12 spatial points. The parameter in the coordinate transformation was set to = 0.05 in our simulations in order to concentrate approximately the same number of grid points to the central oscillon region and to the far away region where the massive fields form high frequency expanding shells. A Courant factor of ∆t ∆R = 1 turned out to be appropriate to obtain stable simulations with our choice of. The convergence tests confirmed that our code does provide a fourth order representation of the selected evolution equations. Moreover, we monitored the energy conservation and the preservation of the constraint equation ∂ R = R. Most importantly, we compared the field values which can be deduced by making use of the Green's function and by the adaptation of our particular numerical code to the case of massive linear Klein-Gordon fields. The coincidence between the values in the central region provided by these two independent methods for long time evolutions (t ≈ 10 4 measured in mass units) made it apparent that the phenomena described below should be considered as true physical properties of the investigated non-linear field configurations. C. Oscillons Following Refs. we start with the following Gaussian-type initial data: with c and ∞ being the field values at the center r = 0 and at infinity r = ∞ while r 0 is the characteristic size of the bubble at which the field values interpolates between c and ∞. By fixing ∞ = −1 as in but varying r 0 and c, Eq. 
provides a two-parameter family of smooth and suitably localized initial data. For a large open subset of the possible initial parameters c and r 0, after a short transitional period the field evolves into a long living localized nearly periodic state, named oscillon by Copeland, Gleiser and Mller. Although these configurations live much longer than the dynamical time scale expected from the linearized version of the problem (i.e. light crossing time), their lifetime is clearly not infinite. The energy of these oscillating states is slowly but definitely decreasing in time, and after a certain time period they quickly disintegrate. For the time dependence of the energy in a compact region see Fig. 3 of. We illustrate on Fig. 1 two such oscillon states with rather different lifetimes. The parameter dependence of the lifetime is illustrated on Figs. 6, 7 and 8 of. We note that since the final decaying period is relatively short, furthermore its time dependence is almost the same for each particular choice of initial data, the lifetime plots are quite insensitive of the precise definition of how one measures the lifetime of a given configuration. In our calculations the lifetime was defined by observing when the value of the oscillating field at the center r = 0 falls (and remains) below a certain prescribed value (e.g. = −0.95). Already Copeland, Gleiser and Mller have noticed that a delicate fine structure appears in the lifetime plot. This peculiar dependence on the precise value of the initial parameter r 0 has motivated the detailed investigation of Honda and Choptuik (see Figs. 4 and 5 of Ref. ). The calculations have shown that fixing the value c the lifetime increases without any apparent upper limit when the parameter r 0 approaches some element of a large set of discrete resonance values r * i. For example, in the case c = 1 Honda and Choptuik have found 125 peaks on the lifetime plot between 2 < r 0 < 5. These fine-tuned oscillons at a later stage during their evolution develop into a state which is very close to a periodic (non-radiating) one. On Fig. 2 we show the field value at the center for typical oscillons close to a chosen peak. The seemingly stable almost periodic stages of the fine-tuned oscillon configurations will be referred as near-periodic states. The closer the initial parameters are to the critical value the longer the lifetime of the with two different sets of initial parameters. It is important to note that the field oscillates with a non-constant period 4.6 < T < 5. For generic oscillons the period T shows a decreasing tendency towards T ≈ 4.6. near-periodic state becomes. Although the period T of the underlying high frequency oscillations remains constant to a very good approximation during a chosen near-periodic state, different near-periodic states, corresponding to various peaks on the lifetime curve, oscillate with clearly distinct periods in the range 4.446 < T < 4.556. Generic oscillon states and even the not near-periodic initial part of fine-tuned oscillons pulsate with longer period, in the range 4.6 < T < 5. Generic oscillons with initial data between two neighboring resonance values r * i < r 0 < r * i+1 may have already quite a long lifetime without developing into a near-periodic state. Plotting the field value at the center r = 0 one can see a low frequency modulation of the amplitude (called shape mode by Honda and Choptuik) of the high frequency basic oscillating mode (see upper plot of Fig. 2). 
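The fine-tuning of r0 towards a resonance described above is, in essence, a bisection search on a two-way classification of the end state. The sketch below shows one way to organise it; evolve_and_classify() is a placeholder (hypothetical, not the authors' routine) that would wrap a time evolution such as the one sketched earlier and report whether the run ends supercritically or subcritically.

```python
# Sketch of fine-tuning the initial bubble radius r0 towards a resonance r0*,
# organised as a bisection on a binary "supercritical vs subcritical"
# classifier. evolve_and_classify() and its criterion are placeholders: in
# practice one evolves the Gaussian data and inspects how the state decays.

def evolve_and_classify(r0: float) -> bool:
    """Return True if the evolution with bubble radius r0 ends supercritically
    (energy collapses towards the origin before dispersing), False otherwise.
    Placeholder -- wire this up to a time-evolution code."""
    raise NotImplementedError

def fine_tune(r0_lo: float, r0_hi: float, tol: float = 1e-12) -> float:
    """Bisect [r0_lo, r0_hi], assumed to bracket exactly one resonance r0*."""
    lo_super = evolve_and_classify(r0_lo)
    while (r0_hi - r0_lo) > tol:
        mid = 0.5 * (r0_lo + r0_hi)
        if mid in (r0_lo, r0_hi):   # double precision exhausted; cf. the
            break                   # extended-precision discussion in the text
        if evolve_and_classify(mid) == lo_super:
            r0_lo = mid             # mid lies on the same side as r0_lo
        else:
            r0_hi = mid
    return 0.5 * (r0_lo + r0_hi)

# Example bracket around the first peak for phi_c = 1 (values from the text):
# r0_star = fine_tune(2.28, 2.29)   # converges towards ~2.2813706
```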
Oscillons between the next two resonance values r * i+1 < r 0 < r * i+2 are distinguished from those in the previous interval by the fact that they possess exactly one more or one less peak associated with these low frequency oscillations on the envelope of the field value (t, 0) before they disperse. Longer living supercritical states arise when one closely approaches a critical value from one of the possible two directions. Then the last low frequency modulation peak gets shifted out to a later and later time as one goes towards the resonance, making room for a near-periodic state between the last two modulations. Close to the resonance, but on the other side of it, the same long living near-periodic state appears, now called subcritical, with the only difference that at the end the field disperses without forming a last low frequency modulation peak. D. Results It was shown in Ref. that close to a resonance the oscillon lifetime obeys a scaling law where the scaling exponent has specific values for each resonance, while the constant is also different for subcritical and supercritical states. Although the lifetime appears to increase without any limit by fine-tuning the initial parameter, in practice it is very difficult to achieve very long lifetimes because one cannot represent numbers very close to the resonance value due to the limitation implied by the applied machine precision. Achieving longer lifetime for the near-periodic state is possible by using high precision arithmetics, although then there is a considerable increase of the computation time which limits the applicability of this approach. Using "long double" variables on an SGI machine, or applying the software "doubledouble" on a personal computer, we could calculate with twice as many significant digits than standard double precision computer variables can represent (i.e. 32 instead of 16). This way we could improve the fine-tuning and double the observed lifetime of near-periodic states. On Fig. 3 we plot the scaling law of the lifetime for oscillons near three different resonances. Instead of choosing three resonances with c = 1, we calculated states close to the first peak on the lifetime curves corresponding to three different values of c. The reason for this was that the normal, slowly but evidently decaying, oscillon state is the shortest near the first peak (i.e. no modulation on the contour curve), thereby we could concentrate computational resources on the near-periodic state. In order to clarify what we mean by fine-tuning the parameter r 0 to 16 (or 32) digits and what is the actual error of the quantities, we present a table on the precise location of the first peak for c = 1 when performing fine-tuning with five different numerical resolutions. For each spatial resolution we could achieve approximately the same lifetime of approximately = 1100 when the parameter r 0 approximated a resolution dependent value to 16 digits. The convergence of the data indicates that the actual position of the peak is at r * 0 = 2.281370594 with an error of 10 −9. of the first peak for c = 1 using various resolutions. The number of spatial grid points used for a specific fine-tuning is ni = 2 i. The error is estimated as i = |r * (i) 0 − r * 0 |. The convergence factor is defined as Our numerical simulations clearly show that the different resonance peaks on the lifetime plot correspond to different near-periodic states. In fact, a one-parameter family of distinct near-periodic states appears to exist. 
This statement is in marked contrast with the claim of Honda and Choptuik in, where in Sec. III-A they claim that the oscillation origin is shown for the same three states during the time period where the oscillation is almost periodic (i.e. during the near-periodic states). The frequency of the oscillations along with its time dependence at some radius r =r has been determined from our numerical results by minimizing the following integral for the oscillation period using some suitably chosen integration interval determined by t 0. This procedure, applying polynomial interpolation, yields significantly more precise frequency values than the direct use of the Fast Fourier Transform method, especially when the time step by which our evolution code outputs data is not extremely short. Another advantage of our procedure is it being much less sensitive to the particular choice of the sampling interval (in this case 2t 0 ) than the FFT algorithm. The relatively small value of t 0 (for example t 0 = 10) makes it possible to monitor relatively sharp changes in the time dependence of the frequency. All three near-periodic states show a low frequency change of with a decaying amplitude, with the lowest frequency modulation belonging to the state with the highest. The amplitudes on Fig. 4 would show a similarly decaying slight modulation if we would plot them individually in time intervals where the oscillations are almost time-periodic. These observations suggest that near-periodic states of frequency also contain a superposition of states with frequencies ± ∆ where ∆ / ≪ 1. In the next sections we shall see that at least the core part of an oscillon of frequency can be extremely well described by a weakly localized quasi-breather of the same frequency. We emphasize, that the three chosen near-periodic states are typical, i.e. the near-periodic states of all fine-tuned oscillons (including all peaks belonging to c = 1) are qualitatively and quantitatively very similar to them. Actually, the frequency of all calculated near-periodic states fell into the interval spanned by the frequency of the two chosen first peaks belonging to c = −0.4 and c = −0.7. On the third plot of Fig. 5 we can also see a slow but steady decrease of the frequency. For the other two states, with frequencies closer to √ 2 no such behavior is apparent in the time interval we could simulate. This slow decrease is also manifested in the energy of the configuration. On Fig. 6 we plot the time dependence of the energy contained inside spheres with three subsequent radii. The decrease of the energy indicates that the near-periodic state slowly loses its energy, consequently it cannot be exactly time-periodic. Looking at the energy in the sphere at r = 40.1, we can see that the decrease of energy from t = 1000 to t = 2000 is ∆E = 0.0041. Taking into account that the total energy is E = 21.60, we can give a naive estimate on the lifetime by calculating when the energy would decrease into its half value at this rate, getting e = 2.6 10 6. We expect that all near-periodic states radiate, but this radiation is becoming considerably weaker as the frequency get closer to √ 2. This expectation is confirmed by the direct analysis of periodic solutions in the next sections. On Fig. 6 the curve corresponding to the largest radius (r = 107.1) indicates how slowly the energy moves outwards because of the massive character of the field. 
The different endings of the curves belonging to sub and supercritical states illustrate the two kinds of decay mechanism of near-periodic states. In the first, subcritical mechanism, the field quickly disperses, while energy moves essentially outwards only. In the second supercritical way, the energy of the field first collapses to a smaller region near the origin and then disperses to infinity. This resembles to the behavior of some unstable spherical shell, although no shell structure is visible on the density plots, being the highest energy density always at the center. The instability of the near-periodic states with two distinct kinds of decay mechanisms gives a qualitative explanation on why it is possible to reach long lifetimes by fine-tuning the initial parameters. E. Fourier decomposition of the evolution results Since during the time evolution the field becomes approximately time-periodic for any longer living oscillon state, it is natural to look at the Fourier decomposition of the results provided by our evolution code. Since the Fast Fourier Transform algorithm is very sensitive to the size of the time step with which our evolution code writes out data, we use an alternative direct method which turns out to be significantly more precise in determining the basis frequency and the amplitude of the higher modes. As a first step we determine the oscillation period by locating two subsequent maxima, at instants t 1 and t 2, of the field at the origin r = 0. Since t 1 and t 2 fall in general between two consequent time slices written out by our evolution code, we approximate their position by fitting second order polynomials on the data. It is apparent from our results that for near-periodic states these maxima correspond to two consecutive moments of time symmetry not only at the center but also in a large region around the center to a very good approximation. After determining the oscillation frequency = 2/(t 2 − t 1 ), we obtain the n-th Fourier mode at a radius r by calculating the following integral using the output (t, r) of the evolution code We note that care must be taken for the first and last incomplete time steps when evaluating the integral. In the case of exact periodicity and time symmetry the imaginary part of this integral would be zero for all n. In order to check the deviation from time symmetry for various radii at the moments t 1 and t 2 we also evaluate the imaginary part of the integral for each n and verify whether it is small compared to the real part. As an example, on Fig. 7 we give the Fourier decomposition of the c = −0.7, r 0 = 9.38283 near-periodic state presented on Fig. 5. The integral was calculated between two subsequent maximums of the field at the center at t 1 = 5420.22 and t 2 = 5424.67, yielding an oscillation frequency of = 1.41203. For the decomposition of a near-periodic state with a lower frequency see Fig. 9 of, where the frequency is = 1.38 (page 107. of ). From the two plots one can already see the general tendencies. As the basis frequency increases towards the upper limit √ 2 the oscillon becomes wider, although with a decreasing amplitude. The influence of the higher modes (the relative amplitude ratio) is also getting smaller when the frequency grows. III. FOURIER DECOMPOSITION OF THE QUASI-BREATHERS This section is concerned with a direct search for time-periodic solutions of the NLWE Eq. by means of a Fourier-mode decomposition of the scalar field. 
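Before turning to that direct construction, here is a sketch of the mode extraction used in Sec. II E above, which is also how the evolution output is later compared with the quasi-breathers. The integral itself is not reproduced in the text as extracted here, so the normalisation below (standard Fourier coefficients of cos(nω(t − t1)) over one period) is an assumption, with the sine quadrature playing the role of the "imaginary part" consistency check.

```python
# Sketch of extracting Fourier modes phi_n(r) from evolution output sampled
# over one oscillation period [t1, t2]. Assumed normalisation: standard
# Fourier coefficients; residues ~ 0 signal a time-symmetric, nearly periodic
# state, as described in the text.
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fourier_modes(t, phi_at_r, n_max):
    """t: sample times covering exactly one period [t1, t2];
    phi_at_r: field values at a fixed radius at those times."""
    t1, t2 = t[0], t[-1]
    omega = 2.0 * np.pi / (t2 - t1)
    modes, residues = [], []
    for n in range(n_max + 1):
        norm = (1.0 if n == 0 else 2.0) / (t2 - t1)
        modes.append(norm * trapezoid(phi_at_r * np.cos(n * omega * (t - t1)), t))
        residues.append(norm * trapezoid(phi_at_r * np.sin(n * omega * (t - t1)), t))
    return np.array(modes), np.array(residues)

# Synthetic test with known mode content and a period T = 4.45
T = 4.45
t = np.linspace(0.0, T, 400)
signal = -1.0 + 0.5 * np.cos(2 * np.pi * t / T) + 0.05 * np.cos(4 * np.pi * t / T)
modes, residues = fourier_modes(t, signal, n_max=3)
print(np.round(modes, 3))     # approx. [-1.0, 0.5, 0.05, 0.0]
print(np.round(residues, 3))  # approx. [0.0, 0.0, 0.0, 0.0]
```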
A simple parameter counting already indicates that an infinite number of parameters would be necessary in order to obtain an exponentially localized breather, so even if some do exist, it seems to be difficult to produce them directly. Therefore we look for weakly localized solutions of Eq., and by analyzing their behavior as a function of the free parameters we expect to obtain a clear indication about the existence of truly localized breathers. A. Equations and their asymptotic behaviors Since we seek periodic solutions of Eqs. we make a Fourier mode decomposition of the form φ(t, r) = Σ_{n=0..∞} φ_n(r) cos(nωt). We have assumed here that by a suitable choice of the time origin the solutions are time-symmetric. Inserting this form in Eq. gives rise to a system of coupled elliptic equations of the schematic form Δφ_n − λ_n² φ_n = (nonlinear source terms), where λ_n² = 2 − n²ω². In fact we have already regrouped all the linear (first order) terms of the corresponding Fourier mode φ_n on the left hand side of Eqs.. We will make a systematic search for localized solutions of the system. We shall not assume any a priori connection with the oscillons presented in the previous section for the possible frequencies ω. As we shall see below, the eventual existence of a localized solution can only be possible if the system exhibits some very special properties. In order to look for regular solutions of Eqs. we analyze the various asymptotic behaviors of the homogeneous solutions of the corresponding operator on the left hand side of Eqs.. If λ_n² > 0 then there is one regular homogeneous solution, which falls off exponentially: exp(−λ_n r)/r. This function clearly decays sufficiently fast at infinity so that the energy would always be convergent. λ_n² = 0 is a degenerate case when the operator is the usual Laplacian. The non-singular (decaying) homogeneous solution is simply 1/r which, however, does not decay sufficiently fast for the energy to be bounded. If λ_n² < 0, both homogeneous solutions tend to zero at infinity, cos(|λ_n| r)/r and sin(|λ_n| r)/r. As in the previous case, these functions do not decrease fast enough to make the energy convergent. Apart from that, both functions are regular at infinity and this implies that there is no uniqueness of the solution, which is only defined up to a given phase (for more details, see Sec. IV C). Observe that no matter what the value of ω is, there always exists an n₀ such that all modes φ_n, n > n₀, will be of the third type (i.e. with λ_n² < 0). It is now easy to understand why, in the generic case, one cannot expect to find exponentially localized (with finite energy) solutions of Eqs.. To ensure localization one has to suppress all slowly decaying oscillatory modes, which would normally require an infinite set of freely tunable parameters. The problem of finding a localized solution can be seen as a matching problem between the set of modes regular in a neighborhood of the origin, {φ⁰_n, n = 0, 1, 2, ...}, and the set of modes with fast (exponential) decay for r → ∞, {φ^∞_n, n = 0, 1, 2, ...}. In this case each mode φ⁰_n regular at the origin has a single freely tunable parameter, whereas none of the modes with fast fall-off, φ^∞_n for n > n₀, have any free parameters, and therefore a sort of miracle is needed for the two sets to be matched. This counting implies that while one can expect to find time-periodic solutions of Eqs. (even a whole family), these solutions generically have oscillatory tails for large values of r. This clearly reflects the argument "anything that can radiate does radiate" transposed to the stationary case.
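A quick numerical illustration of the counting argument above (using the λ_n notation as reconstructed here, not necessarily the authors' symbol): for frequencies in the window studied later in the paper, only the n = 0 and n = 1 modes have λ_n² > 0 and hence exponentially decaying homogeneous solutions; all higher modes carry oscillatory 1/r tails.

```python
# Which Fourier modes decay exponentially and which are oscillatory, from the
# relation lambda_n^2 = 2 - n^2 * omega^2 quoted above. For omega in the
# window discussed in the paper (roughly 1.3 < omega < sqrt(2)), only n = 0
# and n = 1 are exponentially decaying.
def classify_modes(omega, n_max=6):
    for n in range(n_max + 1):
        lam2 = 2.0 - (n * omega) ** 2
        kind = "exponentially decaying" if lam2 > 0 else "oscillatory tail ~ 1/r"
        print(f"n = {n}:  lambda_n^2 = {lam2:+.3f}  ->  {kind}")

classify_modes(omega=1.37)
```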
On the other hand, it is not excluded that a localized solution might exist for very particular values of, for which the oscillatory tails are absent. Using such techniques, solving differential equations can be reduced to, in each domain, inverting a matrix on the coefficients space. Then, a linear combination of the particular solutions with the homogeneous ones is done, in order to impose regularity at the origin, appropriate boundary conditions and continuity of the overall solution. We refer the reader to for more details on the algorithm, in the case of a Poisson equation. For this work, we have extended the operators presented in and included the Helmholtz operators appearing in Eqs., i.e. ∆ − 2 n with both signs of 2 n. This is rather straightforward because, as for the Laplacian, they produce two homogeneous solutions. Only the case of infinity has to be treated differently as we will see in Sec. IV C. B. Description of the sources As we have seen in Sec. III A, for all value of, there exists a value of n after which all the modes are dominated by homogeneous solutions of the type sin (| n | r + n ) /r. Such functions are not easily dealt with by our solver. Indeed, in order to treat spatial infinity, LORENE usually uses a simple compactification by means of the variable u = 1/r in the exterior domain. In the past, this has enabled us to impose exact boundary conditions at infinity. When waves are present, it is well known that such a compactification is not practical. Indeed, no matter what scheme is used, there is always a point after which the distance between computational points (grid or collocation) is greater than the characteristic length of change of the wave, thus causing the scheme to fail. This issue is dealt with by imposing boundary conditions at a finite radius R lim. for the slowly decaying oscillatory modes. The reader should refer to where such methods have been applied to gravitational waves. For the exponentially decaying modes in Eqs. we decide to keep in the right hand side (in the sources) only those terms that are dominated by the exponentially decaying homogeneous solutions, i.e., only those modes for which 2 n > 0. As it will be seen later, the range of interesting pulsations is < √ 2. Therefore the only two exponentially decaying modes are 0 and 1. In the Eqs. for 0 and 1 for r > R lim., we set all the higher modes (n ≥ 2) to zero (including terms of the type 3 2 1 ). So, we effectively solve for large values of the radius the following equations: This method yields solutions for 0 and 1 which are correct for "intermediately large" values of r > R lim. where the oscillatory and slowly decaying terms induced by the nonlinearities do not dominate. It is clear that for sufficiently large values of r 0 and 1 do not decay exponentially, since their behavior will be dominated by the slowly decaying oscillatory nonlinear source terms. We have carefully checked that changing the value of R lim. does not influence the oscillatory modes, therefore we can conclude that the back reaction of the 0 and 1 is negligible on n, n ≥ 2 for r ≫ R lim.. C. The operators When 2 n > 0, the Helmholtz operator ∆ − 2 n admits two homogeneous solutions of which only one tends to zero at infinity. This situation is exactly the same as the one for the standard Laplace operator and all the techniques presented in can be used. Once again, as we will be working for < √ 2, this happens only for 0 and 1 and the associated sources have been given in the previous section IV B. 
The situation is quite different when dealing with the Helmholtz operator with ν_n² < 0. Apart from the compactification problem mentioned previously, we have to note that there are now two homogeneous solutions that are regular at infinity, and there is no reason to prefer one to the other. This means that there is no unique solution to our problem: one can obtain a solution by performing the matching with any homogeneous solution of the type sin(|ν_n| r + α_n)/r, where the phase α_n can take any value in [0, 2π[. To summarize, for all modes such that ν_n² < 0 we have to match, at a finite radius R_lim, the solution with a homogeneous one of the type A_n sin(|ν_n| r + α_n)/r.

Clearly we need additional conditions to fix the values of the phases α_n. In order to do so, let us recall that we are mainly interested in finite-energy solutions. Such solutions should not contain any oscillatory behavior in 1/r at infinity, i.e. all the coefficients A_n of such homogeneous solutions should be zero. One can hope to approach this by searching, in the parameter space of the phases (α_2, α_3, …), for the values that minimize the absolute value of the coefficient of the first oscillatory homogeneous solution that appears, |A_2|. Since the resulting solution is "quite close" to a localized breather, we refer to it as a quasi-breather. Given the nonlinearity of the problem, the location of the minimum cannot be found analytically and one has to rely on a numerical search. We make use of the multi-dimensional minimizer provided by the GSL numerical library, whose algorithm is based on the simplex method of Nelder and Mead. We start by setting all the phases to π/2 and iterate the procedure until the value of the minimum converges to within a given threshold. Let us mention that the phases are searched for in [0, π[, the rest of the interval being covered by changing the sign of the amplitudes. The simplex solver converges rapidly, the function |A_2| being very smooth, with no local extrema.

So the final situation is the following: for φ_0 and φ_1 the space is compactified and the sources in the exterior region are given by the truncated equations of Sec. IV B. For φ_n, n ≥ 2, we solve only for r < R_lim and match the solution with a homogeneous solution A_n sin(|ν_n| r + α_n)/r, the phases α_n being determined by the simplex solver by minimizing |A_2|.
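The phase search can be written down schematically as follows. The paper relies on the GSL simplex minimizer; the sketch below shows the equivalent Nelder-Mead call in SciPy, and abs_A2 is a purely hypothetical placeholder standing for a complete solve of the mode system at fixed matching phases (which in the actual work is done with LORENE).

import numpy as np
from scipy.optimize import minimize

def abs_A2(phases, omega=1.37):
    # Hypothetical placeholder: solve the full mode system with the matching
    # phases alpha_2, alpha_3, ... fixed to `phases` and return the tail
    # amplitude |A_2| (done with LORENE in the actual work).
    raise NotImplementedError

n_phases = 4                              # e.g. alpha_2 ... alpha_5 for a 6-mode truncation
alpha0 = np.full(n_phases, np.pi / 2.0)   # all phases initialized to pi/2, as in the text

# With a real abs_A2 in place, the search would be launched as:
# result = minimize(abs_A2, alpha0, method="Nelder-Mead",
#                   options={"xatol": 1e-5, "fatol": 1e-5})
# best_phases = np.mod(result.x, np.pi)   # phases kept in [0, pi); signs absorbed in A_n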
D. Avoiding the trivial solution

The system is solved by iteration, but a problem arises from the fact that the trivial solution φ_n = 0 (for n ≥ 1) solves the equations, φ_0 then being a constant whose value is either 0, 1 or 2. No matter what the initial guess for the different modes is, the code always converges to a trivial static solution of this type, so one needs to find a way to prevent this from happening. Given that the system is coupled, it is sufficient to impose φ_1(r = 0) ≠ 0 in order to avoid the trivial solution. To do so, after each step of the iteration we rescale φ_1 everywhere by a factor λ, in order to impose that φ_1(r = 0) keeps a prescribed value. In the general case, after convergence of the code this scaling parameter is different from 1, meaning that the value of φ_1 we found is no longer a solution of the system. However, for some values of ω (lying in an interval, see Sec. V), it is possible to find exactly one value of φ_1(r = 0) such that the scaling factor is one. Thus, at least for some values of ω, it is possible to find the appropriate value of φ_1(r = 0) such that the obtained modes are non-trivial solutions of Eqs..

In practice, after convergence to a threshold level of typically 10⁻³, we switch on the convergence toward the true value of φ_1(r = 0). With a technique already used, for example, to obtain neutron stars of prescribed mass (see Sec. IV D 3 of Ref.), at each step we modify the value of φ_1(r = 0) to which one wants to converge, in order to bring λ closer to one. Doing so, the value of φ_1 at the center converges to the only value for which the scaling parameter is one, providing us with the real, non-trivial solution of the system.
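The logic of this procedure can be summarized by the following skeleton (illustrative only: initial_guess, spectral_step and converged are hypothetical placeholders for the LORENE machinery, and the final relaxation of the central value is a simple assumed update, since the exact formula used by the authors is not reproduced in the text above).

def initial_guess(omega):          raise NotImplementedError   # hypothetical
def spectral_step(modes, omega):   raise NotImplementedError   # hypothetical
def converged(modes, tol):         raise NotImplementedError   # hypothetical

def solve_with_fixed_center(phi1_center, omega, tol=1e-3):
    # Iterate the mode system, rescaling phi_1 after every step so that
    # phi_1(r = 0) keeps the prescribed value; return the modes and the last factor.
    modes = initial_guess(omega)
    lam = 0.0
    while not converged(modes, tol):
        modes = spectral_step(modes, omega)
        lam = phi1_center / modes["phi1"][0]     # scaling factor lambda
        modes["phi1"] = lam * modes["phi1"]      # enforce phi_1(0) = phi1_center
    return modes, lam

def find_nontrivial(omega, phi1_center=0.5, tol=1e-6):
    # Adjust phi_1(0) until the scaling factor equals one, i.e. until the
    # rescaled iterate is a genuine, non-trivial solution of the system.
    while True:
        modes, lam = solve_with_fixed_center(phi1_center, omega)
        if abs(lam - 1.0) < tol:
            return modes
        phi1_center *= lam ** 0.5                # assumed relaxation toward lambda = 1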
E. Influence of the phases

In order to verify that the simplex solver converges to the proper minimum of |A_2|, we show on Fig. 8 how various quantities vary when one changes the precision with which the extremum is located. Fig. 8 presents the phase of the second mode, α_2, at which the minimum is found, the values of the first modes at the origin, φ_n(r = 0), and the value of the minimum of |A_2|. These three quantities are shown as functions of ω, for three different values of the precision required of the simplex solver: 10⁻³, 10⁻⁴ and 10⁻⁵. From the values of the phase α_2, it appears that a precision of 10⁻³ or 10⁻⁴ is not good enough, the corresponding curves being quite noisy. The curve obtained with a precision of 10⁻⁵ is smoother; we will assume that this level of precision is sufficient for the purpose of this paper and adopt it for all the rest of this work. The situation is even better for the values of the modes at the origin and for the actual value of the minimum: as can be seen on Fig. 8, these quantities show almost no dependence on the level of precision. This is simply related to the fact that the values of the fields depend very weakly on the phases of the homogeneous solutions in the external region. This is true for the phase of the second mode α_2, and even more so for the higher-order phases α_n, n > 2. The very moderate dependence of A_2 on the phases is illustrated by Fig. 9, where the value of the amplitude is shown as a function of the phase α_2, for ω = 1.37. All the other phases α_n, n > 2, are fixed to a common value, each choice corresponding to one curve on Fig. 9. It is clear that the influence of the phases of the modes n > 2 is very weak. The extrema of the three curves of Fig. 9 are very close to the real extremum found by the simplex solver. We are therefore quite confident that we can find the minimum of |A_2| with good accuracy, even if the associated values of the phases are slightly less accurate.

F. Convergence tests

In order to further check the validity and accuracy of our code with respect to the computational parameters, we present in this section various additional convergence tests. The idea is to take a reference set of computational parameters and to change one of them at a time, in order to check that the obtained results do not change much. The first three radial domains are chosen with fixed boundaries; after r = 4, we keep the size of each domain constant and equal to 4. We found this to be necessary to ensure that every domain contains enough collocation points to resolve the oscillatory homogeneous solutions. The various computational parameters are the following: n_z is the number of radial domains, which relates directly to the value of R_lim given the domain setting mentioned above; N_r is the number of coefficients in each domain; finally, we also check the convergence of the results with respect to the finite number of modes kept in the system.

Our standard setting consists of n_z = 14 (hence R_lim = 44), N_r = 33, and the use of 6 modes. We then change one parameter at a time to verify that the obtained results are indeed meaningful. The convergence is presented by showing the same three quantities as in Sec. IV E: the phase α_2, the values of the first modes at the origin, and the amplitude |A_2|. The three plots of Fig. 10 show the dependence of the results on the number of coefficients in every domain. Since the curves of Fig. 10 change only slightly as N_r is varied, this is a strong sign that N_r = 33 is amply sufficient to obtain accurate results. This is also supported by the fact that, at the end of the iterative scheme, the relative difference between the right- and left-hand sides of Eqs. is smaller than 10⁻⁹. A strong test is provided by the plots of Fig. 11: they show that the results are independent of the value of the outer computational radius R_lim, thus validating the matching procedure with the oscillatory homogeneous solutions. Finally, Fig. 12 illustrates the fact that the results do not change substantially when the number of modes is increased.

The last test of our code has been a comparison of the right- and left-hand sides of Eq., using the Fourier expansion. We have computed the relative difference between the two sides after averaging over one period. The results for various numbers of modes are depicted on Fig. 13 as functions of ω. As expected, the error decreases as the number of modes increases; this is not surprising, given that Eq. could be satisfied exactly only with an infinite number of modes. The error also increases when ω decreases. This can be understood by recalling that the higher modes are more important when ω is small (see the behavior of φ_n(r = 0) as a function of ω on Figs. 10, 11 and 12), so that the effect of the missing modes is more pronounced for smaller ω. Finally, we would like to emphasize that the computational parameters quoted in this section are sufficient to compute solutions in the regime of "moderate" frequencies (i.e. frequencies around 1.37−1.38). If one wishes to go to much lower frequencies, one needs to include more modes, since the higher-order modes become more important, as indicated by Fig. 13. On the other hand, as will be seen in the next section, the matching point R_lim must be increased when one approaches the critical value √2.
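Regarding the last test above (Fig. 13), the qualitative statement that the truncated system can only be satisfied up to an error that shrinks as modes are added is just the usual behavior of truncated cosine expansions. The toy example below (a generic smooth periodic signal, not the oscillon field, and entirely independent of the actual equations) illustrates this spectral convergence.

import numpy as np

omega = 1.38
T = 2.0 * np.pi / omega
N = 4096
t = np.arange(N) * (T / N)                       # one full period, uniform sampling
u = 1.0 / (1.2 + np.cos(omega * t))              # even, strongly anharmonic test signal

def cosine_coefficient(u, t, omega, n):
    # Discrete projection of u(t) onto cos(n * omega * t) over one period.
    w = np.cos(n * omega * t)
    return np.dot(u, w) / np.dot(w, w)

for n_modes in (2, 4, 6, 8):
    coeffs = [cosine_coefficient(u, t, omega, n) for n in range(n_modes)]
    u_trunc = sum(c * np.cos(n * omega * t) for n, c in enumerate(coeffs))
    err = np.linalg.norm(u - u_trunc) / np.linalg.norm(u)
    print(f"{n_modes} modes kept: relative error = {err:.1e}")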
V. RESULTS

As illustrated on Figs. 10, 11 and 12, we do find time-periodic quasi-breather solutions of Eqs. for any value of the pulsation frequency 1.32 ≤ ω ≤ 1.41, and there is little doubt that such solutions exist for all frequencies in the range ]0, √2[. It is also clear from the very same figures that for ω → √2 the family of solutions we consider converges pointwise to the trivial solution. We do not expect such quasi-breather solutions of the system to exist for ω > √2. The critical value √2 is expected to be related to the change of nature of the Helmholtz operator for φ_1, √2 being the value of ω at which φ_1 ceases to decay exponentially. In the 1+1 dimensional case, Coron has proved that the allowed frequencies are indeed constrained by ω < √2. We have not attempted to prove the analogous statement for our case, as it would certainly require some sophisticated mathematical tools.

The quasi-breather solutions obtained this way are not well localized in space because of their slowly decaying, ∝ 1/r, oscillatory tail; consequently, none of these solutions has finite energy. In fact, we have selected a special class of time-periodic solutions by minimizing the amplitude of their oscillatory tail. This is, in some sense, the closest one can get to a breather, and we have called these configurations "quasi-breathers". There is no special value of ω for which the amplitude of the oscillatory tail would show any tendency to become very small or to go to zero. This is illustrated on Fig. 14, where the logarithms of the tail coefficients are shown for the modes φ_2, φ_3, φ_4 and φ_5, as functions of ω. The only value of ω for which the curves go to zero is the critical one, for which the solution tends to the trivial one. The many convergence tests of Sec. IV F show that we can accurately compute the values of these coefficients and that they are not numerical artifacts.

To illustrate the behavior of the quasi-breathers, we show on Figs. 15 and 16 the energy density and the same quantity multiplied by the volume element r², as functions of the radius, for values of ω going from 1.36 to 1.40. We can clearly see two qualitatively different behaviors: i) the solutions have a well-defined "core", where the behavior is dominated by the exponential decay of the fields, so that the density goes to zero; this core gets larger when ω → √2. ii) However, inevitably, at some point the oscillatory tails start to dominate and the density reaches a plateau, ultimately causing the total (integrated) energy to be infinite. If, for the highest values of ω, the plateau is not seen, this comes solely from the fact that the value of R_lim used on Figs. 15 and 16 is not large enough. In spite of the numerical smallness of the amplitude of this plateau, we are quite confident that its value is reasonably accurate and that it really corresponds to a physical effect: indeed, its value is very stable when the various computational parameters are varied. As ω increases, the value of the plateau diminishes. This is related to the fact that the coefficients of the homogeneous solutions are decreasing functions of ω, and to the quadratic dependence of the energy density on the field: the value of the plateau is roughly the square of the largest coefficient (the one for φ_2). As already stated, we can also confirm that the transition radius between the core and the plateau increases as one approaches ω = √2. To be more quantitative, we define the transition radius R_trans as the first value of r at which the oscillatory behavior starts to dominate. R_trans as a function of ω is shown on the left panel of Fig. 17. The precise location of the transition radius cannot be computed for ω > 1.39, given our nominal choice R_lim = 44; once again, this illustrates the fact that when going to higher values of ω one needs to increase the boundary radius R_lim. On the right panel of Fig. 17 we show the total energy inside the transition radius, i.e. the total energy of the core of the configuration. Contrary to the transition radius, this is not a monotonic function of ω, and there is a configuration of minimum energy. This non-trivial behavior is due to two competing effects when ω is increased: first, the increase of the transition radius, and second, the decrease of the magnitude of the modes. We also note that this minimum is very close to the value of the frequency which Honda and Choptuik claim to be that of their conjectured breather solution (they quoted ω ≈ 1.366), but this may very well be just a coincidence. It seems likely that the energy contained in the core diverges when ω → √2.
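Since the exact formula defining R_trans did not survive above, the following sketch uses one plausible operational definition, namely the first radius at which the slowly decaying tail contribution overtakes the exponentially decaying core contribution to the energy density; the radial profile is synthetic and purely illustrative, with numbers loosely inspired by the ω = 1.38 case.

import numpy as np

r = np.linspace(0.5, 80.0, 8000)
nu1 = np.sqrt(2.0 - 1.38**2)                     # decay rate of phi_1 at omega = 1.38
core = (np.exp(-nu1 * r) / r) ** 2               # schematic core energy density
A2 = 1e-6                                        # schematic tail coefficient |A_2|
tail_envelope = (A2 / r) ** 2                    # envelope of the oscillatory tail term

def transition_radius(r, core_density, tail_envelope):
    # First radius at which the tail envelope overtakes the core density
    # (one plausible reading of the definition of R_trans).
    above = tail_envelope > core_density
    return r[np.argmax(above)] if above.any() else np.inf

print(transition_radius(r, core, tail_envelope))  # ~ 45 for these made-up numbers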
Finally, on Fig. 18 we show the first modes near the origin (left panels) and in the region of the transition radius (right panels). Three different values of ω are shown, corresponding to a low value (1.32), the value corresponding to the minimum of energy (1.365, cf. Fig. 17) and a rather high frequency (1.39).

A. Fourier decomposition of evolution results

In this section we present our oscillon scenario, based on the existence of the quasi-breathers. In particular, we present convincing evidence that an oscillon state of main frequency ω is quantitatively described by a quasi-breather of the same frequency, whose tail is cut off at some value of r. In order to do this, we perform a Fourier decomposition of various long-lived oscillon solutions found with the evolution code (see Subsection II E for the actual method). This gives us the fundamental pulsation frequency ω and the various modes. These modes are then compared to the ones found by directly solving the mode system for this particular value of ω. The same kind of comparison is done in Ref. to support the existence of a periodic solution of frequency ω ≈ 1.366. An important difference is that in Ref. the comparison is performed only for higher-frequency near-periodic states, in a range of radii which is comparable to the core part of the quasi-breather, so that its oscillatory tail is not yet apparent. It is apparent from the results provided by our evolution code that the solutions corresponding to the time-evolution data are very close to being time-symmetric, not only at the center but also in the intermediately far region, justifying the choice of only cosine terms in the mode decomposition when constructing QBs.

B. Near-periodic states

First we perform the Fourier decomposition of various near-periodic states obtained by fine-tuning the initial parameter r_0, since these states appear to be periodic and time-symmetric to a very high degree. For a specific case, Figures 19 and 20 show the real part of the modes coming from the Fourier transform of the evolution and the modes obtained by the direct solution of the mode system. The comparison is done with a near-periodic state corresponding to the first peak on the lifetime curve with initial data c = 1. The Fourier decomposition is performed between the two maxima at t_1 = 1592.29 and t_2 = 1596.78. The frequency calculated from the position of the maxima is ω = 1.398665, which is used to compute the corresponding periodic solution by solving the mode system. The agreement for the first oscillating mode, n = 2, is remarkably good, even in the oscillating-tail region. The curves obtained by the two different methods show similarly good agreement not only for ω = 1.398665 but for all frequencies for which near-periodic states exist (i.e. for 1.365 < ω < √2). This comparison between the results coming from the evolution code of Sec. II and from the Fourier-mode decomposition of the quasi-breathers is a very strong consistency check; let us recall that the two codes have been developed completely independently. Figure 22 shows, for large radii, the first mode for which the oscillatory term appears, i.e. φ_2, in the case ω = 1.38.
It can be seen from the first of the three plots that the real part of the Fourier transform agrees very well with the decomposition of the corresponding QB; the two curves can hardly be distinguished. However, just at those radii where the oscillatory behavior appears, the imaginary part starts being comparable to the real part, breaking the time symmetry in this outer region. The second plot shows that the absolute value of the complex φ_2 obtained by the Fourier decomposition essentially behaves like a smooth envelope curve covering the standing-wave-like peaks of |φ_2| obtained by the mode decomposition. The presence of the complex argument of the Fourier transform, shown on the third plot, indicates that the oscillating tail obtained by the evolution code is composed of outgoing waves carrying energy away from the core region.

C. Oscillons from Gaussian initial data

Oscillons obtained by the evolution of generic Gaussian initial data are relatively long-lived for a large set of the possible values of the initial parameter r_0. Although on Fig. 23 the agreement of the modes is not as excellent as for the near-periodic states, the curves calculated by the two methods are still remarkably close. Especially important is the similarity in the tail section of the first oscillating mode, i.e. mode φ_2. This indicates that the periodic quasi-breather solutions can describe, at least qualitatively, even the generic long-lived oscillons. In particular, the existence of the oscillating tail region is responsible for the slow but steady energy loss of these configurations.

D. Initial data obtained by mode decomposition

We have also tested our scenario in the reverse way. First, for a given ω (1.38 in the first example), we have solved the system for the modes of the quasi-breather. In particular, we can compute Φ(t = 0) at the moment of time symmetry and use this as initial data for the evolution code. The evolution of such initial data is shown on Fig. 28 for numerical simulations with various spatial resolutions. The field (here its value at the origin) oscillates at the appropriate frequency for a relatively long period (up to approximately t = 300, with about 66 oscillations) before it collapses in a sub- or supercritical way, very similarly to the collapse of the c = −0.4 states shown on Fig. 4. The initial state is so close to the ideal configuration that it even depends on the chosen numerical resolution whether, in the final stage, the configuration collapses in a subcritical or supercritical way. Strangely, the longest-lived state is not the one with the highest resolution, which is probably connected to the fact that the initial data still contain small numerical errors. Although the lifetimes attainable by this method are very long compared to the dynamical time scale of the linearized problem, they are still shorter than the t ≈ 2000 achievable by fine-tuning the first peak of the c = −0.4 initial data. Actually, it should not be surprising that the numerically determined initial data corresponding to a QB cannot be as long-lived as a configuration which is fine-tuned to 32 decimal digits. Since the modes are matched to oscillating tails at large radii, the initial data provided by the mode decomposition of a QB are valid up to arbitrarily large r. However, because of the slow falloff of the oscillating tails, the energy contained in balls of radius r diverges as r goes to infinity.
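Constructing this time-symmetric initial data from the modes is immediate: at t = 0 all the cosines equal one, so Φ(r, 0) is just the sum of the mode profiles, while ∂_tΦ(r, 0) vanishes by time symmetry. A minimal sketch (the mode profiles below are made-up placeholders, not actual quasi-breather data):

import numpy as np

def initial_data_from_modes(phi_n):
    # Time-symmetric initial data from the mode decomposition
    # Phi(r, t) = sum_n phi_n(r) cos(n omega t):
    #   Phi(r, 0)     = sum_n phi_n(r)   (all cosines equal 1 at t = 0)
    #   dPhi/dt(r, 0) = 0                (time symmetry)
    # phi_n is an array of shape (n_modes, nr).
    Phi0 = phi_n.sum(axis=0)
    dPhi0_dt = np.zeros_like(Phi0)
    return Phi0, dPhi0_dt

# Example with made-up radial profiles on a grid extending to R_lim = 44.
r = np.linspace(0.1, 44.0, 400)
phi_n = np.vstack([np.exp(-r), 0.3 * np.exp(-0.3 * r), 1e-4 * np.sin(2.0 * r) / r])
Phi0, dPhi0_dt = initial_data_from_modes(phi_n)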
Using such a configuration as initial data for our evolution code does not cause serious problems, since the physical distance between grid points in our numerical representation increases with the distance from the center, and consequently the high-frequency tails cannot be represented numerically above a certain radius. This provides a cutoff in the initial configuration and an effective outer boundary during the evolution. An advantage of our conformal compactification method is that this outer boundary moves to higher and higher radii when the numerical resolution is increased. On Fig. 29 we show the initial part of the upper envelope curve for numerical runs with various spatial resolutions. We emphasize that this strong resolution dependence is entirely due to the inadequate representation of the infinite-energy initial data. When we used initial data with compact support or sufficiently fast falloff, such as the Gaussian initial data, the code remained convergent up to much longer times (of the order of t = 10000), and the curves with various resolutions agreed to very high precision in the initial stage (up to around t = 100).

An important point is that, with this method, we can generate almost periodic oscillons at frequencies which are difficult or impossible to attain by fine-tuning initial data of the Gaussian form. On Fig. 30 we show the upper envelope of the oscillations at r = 0 in the initial stage of the evolutions when using initial data provided by the mode decomposition method with frequencies ω = 1.30 and ω = 1.36. Again, as the numerical resolution is increased, the size of the oscillating tail that can be taken into account in the numerical representation of the initial data gets larger, and the evolution remains nearly periodic for longer times. The field can oscillate truly periodically only if the energy lost by radiation through the dynamically oscillating tail is balanced by incoming radiation already present in the tail section of the initial data. If the tail is cut off above some radius r, then the non-periodic region moves inwards from that radius essentially at the speed of light. In the non-periodic domain, for the frequencies ω ≤ 1.36 the tail amplitude appears to be large enough to cause a small but significant energy loss, a decrease in amplitude, and consequently a steady frequency increase. On Figs. 31 and 32 we show the long-time behavior of the amplitude and frequency of the oscillations generated by initial data with frequencies ω = 1.30 and ω = 1.36. Apart from a relatively short stable initial stage, the main characteristic of the evolution is a slow but steady decrease of the amplitude, accompanied by a simultaneous increase of the frequency up to a point (approximately ω = 1.365) where the configuration quickly decays. This evolution is very similar to the behavior of generic (i.e. not fine-tuned) oscillons started from Gaussian initial data. A typical example of such an evolution, with initial data corresponding to an r_0 close to the top of the lifetime curve (but between two peaks) with c = 1, is also shown on Figs. 31 and 32 in order to facilitate comparison. It is also rather remarkable that the onset of the rapid decay seems to coincide with the configuration which minimizes the energy inside the core (see Fig. 17).
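The Fourier decompositions used throughout this section (and again below for the ω = 1.3 data) amount to projecting the evolved field onto cos(nω(t − t_1)) over one period between two maxima. The following self-contained sketch performs this projection on synthetic data; the synthetic radial profiles merely stand in for output of the evolution code, which is not reproduced here.

import numpy as np

omega = 1.398665                                  # frequency measured from two maxima
T = 2.0 * np.pi / omega
nt, nr = 512, 200
t = np.arange(nt) * (T / nt)                      # one period, t measured from the first maximum
r = np.linspace(0.1, 40.0, nr)

# Synthetic field standing in for the evolution output Phi(t, r).
profiles = [np.exp(-r), 0.5 * np.exp(-0.3 * r), 0.05 * np.sin(2.0 * r) / r]
Phi = sum(p[None, :] * np.cos(n * omega * t)[:, None] for n, p in enumerate(profiles))

def cosine_modes(Phi, t, omega, n_modes):
    # Project Phi(t, r) onto cos(n * omega * t) for n = 0 .. n_modes - 1.
    modes = []
    for n in range(n_modes):
        w = np.cos(n * omega * t)
        modes.append(Phi.T @ w / np.dot(w, w))    # one radial profile per mode
    return np.array(modes)

phi_n = cosine_modes(Phi, t, omega, 4)
print(np.max(np.abs(phi_n[1] - profiles[1])))     # recovers the input profile to round-off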
The long-time evolution of low-frequency initial data provided by the mode decomposition method is very similar to the evolution of generic oscillons evolving from Gaussian initial data. An important difference, though, is the smaller low-frequency modulation of the envelope curve and of the time dependence of the frequency. From this one can expect the Fourier decomposition of these states to be even closer to that of the quasi-breather with the corresponding frequency. On Figures 33 and 34 we give the Fourier decomposition of the evolution of the ω = 1.3 initial data between the two maxima after t = 5257.45, where the calculated frequency is ω = 1.35997. The agreement is markedly better than on Figs. 26 and 27, which is most likely due to the much lower amplitude of the low-frequency modulations of the envelope of the field value at the center. These low-frequency modes are excited to a higher amplitude by Gaussian-type initial data, while they appear to be present to a much lower extent in the lower-frequency periodic initial data (with ω = 1.3 in this case). We remind the reader that the existence of the peaks on the lifetime curve (see Fig. 4 of Ref.) is closely related to these low-frequency oscillations: the peaks separate domains of the initial data in r_0 with different numbers of low-frequency oscillations on the envelope curve. (On Figs. 31 and 32 the first two evolutions use initial data provided by the mode decomposition method, obtained by solving the system at the frequencies ω = 1.30 and ω = 1.36, while the third corresponds to Gaussian-type initial data with c = 1 and r_0 = 2.70716.)

VII. CONCLUSION

In this paper, we have adapted and applied spectral methods implemented in the LORENE library to find time-periodic solutions of the spherically symmetric wave equation of φ⁴ theory. Our code passed numerous tests and we are quite confident that it is sufficiently precise. With our code we find that for frequencies 0 < ω < √2 there is a whole family of standing-wave-type solutions with a regular origin, having a well-localized core and an oscillatory tail. Because of the slow decrease of the oscillatory tail, the total energy of these solutions in a ball of sufficiently large radius R is proportional to R, cf. Figs.. We have constructed a special class of solutions, called quasi-breathers (QBs), defined by minimizing the amplitude of the oscillatory tail. The size of the core of the QBs gets larger and larger as ω → √2, while the amplitude of the oscillatory tail gets smaller and smaller. Interestingly, the energy contained in the core of the QBs exhibits a minimum for ω ≈ 1.365. Using our high-precision time-evolution code, we have also investigated the time evolution of Gaussian initial data. We observe that a generic oscillon state can be characterized by a slowly varying frequency ω(t), increasing in time up to a critical value ω_c, at which point the oscillon decays rapidly. We have also investigated in detail the near-periodic states described first in Ref., which can be characterized by an almost constant frequency. In contradistinction to Ref., we find such near-periodic states for any frequency in the range ω_c < ω < √2. Moreover, we find that the near-periodic states decay in time (i.e. they cannot be truly time-periodic states), cf. Figs.. In particular, while losing some of their energy, their frequency decreases very slowly with time.
By a careful comparison of the Fourier modes of an oscillon state of frequency ω(t) with those of the corresponding QB, we have obtained convincing evidence that the localized part of the oscillon is nothing but the core of the QB of the same frequency. Our results demonstrate that the time evolution of an oscillon state can be described, to a good approximation, as an adiabatic evolution through a sequence of QBs with a slowly changing frequency ω(t). What is more, the oscillatory tail of the QB describes very well the standing-wave part of the oscillon. Therefore any oscillon contains the core and a significant part of the oscillatory tail of the corresponding QB. The existence of near-periodic states is closely related to the fact that the energy of the oscillon core exhibits a minimum for a critical frequency ω_c ≈ 1.365: for ω_c < ω < √2 a single unstable mode appears, which can be suppressed by fine-tuning the initial data.

VIII. ACKNOWLEDGMENTS

This research has been supported by OTKA Grants No. T034337, TS044665, K61636 and by the Pôle Numérique Meudon-Tours. G. F. and I. R. would like to thank the Bolyai Foundation for financial support.
The invention relates to a dishwasher comprising a filter system for cleaning the dishwashing liquid, and to a method for operating the same. During cleaning of items to be washed in a dishwasher, dishwashing residues are released from the items to be washed; these accumulate in the dishwashing liquid and are partly circulated with the dishwashing liquid during the entire dishwashing process. The more dishwashing residues are entrained in the dishwashing liquid, the more adversely the dishwashing result is affected. Furthermore, dishwashing residues entrained in the dishwashing liquid can become deposited in the transport paths of the dishwashing liquid or clog the sieves provided in the dishwasher. Filter systems in the form of sieve devices which can be removed from the dishwasher, cleaned and re-inserted have already been proposed to eliminate this problem. These sieve devices have the disadvantage that the cleaning process is laborious and unpleasant for the user. Furthermore, the cleaning is frequently forgotten or carried out too infrequently, so that problem-free operation of the dishwasher can no longer be ensured as a result of clogging of the sieve devices and obstruction of the transport paths of the dishwashing liquid, which adversely impairs the dishwashing result and, in extreme cases, can result in destruction of the dishwasher. In other known dishwashing machines, attempts are made to improve the dishwashing result by using large quantities of water, long running times and multistage filter systems. These dishwashing machines have the disadvantage of an elevated energy and water requirement. In addition, the known filter systems are not in a position to filter out fine-grained impurities from the dishwashing liquid, since the sieves even of multistage filter systems are too coarse-meshed, while finer-meshed sieves impede the circulation of the dishwashing liquid in the dishwasher.
Q: How can I be visible in the dark when I'm signalling a turn?

My bike and I are fairly visible at night: white light in front, red light in the back, bright yellow fenders, reflective tabs and a light on my helmet. I use hand signals to alert motorists when I'm turning... but I don't think they can see my hands in the dark. Is there anything different I should do when I'm signalling a turn in an unlit place?

A: I should imagine you should be looking for some reflective gloves, or even some glo gloves. Check these out as an example: http://lifehacker.com/395978/glo-gloves-reflective-cycling-gear Also, a good reflective jacket with good reflective strips down the arms is useful.

A: Putting reflective bands on your sleeves can help make your movements visible. They don't need to be attached permanently: a second pair of trouser clips works very well when strapped around your cuffs, or possibly the cuffs of your gloves if you're wearing big winter gloves. Something like Ron Hill snap bands (there are lots of equivalent products with different names) are very visible, take up next to no space when you're off the bike, don't encumber your wrists or get uncomfortable, and take only a few seconds to put on when you set off. You might even find your local road safety organization gives away bands like these at events, so they don't have to cost anything. Wearing reflective (or at least brightly coloured) gloves is also pretty good. Other than that it's all about being extremely cautious.

A: Similar to the LED gloves, you can always make a signalling jacket. I bet I know a few of our friends that would be totally down to help with that ;)
Motor skills assessments: support for a general motor factor for the Movement Assessment Battery for Children-2 and the Bruininks-Oseretsky Test of Motor Proficiency-2. OBJECTIVE To evaluate the construct validity and model-based reliability of general and specific contributions of the subscales of the Movement Assessment Battery for Children-2 (MABC-2) and Bruininks-Oseretsky Test of Motor Proficiency-2 (BOT-2) when evaluating motor skills across a range of psychiatric disorders. METHODS Confirmatory factor analysis (CFA) and bifactor analysis were conducted on BOT-2 data from 187 elementary school students (grades 1 to 6) (mean age: 113 ± 20 months; boys: n = 117, 62.56%) and on MABC-2 data from 127 elementary school students (grade 1) (mean age: 76 ± 2 months; boys: n = 58, 45.67%). RESULTS The results of the CFA fit the data for multidimensionality for the BOT-2 and presented poor fit indices for the MABC-2. For both tests, the bifactor model showed that the reliability of the subscales was poor. CONCLUSIONS The BOT-2 exhibited factorial validity with a multidimensional structure among the current samples, but the MABC-2 showed poor fit indices, insufficient to confirm its multidimensional structure. For both tests, most of the reliable variance came from a general motor factor (M-factor), therefore the scoring and reporting of subscale scores were not justified for both tests.
by Dr. Ibrahim Shaikh* Table of contents 1. Al-Zahrawi 2. Selected articles on Al-Zahrawi 3. General References 4. Notes *** Note of the editor Abu al-Qasim Al-Zahrawi the Great Surgeon, Dr Ibrahim Shaikh, Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi, Al-Zahrawi, Abulcasis, Islamic medicine, history of surgery, Andalus, Islamic Spain. *** 1. Al-Zahrawi Figure 1: Imaginary portrait of Al-Zahrawi. (Source) Abu al-Qasim al-Zahrawi, known also by his Latin name Albucasis, was born near Cordoba in 936 CE. He was one of the greatest surgeons of his time. His encyclopaedia of surgery was used as standard reference work in the subject in all the universities of Europe for over five hundred years. The Muslim scientists, Al-Razi, Ibn Sina and Al-Zahrawi are among the most famous of those who worked in the field of medicine in pre-modern times. They have presented to the world scientific treasures which are today still considered important references for medicine and medical sciences as a whole. Abu al-Qasim Khalaf ibn Abbas Al-Zahrawi (known in the West as Albucasis) was born at Madinat al-Zahra near Cordoba in Islamic Spain on 936 CE and died in 1013 CE. He descended from the Ansar tribe of Arabia who had settled earlier in Spain. His outstanding contribution to medicine is his encyclopaedic work Al-Tasrif li-man 'ajaza 'an al-ta'lif, a long and detailed work in thirty treatises. The Al-Tasrif, completed about 1000 CE, was the result of almost fifty years of medical practice and experience. Here is how the author expressed his credo in this book: "What ever I know, I owe solely to my assiduous reading of books of the ancients, to my desire to understand them and to appropriate this science; then I have added the observation and experience of my whole life." Figure 2: The beginning of the first article of Part I of a manuscript of Kitab al-tasrif li-man 'ajaza 'an al-ta'lif authored by Al-Zahrawi. The page shows his definition of medicine, quoted from Al-Razi, as the preservation of health in healthy individuals and its restoration to sick individuals as much as possible by human abilities (Source) Al-Tasrif is an illustrated encyclopaedia of medicine and surgery in 1500 pages. The contents of the book show that Al-Zahrawi was not only a medical scholar, but a great practicing physician and surgeon. His book influenced the progress of medicine and surgery in Europe after it was translated into Latin in the late 12th century, by Gerard of Cremona, and then afterwards into different European languages, including French and English. Al-Tasrif comprises 30 treatises or books (maqâlat) and was intended for medical students and the practicing physician, for whom it was a ready and useful companion in a multitude of situations since it answered all kinds of clinical problems. The book contains the earliest pictures of surgical instruments in history. About 200 of them are described and illustrated. In places, the use of the instrument in the actual surgical procedure is shown. The first two treatises were translated into Latin as Liber Theoricae, which was printed in Augusburg in 1519. In them, Al-Zahrawi classified 325 diseases and discussed their symptomatology and treatment. In folio 145 of this Latin translation, he described, for the first time in medical history, a haemorrhagic disease transmitted by unaffected women to their male children; today we call it haemophilia. 
Book 28 is on pharmacy and was translated into Latin as early as 1288 under the title Liber Servitoris.[1] Of all the contents of Al-Zahrawi's Al-Tasrif, book 30 on surgery became the most famous and had by far the widest and the greatest influence. Translated into Latin by Gerard of Cremona (1114-1187), it went into at least ten Latin editions between 1497 and 1544. The last edition was that of John Channing in Oxford (1778), which contained both the original Arabic text and its Latin translation on alternate pages. Almost all European authors of surgical texts from the 12th to the 16th centuries referred to Al-Zahrawi's surgery and copied from him. They included Roger of Salerno (d. 1180), Guglielmo Salicefte (1201-1277), Lanfranchi (d. 1315), Henri de Mondeville (1260-1320), Mondinus of Bologna (1275-1326), Bruno of Calabria (d. 1352), Guy de Chaulliac (1300-1368), Valescus of Taranta (1382-1417), Nicholas of Florence (d. 1411), and Leonardo da Bertapagatie of Padua (d. 1460). Figure 3: Frontispiece of the Latin translation of Al-Zahrawi's Kitab al-tasrif: Liber theoricae necnon practicae Alsaharavii... iam summa diligentia & cura depromptus in lucem (Impensis Sigismundi Grimm & Marci Vuirsung, Augustae Vindelicorum, 1519, 159 leaves). This is a translation of the first two books of Al-Tasrif, edited by Paul Ricius. For a long time, Al-Tasrif was an important primary source for European medical knowledge, and served as a reference for doctors and surgeons. There were no less than 10 editions of its Latin version between 1497 and 1544, before it was translated into French, Hebrew, and English. (Source). The 300 pages of the book on surgery represent the first book of this size devoted solely to surgery, which at that time also included dentistry and what one may term surgical dermatology. Here, Al-Zahrawi developed all aspects of surgery and its various branches, from ophthalmology and diseases of the ear, nose, and throat, surgery of the head and neck, to general surgery, obstetrics, gynaecology. Military medicine, urology, and orthopaedic surgery were also included. He divided the surgery section of Al-Tasrif into three part: 1. on cauterization (56 sections); 2. on surgery (97 sections), 3. on orthopaedics (35 sections). It is no wonder then that Al-Zahrawi's outstanding achievement awakened in Europe a hunger for Arabic medical literature, and that his book reached such proeminence that a modern historian considered it as the foremost text book in Western Christendom. Serefeddin Sabuncuoglu (1385-1468) was a surgeon who lived in Amasia in central Anatolia. He wrote his book Cerrahiye-tu l-Hanniyye in 1460 at the age of 80 after serving for many years as a chief surgeon in Amasiya Hospital (Darussifa) for years. His text Cerrahiye-tu l-Hanniyye was presented to Sultan Mohammad the conqueror, but the manuscript disappeared afterwards until it emerged in the 1920s. The book is roughly a translation of Al-Tasrif of Al-Zahrawi, but Sabuncuoglu added his own experiences and brought interesting comments on previous application, besides that every surgical procedure is illustrated in his work. William Hunter (1717-1783) used Arabic manuscripts for his study on Aneurysm. 
Among them was a copy of Al-Zahrawi's Kitab al-Tasrif.[2] In his biography of William Hunter, Sir Charles lllingworth, the author described the circumstances and the context of the purchase by William Hunter of an Arabic manuscript of Al-Tasrif of Al-Zahrawi, which he obtained from Aleppo in Syria.[3] Figure 4: Artistic scene of Al-Zahrawi treating a patient while students look on. Credits: Wellcome Library, London (Source) The oldest medical manuscript written in England around 1250 according to The British Medical Journal has startling similarity with Al-Zahrawi's volume: "This interesting relic consists of eighty-nine leaves of volume, written in beautiful gothic script in the Latin tongue. The work contains six separate treatises, of which the first and most important is the DE CHIRURGIA OF ALBU-HASIM [sic] (Albucasis, Albucasim ). This occupies forty four leaves, three of which are missing. It may be contended that this really is the oldest extant medical textbook written in England."[4] Thus, in conclusion, Al-Zahrawi was not only one of the greatest surgeons of medieval Islam, but a great educationist and psychiatrist as well. He devoted a substantial section in the Tasrif to child education and behaviour, table etiquette, school curriculum, and academic specialisation.[5] In his native city of Cordoba there is a street called 'Al-Bucasis' named after him. Across the river Wadi Al-Kabir on the other side of the city, in the Calla Hurra Museum, his instruments are displayed in his honour. As a tribute, his 200 surgical instruments were reproduced by Fuat Sezgin and exhibited in 1992 in Madrid's Archaeological Museum. A catalogue, El-legado Cientifico Andalusi, published by the museum, has good colour photos and manuscripts, some of which are on Al-Zahrawi's achievements, legacy and influence. Figure 5: A copper spoon used as a medical implement to press down the tongue (dated from the 3rd century H/ 9th century CE, Abbasid period) preserved at the Museum of Islamic Art in Cairo. This tool demonstrates that the physicians of the Islamic medical tradition attached much importance to medicine and medical tools in various areas of treatment and how they developed them. A detailed description of these tools can be found in the book Al-Tasrif of al-Zahrawi. (Source). Hakim Saead, from Hamdard Foundation in Karachi, Pakistan, has a permanent display of silver surgical instruments of Al-Zahrawi in the library of the Foundation. He also published a colour booklet. Professor Ahmed Dhieb of Tunis has also studied the surgical instruments and reconstructed them; they were displayed in the 36th International Congress for the History of Medicine held in Tunis City in Tunisia. In this exhibition, all surgical instruments of Al-Zahrawi were described and illustrated in detail in three languages - Arabic, French and English under the title Tools of Civilisation. 2. Selected articles on Al-Zahrawi and Islamic medicine on MuslimHeritage.com Figure 6: Extract from the Arabic text published in De chirurgia. Arabice et Latine, cura Johannis Channing, natu etr civitate Londinensis (Oxford, 1778). This book contains the surgical section of Al-Tasrif, the first rational, complete and illustrated treatise on surgery and surgical instruments. The surgical portion of Al-Tasrif was published separately and became the first independent illustrated work on the subject. 
It contained illustrations of a remarkable array of surgical instruments and described operations of fractures, dislocations, bladder stones, gangrene and other conditions. It replaced Paul of Aegina's Epitome as a standard work and remained the most used textbook of surgery for nearly 500 years.(Source) Abdel-Halim, Rabie E., The Missing Link in the History of Urology: A Call for More Efforts to Bridge the Gap (published 01 May 2009). Abdel-Halim, Rabie E., Paediatric Urology 1000 Years Ago (published 13 May 2009). Abdel-Halim, Rabie E., and Al-Mazrooa, Adnan A., Anaesthesia 1000 Years Ago: A Historical Investigation (published 05 June 2009). Abdel-Halim, Rabie E., and Elfaqih, Salah R., Pericardial Pathology 900 Years Ago: A Study and Translations from an Arabic Medical Textbook (published 06 May 2009). Burnett, Charles, Arabic Medicine in the Mediterranean (published 29 November 2004). Buyukunal, S. N. Cenk, and Sari Nil The Earliest Paediatric Surgical Atlas: Cerrahiye-i Ilhaniye (published 07 September, 2005). FSTC Research Team, Medical Sciences in the Islamic Civilization: Scholars, Fields of Expertise and Institutions. Section 3: Al-Zahrawi the Genius Surgeon (published 2 February 2009). Kaf al-Ghazal, Sharif, Selected Gleanings from the History of Islamic Medicine ( series of 5 articles published 03 April, 2007). Article 3: Al-Zahrawi (Albucasis) the Great Andalusian Surgeon. Khan, Aliya, and Hehmeyer, Ingrid, Islam's Forgotten Contributions to Medical Science (09 January 2009). Sayili, Aydin, Certain Aspects of Medical Instruction in Medieval Islam and its Influences on Europe (published 24 October, 2008). Shaikh, Ibrahim, Eye Specialists in Islam (20 December, 2001) 3. General References on al-Zahrawi Figure 7: Front cover of Albucasis (Abu Al-Qasim Al-Zahrawi): Renowned Muslim Surgeon of the Tenth Century by Fred Ramen (Rosen Central, 2005) Abdel-Halim, Rabie E., Altwaijiri, Ali S., Elfaqih, Salah R., Mitwali, Ahmad H., "Extraction of Urinary Bladder Described by Abul-Qasim Khalaf Alzahrawi (Albucasis) (325-404 H, 930-1013 AD)", Saudi Medical Journal 24 (12), 2003: 1283-1291. Abdel-Halim, A. E., et al., Extraction of Urinary Bladder Stone as Described by Abul-Qasim Khalaf Ibn Abbas Alzahrawi (Albucasis) (930-1013 AD, 325-404 H), Saudi Journal of Kidney Diseases and Transplantation, 9(2), 1998, pp. 157-168. See also the article in PDF here. Abd al-Rahim, Abd al-Rahim Khalaf, Al-Adawat Jirahiya wa al-Awani al-Tibiya fi al-'asr al-Islami min al-Qarn al-Awal hatta al-Qarn al-Tasi' lil Hijra [Surgical Instruments and Medical Vessels in the Islamic Period from the First Century to the Ninth Century], MA thesis, University of Cairo, 1999. Ahmad, Z., "Al-Zahrawi, The Father of Surgery", ANZ Journal of Surgery, vol. 77, (Suppl. 1), 2007, A83. Al-Hadidi, Khaled, "The Role of Muslem Scholars in Oto-rhino-Laryngology", The Egyptian Journal of O.R.L., 4 (1), 1978, pp. 1-15. [Al-Zahrawi], Albucasis on Surgery and Instruments. A definitive edition of the Arabic text with English translation and commentary by M. S. Spink and G. L. Lewis. London, Wellcome Institute of the History of Medicine, 1973. [Al-Zahrawi], Albucasis de chirurgia. Arabice et Latine… Cura Johannis Channing. Oxonii: e typographeo Clarendoniano, 1778. 'Awad, H. A., "Al- Jiraha fi al-'asr al-Islami [Surgery in the Islamic Period]", Majalat al-Dirasat al-Islamiya [Journal of Islamic Studies], Cairo, vol. 3, 1988. 
Hamarneh, Sami Khalaf, and Sonnedecker, Glenn, A Pharmaceutical View of Abulcasis al-Zahrawi in Moorish Spain, with special reference to the "Adhean, Leiden, E.J. Brill, 1963. Horden, P., 'The Earliest Hospitals in Byzantium, Western Europe, and Islam' Journal of Interdisciplinary History, vol. 35 (3), 2005, pp 361-389. Hussain, F. A., The History and Impact of the Muslim Hospital. London, Council for Scientific and Medical History, 2009. Kaadan, Abdul Nasser, "Albucasis and Extraction of Bladder Stone", Journal of the International Society for the History of Islamic Medicine, vol. 3, 2004, pp. 28-33. Levey M., Early Arabic Pharmacology, E. J. Brill, Leiden, 1973. Martin-Araguz, C. Bustamante-Martinez, Ajo V. Fernandez-Armayor, J. M. Moreno-Martinez, "Neuroscience in al-Andalus and its Influence on Medieval Scholastic Medicine", Revista de neurología, vol. 34 (9), 2002, p. 877-892. [Medarus], "Portraits de Médecins", Albucasis, (Khalaf ibn Abbas Al-Zahrawi) 936-1013 Chirurgien arabe d'Espagne. Noble, Henry W., Tooth transplantation: a controversial story, History of Dentistry Research Group, Scottish Society for the History of Medicine, 2002. Ramen, Fred, Albucasis (Abu Al-Qasim Al-Zahrawi), The Rosen Publishing Group, 2006. Savage-Smith, Emilie, "Attitudes Toward Dissection in Medieval Islam", Journal of the History of Medicine and Allied Sciences (Oxford University Press), vol. 50 (1), 1995, pp. 67–110. Tabanelli, Mario, Albucasi, un chirurgo arabo dell'alto Medio Evo: la sua epoca, la sua vita, la sua opera. Firenze [Florence], L.S. Olschki, 1961. [Wikipedia], Al-Zahrawi. Wikipedia], [Kitab] Al-Tasrif 4. Notes [1.] Liber servitoris de praeparatione medicinarum simplicium. Bulchasin Benaberazerin, translatus a Simone Januensi, interprete Abraam Tortuosiensi, et divisit in tres tractatus. Dixit agregator hujus operis (Venetiis: Per Nicolaum Ienson, 1471, 1 vol., in-quarto. There are several copies of this edition, for example in the library of the University of Glasgow, Special collections, MS Hunterian Bx.3.26. [2.] See H. G. Farmer, "William Hunter and his Arabic Interest", in Presentation volume to William Barron Stevenson, edited by Cecil James Mullo Weir, University of Glasgow Oriental Society, 1945, "Studia semitica et orientalia, vol. 2". See a description of the Hunterian Collection on the website of the library of Glasgow University. [3.] Sir Charles Illingworth, The Story of William Hunter, Edinburgh: E. S. Livingstone, 1967, p. 58. [4.] [Note in "Nova Vetera" section], "The Oldest Medical Manuscript Written In England", The British Medical Journal (published by BMJ Publishing Group), vol. 2, no. 4096 (July 8, 1939), pp. 80-81; p. 81. [5.] Sami K. Hamarneh, Health Sciences in Early Islam: Collected Papers, Zahra Publishing Co., 1984. ~ End ~ Back to the Table of Contents * Dr Ibrahim Shaikh is a retired medical practitioner. He is a Fellow of the Manchester Medical Society, Manchester, UK.
The Effect of Lutein on Eye and Extra-Eye Health

Lutein is a carotenoid with reported anti-inflammatory properties. A large body of evidence shows that lutein has several beneficial effects, especially on eye health. In particular, lutein is known to improve or even prevent age-related macular disease, which is the leading cause of blindness and vision impairment. Furthermore, many studies have reported that lutein may also have positive effects in different clinical conditions, thus ameliorating cognitive function, decreasing the risk of cancer, and improving measures of cardiovascular health. At present, the available data have been obtained from both observational studies investigating lutein intake with food and a few intervention trials assessing the efficacy of lutein supplementation. In general, sustained lutein consumption, either through diet or supplementation, may contribute to reducing the burden of several chronic diseases. However, there are also conflicting data concerning lutein efficacy in inducing favorable effects on human health, and there are no univocal data concerning the most appropriate dosage for daily lutein supplementation. Therefore, based on the most recent findings, this review will focus on lutein properties, dietary sources, usual intake, efficacy in human health, and toxicity.

Introduction

A large body of evidence suggests that a diet rich in antioxidants, which have an anti-inflammatory role, may contribute to reducing the burden of chronic diseases. Carotenoids are nutrients widely distributed in foods, especially in fruit and vegetables, and appear to have antioxidant properties. In recent decades, there has been increasing interest in their effects on health; a high dietary intake of carotenoids has been associated with beneficial effects in several systemic diseases and in eye disorders, with protection of the retina from phototoxic light damage. Most studies have focused on lutein (L), a carotenoid with a strong antioxidant effect in vitro that has been associated with a reduced risk of age-related diseases. Lutein is a xanthophyll, i.e., an oxygenated carotenoid that all mammals, humans included, derive from their diet because they are unable to synthesize carotenoids. Several studies have shown that high L intake, either through diet or as a nutritional supplement, has beneficial effects on eye diseases, preventing or even improving both age-related macular degeneration (AMD) and cataract. However, conflicting data have been reported concerning L efficacy, and in 2006 it was claimed that no compelling evidence had been provided concerning the supposed beneficial effect of L on human health. Furthermore, no univocal data concerning the appropriate dosage for possible L supplementation had been found. More recently, a number of studies have suggested that L may indeed have favorable effects via anti-inflammatory activity, improving cognitive functions, and decreasing the risk of cancer, cardiovascular diseases and other systemic conditions. Overall, it seems that an adequate L intake, including by supplementation, may partly counter inflammatory processes and favor human health, but inconsistencies still remain. We reviewed the literature on the evidence for the health effects of L, including its usual intake with different diets, adequate doses, and safety of supplementation, with specific reference to eye diseases.
Structure and Distribution

The structure of L is similar to that of other carotenoids, with a skeleton made up of 40 carbon atoms organized into eight isoprene units, as shown in Figure 1. However, an important chemical difference with functional implications is the presence of two oxygen atoms in the structure, making L a polar carotenoid which is classified as a xanthophyll, namely an oxygenated carotenoid. Together with zeaxanthin (Z), another xanthophyll, L is the main carotenoid in the human macula, so that the two compounds are mostly referred to as macular pigments (MP). Lutein is found mainly in the inner plexiform layer and in Henle's fiber layer, but it can also be found in Müller cells. The presence of L has also been demonstrated in peripheral regions of the fovea, but its content decreases in the central region, where Z is prevalent by a 2:1 ratio. Interestingly, the content of carotenoids diminishes significantly, by a factor of 100, moving away from the macula. In infants, the macular levels of L are higher than those of Z, probably due to differences in transport mechanisms that are not yet completely developed. On the other hand, Bernstein et al. reported that uveal structures account for about 50% of total eye carotenoids and 30% of total eye L; this is the basis for the possible beneficial effects that L may have in the ciliary body and in the iris. Sato et al. suggested that L uptake in the retina may be mediated by a specific transporter, namely scavenger receptor class B type 1, thus explaining the massive build-up of L in the eye. Finally, L is significantly detectable not only in the eye but also in the brain, where it represents the main carotenoid, especially in infants and in the elderly.

Absorption and Metabolism

As reported above, mammals are not able to synthesize carotenoids, which therefore need to be introduced with food. Once ingested, L is absorbed by the mucosa of the small bowel and bound to chylomicrons; then, it is secreted into lymph and reaches the liver. In hepatocytes, L is incorporated into lipoproteins that are distributed to peripheral tissues, particularly the retina, where the highest concentrations have been demonstrated. L is fat-soluble, hence the dietary content of lipids mediates L absorption through its incorporation into micelles, and several dietary factors may compete. A diet rich in fiber has been reported to reduce carotenoid serum levels, also affecting L absorption, whereas the presence of other carotenoids in the diet might interfere with L assimilation, probably via a competitive mechanism.
The dietary content of iron and zinc, as well as protein deficiency, may affect L absorption; conversely, the presence of mono- and di-glycerides is likely to positively regulate L absorption, as suggested by significantly increased L plasma levels. Finally, extra-dietary factors may reduce L bioavailability. Orlistat, a drug that inhibits lipase activity, has been shown to decrease L absorption to an extent similar to that of impaired pancreatic enzyme activity, as in in vitro measurements relating to patients with cystic fibrosis; smoking and alcohol consumption reduce L bioavailability to a lesser extent.
Mechanisms of Action
Both animal and in vitro studies have demonstrated that some carotenoids are compounds with antioxidant activity. Lutein has been demonstrated to exert an extremely potent antioxidant action by quenching singlet oxygen and scavenging free radicals, although it seems to be less potent than Z. Another protective effect of L consists in its ability to filter blue light, thus reducing phototoxic damage to photoreceptor cells. Subczynski et al. hypothesized that L properties might be amplified by its localization in the most vulnerable regions of the retina and by its specific orientation in membranes. Notably, several studies have observed that L inhibits both the pro-inflammatory cytokine cascade and the transcription factor nuclear factor-κB (NF-κB). There is also compelling evidence that L reduces reactive oxygen species (ROS) production, the expression of inducible nitric oxide synthase (iNOS), and the activation of the complement system. Through all these mechanisms, it is quite conceivable that L may exert a pivotal role in regulating immune pathways, modulating inflammatory responses, and reducing oxidative damage.
Dietary Lutein Intake
Lutein is naturally abundant and available in fruit, cereals, and vegetables, and it is also present in egg yolk, as seen in Table 1, where its bioavailability is higher than in any other food. Since L intake depends on vegetable consumption, it may vary according to dietary habits, in a range that has been estimated from 0.67 mg/d to more than 20 mg/d. In individuals consuming a Western-style diet, the average daily L intake has been estimated at 1.7 mg/d, while in countries that consume a Mediterranean diet rich in fruit and vegetables it has been reported to be between 1.07 and 2.9 mg/d, with large inter-country variability. In a Korean population, the average L intake was estimated at about 3 mg/d. Interestingly, the highest L intake was reported in Pacific countries, where individuals consume a diet extremely rich in fruit and vegetables, reaching a peak of about 25 mg/d in the Fiji Islands.
Age-Related Macular Degeneration
Age-related macular degeneration (AMD) is the main cause of vision impairment and blindness in developed countries. Low habitual consumption of green leafy vegetables and fruit is one of the risk factors for AMD. To date, many studies have observed positive effects of L in terms of improvement of macular pigment optical density (MPOD) levels [12,14,16,32], visual acuity (VA), and contrast sensitivity (CS). The original studies that first investigated a possible protective role of L against AMD date back to the early 1990s.
In a case-control study including 421 individuals with neovascular AMD and 615 controls, it was found that the odds ratio (OR) of developing AMD was 0.3 when the highest quintile of L serum concentrations was compared with the lowest quintile, suggesting an inverse relationship between L levels and AMD risk. In the same cohort, the dietary L intake was associated with the risk of AMD, and again an OR of 0.43 was observed comparing the highest vs. the lowest quintile of dietary L intake. Following these observational studies, the efficacy of nutritional supplementation with L was investigated in intervention studies. The Lutein Antioxidant Supplementation Trial (LAST) included 90 individuals with atrophic AMD and demonstrated a significant beneficial effect of L supplementation (10 mg/d), either alone or in combination with other antioxidants, for nearly 1 year; in particular, it was observed that L enhanced MPOD, also improving VA and CS. The Carotenoids and Antioxidants in Age-Related Maculopathy Italian Study (CARMIS) demonstrated that L supplementation (10 mg/d for 1 year) in people with non-advanced AMD improved the dysfunction in the central retina assessed by multifocal electroretinograms, obtaining benefits also in terms of VA and glare sensitivity. According to a more recent study, L supplementation (20 mg/d for 3 months, followed by 10 mg/d for another 3 months) significantly increased MPOD by about 28% compared to placebo in 126 participants with AMD. The Carotenoids with Co-antioxidants in Age-Related Maculopathy (CARMA) study demonstrated that L (12 mg/d for 2 years) improved VA in 433 patients with early AMD. Furthermore, in the Combination of Lutein Effects in the Aging Retina (CLEAR) study, the MPOD of 72 patients with AMD significantly increased after one year of L supplementation (10 mg/d). The Age-Related Eye Disease Study 2 (AREDS2) is the most important and recent randomized controlled trial (RCT) to have assessed AMD treatment with oral supplementation of vitamins and micronutrients, including L. The original AREDS oral supplementation had already proven its efficacy in reducing the risk of developing advanced AMD. In order to analyze the effect of adding L (10 mg/d) and Z (2 mg/d) to the original formulation, the AREDS2 study included more than 4000 individuals at risk of developing late AMD. The trial failed to prove the efficacy of L in reducing the progression to advanced AMD or in improving VA. However, a 26% reduction of risk for late AMD was observed in individuals in the lowest quintile of L dietary intake who also received AREDS2 supplementation, especially in subjects with large drusen. In this study other formulations were also investigated. The AREDS formulation + L, with and without beta-carotene, was compared to the original AREDS formulation, because of the possible increased risk of lung cancer associated with beta-carotene consumption, especially in smokers; AMD progression was significantly reduced in the L-enriched group, suggesting that L might replace beta-carotene, improving treatment safety. Genetic factors are thought to influence the risk of AMD, in particular mutations involving genes encoding complement factor H (CFH). Interestingly, Ho et al. demonstrated that higher antioxidant and L intakes significantly decreased the risk of developing AMD in individuals with CFH gene variants, probably neutralizing the augmented genetic risk.
Two recent meta-analyses of L randomized clinical trials (RCTs) demonstrated beneficial effects of L on MPOD and VA. However, other studies failed to demonstrate that L supplementation was able to improve both VA and CS. In particular, it was also shown that L was ineffective in reducing the risk of developing AMD and in slowing progression to late AMD. According to the meta-analysis by Chong et al., L was unable to diminish the risk of developing AMD. These differences between the studies might be partly explained by the differing duration of L supplementation, or by differences in the clinical characteristics of patients enrolled in the trial protocols. Therefore, though many studies concluded that L may be able to prevent or treat AMD, additional RCTs are needed to elucidate and characterize the therapeutic properties of L. A list of the intervention studies that have assessed the effect of L on visual performance is presented in Table 2. Similarly, the issue of the most appropriate dosage for L supplementation remains unresolved. According to three studies, L supplementation with 20 mg/d was not more effective than the dose of 10 mg/d in improving visual performance. Similar results were also obtained by a meta-analysis of 5 RCTs. Conversely, other studies were consistent with a linear dose-response pattern for L efficacy, supported by a meta-analysis of 8 RCTs, suggesting that the higher the L intake, the better the outcome. These conflicting results very likely stem from differences in the study populations, with specific reference to dietary intake. More than 20 years ago, Seddon et al. reported that an L intake of at least 6 mg/d was protective against AMD, and AREDS2 showed that the population in the lowest quintile of dietary intake (with a median consumption of only 0.7 mg/d per 1000 kcal of energy) had the most positive results from L supplementation. Therefore, in the presence of an insufficient average intake of L of approximately 1.4 mg/d, supplementation of 10 mg/d, as in the AREDS2, might be the most appropriate dosage for chronic L supplementation.
Table 2. Intervention studies on the effects of lutein on visual performance (columns: Study (year); Design (Number of Participants); Intervention and Lutein Supplementation; Effects). [Only fragments of the table rows are recoverable here: interventions included formulations of L 10 mg + Z 2 mg, L 10 mg + Z 1 mg, and L 6 mg in RCTs and intervention trials enrolling healthy controls and participants with non-advanced AMD or central serous chorioretinopathy; reported effects after 12 weeks to 1 year concerned MPOD, recovery from photostress, chromatic contrast, VA, glare sensitivity, CS, retinal sensitivity, and central versus peripheral retinal dysfunction.]
Cataract
Cataract, as well as AMD, is a growing health problem responsible for vision loss, due to oxidation of lens structures. Antioxidants, specifically L, might be a safe treatment for this condition, and in vitro studies have demonstrated that L is able to prevent cataract in bovine cells by inhibiting the proliferation and migration of lens cells, as well as to prevent ultraviolet damage in human lens cells.
A few observational studies have found a significant correlation between high L plasma concentrations and a low risk of developing cataract, and an inverse association has also been reported between daily L intake and the risk of cataract, especially nuclear cataract. In particular, in over 30,000 participants Brown et al. demonstrated that L supplementation for 2 years (15 mg/d) was effective in improving visual function in subjects with age-related nuclear cataract, but these beneficial effects are controversial. Lyle et al. showed that high L serum levels had no effect on the incidence of cataract, nor was there a correlation between the disease and L intake. Also, the AREDS2 study showed that L treatment was ineffective both in preventing vision loss and in slowing progression towards cataract surgery. However, one meta-analysis reported a significant inverse association between L serum levels and the risk of nuclear cataract, and another meta-analysis of 6 cohort studies including more than 40,000 participants concluded that daily L intake was inversely associated with the risk of developing nuclear cataract in a linear dose-response manner. Mares-Perlman et al. suggested that these uncertainties could be explained by limitations in the studies and by the different types of cataract; irrespective of the conflicting data, Weikel et al. concluded that L might be useful in cataract treatment. More data are definitely needed on this issue, including carefully conducted RCTs and prospective clinical studies using data from databases of patients who have been prescribed lutein to slow the progression of cataract. The possibility of effectively treating this disabling disease with a safe nutritional intervention is a key issue and needs to be clarified.
Other Eye Diseases
The effect of L has also been investigated in other eye diseases, but the results have generally been unsatisfactory. A recent RCT reported that L supplementation (10 mg/d) for 36 weeks in patients with diabetic retinopathy (DR) improved only CS, without any effect on VA. The Atherosclerosis Risk in Communities (ARIC) study showed that L intake was not associated with DR, despite its marked antioxidant properties. In retinitis pigmentosa, 24-week L supplementation significantly increased the visual field in one study, but other studies failed to confirm this finding. Retinopathy of prematurity (ROP) is an oxidative stress-related disease, and it has been hypothesized that L might be useful in managing this condition thanks to its antioxidant properties. A recent study in mice demonstrated that L significantly ameliorated retinal revascularization, but in preterm infants L failed to prevent ROP. In this case as well, more research is eagerly awaited.
Cognitive Function
Given the presence of L in the brain, much research has been conducted to investigate the potential of L in slowing down age-related cognitive decline and, eventually, in recovering damaged cognitive functions. Lutein is considered to have a significant role in preserving neural efficiency. In a cohort of centenarians, Johnson et al. observed that higher circulating L levels were associated with better cognitive performance, and similar results were also reported in the general population of older adults. Higher circulating L levels have been correlated with better cognitive ability scores, as measured by the Wechsler scale, and it has been proposed that an increased parahippocampal cortex volume may account for these results.
Interestingly, Picone et al. observed a positive correlation between L and blood concentrations of activin-A, a neuroprotection marker, in infants. In older women, Johnson et al. demonstrated that L supplementation significantly improved some cognitive functions, namely verbal fluency, memory scores, and efficient learning. Recent data also demonstrated that L is effective in improving memory and other measures of cognitive function, such as spatial memory and reasoning ability. By functional magnetic resonance imaging, it has been observed that L may improve cerebral perfusion or neural efficiency in older adults, and two studies have demonstrated a relationship between MPOD and some aspects of cognitive function, such as prospective memory, verbal fluency, and processing speed. Furthermore, Kuchan et al. reported that L stimulates the in vitro differentiation of human stem cells into neural progenitors, a noteworthy effect that might protect against specific types of dementia such as Alzheimer's disease. In mice, L was able to reverse the loss of nigral dopaminergic neurons by reducing oxidative stress and improving mitochondrial dysfunction. Morris et al. prospectively reported that dietary factors associated with L intake slow down age-related cognitive decline. Despite these promising reports, the AREDS2 study did not observe any favorable effect of L on cognitive performance. Given the amount of data supporting a beneficial role of L on brain tissue and the importance of treatments for cognitive disorders, randomized clinical trials on the possible therapeutic role of L should be planned.
Cardiovascular Health
Atherosclerosis and cardiovascular (CV) health partly depend on the activation of inflammatory cytokines; therefore, nutrients and drugs that favorably influence the cytokine cascade might effectively prevent CV damage. In guinea pigs fed L-enriched foods, a decrease in medium-size low-density lipoprotein (LDL) was observed, associated with reduced aortic cholesterol, reduced oxidized LDL, and minimally reduced intimal thickening compared to control animals. In mice, L significantly suppressed atherosclerotic plaque formation by increasing peroxisome proliferator-activated receptor (PPAR) expression. Furthermore, in rats L countered the oxidative stress induced by hyper-homocysteinemia, a minor CV risk factor, and prevented cardiac and renal injury induced by streptozotocin, in association with reduced oxidative stress markers. Interestingly, Rafi et al. observed that L significantly decreased the expression of inducible nitric oxide synthase in murine macrophage cells, a factor associated with inflammation and atherosclerosis. Many studies in humans have also provided evidence for a beneficial role of L supplementation, lowering the blood concentrations of inflammatory cytokines while favoring the secretion of anti-inflammatory cytokines. Indeed, L reduced circulating complement factors, complement system activation, and factors potentially harmful to CV health. Other factors influencing atherosclerosis and CV risk, such as lipid peroxidation and C-reactive protein serum concentrations, were shown to be reduced following L supplementation. Karppi et al. suggested that L plasma levels were inversely associated with circulating levels of oxidized LDL, whereas low circulating levels of L were associated with an increased risk of developing atrial fibrillation.
In patients with early atherosclerosis, L supplementation (20 mg/d) for 3 months was associated with a significant reduction in plasma LDL-cholesterol and triglycerides. An RCT conducted on 144 patients with subclinical atherosclerosis demonstrated that L significantly decreased the carotid intima-media thickness, which is an established CV risk factor. In mononuclear cells of patients with stable angina or acute coronary syndrome, L downregulated the expression of inflammatory cytokines such as interleukin-1β, interleukin-6, and tumor necrosis factor-α. Interestingly, a recent study demonstrated an independent positive association between L plasma levels and telomere length, a suggested marker of the aging process that has also been proposed as a predictor of myocardial infarction. In contrast, other studies have failed to demonstrate that L is effective in CV prevention. The AREDS2 study found that L supplementation was unable to reduce CV disease in patients with AMD. Leermakers et al. assessed median L intake in 13-month-old children by administering a food frequency questionnaire to their caregivers but, at 6 years of age, found no improvement in cardiometabolic health, suggesting a lack of correlation between L consumption and parameters of CV health. However, a recent meta-analysis by the same authors, which included studies enrolling more than 350,000 participants, concluded that L consumption is associated with better CV health. Although these data were obtained mostly from observational studies, the authors concluded that the risk of coronary disease and stroke is lower in the highest tertile of L intake compared to the lowest. In summary, there is some evidence that L is a CV protective factor; however, definitive data from RCTs are still missing.
Cancer Risk
Cancer is recognized as a multifactorial disease based not only on uncontrolled cellular growth but also on immune dysregulation and activation of inflammatory pathways. Indeed, a diet rich in fresh fruit and vegetables is generally considered a protective factor; as such, L might also play a role against cancer. Slattery et al. observed a 17% reduced risk of developing colon cancer in the highest vs. the lowest quintile of daily L consumption; an inverse correlation between L intake and the risk of pancreatic cancer was also reported; moreover, L also appeared to be protective against breast cancer and head-and-neck cancer. Interesting results have also come from several meta-analyses investigating the potential of L as an anti-cancer compound. According to Chen et al., L is associated with a decreased risk of non-Hodgkin lymphoma, whereas Ge et al. observed an inverse association between L intake and esophageal cancer risk. Interestingly, in vitro studies have reported that L has no cytotoxic effect on normal human colon cells but significantly reduces the survival rate of colon cancer cells. However, in this case as well, the results are conflicting, and findings that fail to support this association have also been reported. At present, there are too many uncertainties about the anti-cancer action of L, but the data are sufficient to pursue this research line.
Other Systemic and Metabolic Effects
Lutein might have an important role also in other diseases. In rats fed a high-fat diet, Qiu et al. observed that L significantly decreased circulating serum cholesterol levels as well as hepatic cholesterol and triglycerides.
These authors also found that L improved insulin sensitivity by acting on the expression of key factors involved in hepatic signaling, such as sirtuin-1 and PPAR. Another experiment carried out on mice demonstrated that L prevented arsenic-induced hepatotoxicity via reduced ROS production and lipid peroxidation. Cao et al. reported a dose-dependent inverse association between non-alcoholic fatty liver disease risk and serum carotenoid levels, including L; in particular, they observed a 44% lower risk in the highest vs. lowest quintile. Accordingly, L might also exert a significant protective action on the liver. Similarly, L might have a beneficial effect also on lung function, and a high L intake was associated with a significant improvement in forced vital capacity and forced expiratory volume. However, there is probably too little evidence to draw conclusions on this issue. Bone health is of great importance, especially in the elderly, as decreased bone mineral density may lead to osteoporosis and fractures. In mice, L significantly stimulated bone formation and inhibited bone resorption through its regulatory activity on NF-κB. Similar results were obtained in vitro, where L was effective in increasing bone formation, preventing bone loss, and decreasing the interleukin-1-dependent differentiation of osteoclasts. Observational studies have confirmed the possible beneficial effects of L intake on total hip bone mineral density in men, supporting a positive role in bone health. Interestingly, an RCT reported that L protected the skin against the damage induced by solar radiation; Palombo et al. reported beneficial effects of both topical and oral L administration on skin elasticity and hydration, and similar results were observed by Morganti et al. However, it seems premature to claim that L exerts a protective effect on the skin. Finally, a few data also exist on a possible role of L in pregnancy, but Lorenzoni et al. did not find any protective effect of L on oxidative stress in women with gestational diabetes. On the contrary, a case-control study by Cohen et al. suggested that higher plasma L concentrations were associated with a lower risk of preeclampsia. Despite these findings, data concerning a protective role of carotenoids against pregnancy diseases as well as pregnancy outcomes have been considered inconclusive even by a recent review on the effects of carotenoids during pregnancy.
Lutein Safety and Toxicity
With a well-balanced diet, L intake is sufficient and there is no need for supplementation, but in the presence of inadequate absorption or chronic disease this possibility needs careful consideration. Several studies have been carried out to establish reasonable upper limits of safety for daily supplementation and to describe possible side effects of chronic L supplementation. To date, no study has reported toxicity with either acute or chronic L supplementation. Studies performed both in animals and in vitro clearly demonstrated that the use of L is safe, as no mutagenic or teratogenic effect was observed. Nevertheless, mice lacking beta-carotene oxygenase 2 exhibited pathologic carotenoid accumulation and a significant increase in oxidative stress and mitochondrial dysfunction, suggesting that excessive carotenoid supplementation might lead to toxicity under certain conditions. Furthermore, neither epidemiological studies nor intervention studies have observed any toxic effect caused by L.
Nevertheless, based on current evidence, the Joint Expert Committee on Food Additives established an upper safety limit for daily L intake of 2 mg/kg, while the European Food Safety Authority (EFSA) was more cautious and indicated a limit of 1 mg/kg. This is consistent with the data obtained by Landrum et al. and Dagnelie et al., who demonstrated that the intake of L is safe up to 30 and 40 mg per day, respectively. The EFSA additionally set an upper limit for L-enriched milk for infants, establishing a maximum L supplementation of 250 µg/L. Zheng et al. demonstrated no interactions between L intake and cytochrome P450 enzyme activity; hence it is conceivable that L does not alter the metabolism of other exogenous or endogenous substances. Nevertheless, although L does not seem to be toxic, some side effects have been reported. Indeed, Olmedilla et al. reported that subjects receiving L supplementation of 15 mg/d for 20 weeks developed skin yellowing, an innocuous but unpleasant side effect. An observational study hypothesized that L might be associated with an increased risk of lung cancer, especially among smokers. In particular, an association was observed between chronic intake of supplements also containing L and an increased risk of lung cancer, mainly non-small cell lung cancer. However, an accurate survey performed by the EFSA concluded that the data were insufficient to consider L supplementation to be associated with such negative events. Similarly, the AREDS2 intervention study did not observe any increased incidence of lung cancer with L supplementation, suggesting that a health warning was unnecessary. A recent case report described the occurrence of crystalline maculopathy in an elderly woman on L supplementation, and this potential side effect reversed after discontinuation of L intake. However, the absence of similar data throughout recent decades makes this association unlikely as well. Therefore, based on the available data, it is reasonable to conclude that chronic L supplementation at the recommended dose of 10 mg/d, as in the AREDS2 study, is safe and not toxic.
Conclusions
Lutein qualifies as a powerful antioxidant, and many studies support its favorable effects on eye health. L also has beneficial effects on other tissues, especially the brain, where it has been associated with improved cognitive performance. Thus, not only a high L intake from a diet rich in fruit and vegetables but also L supplementation might be encouraged, particularly in the elderly and in individuals at high risk of different clinical conditions. However, there are still conflicting data that need to be elucidated by randomized clinical trials with large cohorts of the general population. Furthermore, most of the results available at present were obtained from clinical trials lasting less than 1 year, a time span that is probably not sufficient to show significant favorable effects; therefore, studies of longer duration are needed to better elucidate the possible favorable role of L in human health.
Hageman factor-dependent pathways: mechanism of initiation and bradykinin formation. The concentration of bradykinin in human plasma depends on its relative rates of formation and destruction. Bradykinin is destroyed by two enzymes: a plasma carboxypeptidase (anaphylatoxin inactivator) removes the COOH-terminal arginine to yield an inactive octapeptide, and a dipeptidase (identical to the angiotensin-converting enzyme) removes the COOH-terminal Phe-Arg to yield a fragment of seven amino acids that is further fragmented to an end product of five amino acids. Formation of bradykinin is initiated on binding of Hageman factor (HF) to certain negatively charged surfaces on which it autoactivates by an autodigestion mechanism. Initiation appears to depend on a trace of intrinsic activity present in HF that is at most 1/4000 that of activated HF (HFa); alternatively traces of circulating HFa could subserve the same function. HFa then converts coagulation factor XI to activated factor XI (XIa) and prekallikrein to kallikrein. Kallikrein then digests high-molecular-weight kininogen (HMW-kininogen) to form bradykinin. Prekallikrein and factor XI circulate bound to HMW-kininogen and surface binding of these complexes is mediated via this kininogen. In the absence of HMW-kininogen, activation of prekallikrein and factor XI is much diminished; thus HMW-kininogen has a cofactor function in kinin formation and coagulation. Once a trace of kallikrein is generated, a positive feedback reaction occurs in which kallikrein rapidly activates HF. This is much faster than the HF autoactivation rate; thus most HFa is formed by a kallikrein-dependent mechanism. HMW-kininogen is also therefore a cofactor for HF activation, but its effect on HF activation is indirect because it occurs via kallikrein formation. HFa can be further digested by kallikrein to form an active fragment (HFf), which is not surface bound and acts in the fluid phase. The activity of HFf on factor XI is minimal, but it is a potent prekallikrein activator and can therefore perpetuate fluid phase bradykinin formation until it is inactivated by the C1 inhibitor. In the absence of C1 inhibitor (hereditary angioedema) HFf may also interact with C1 and activate it enzymatically. The resultant augmented bradykinin formation and complement activation may account for the pathogenesis of the swelling characteristic of hereditary angioedema and the serologic changes observed during acute attacks.
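To make the kinetic claim above concrete (slow autoactivation supplies only an initial trace of HFa, after which the kallikrein-dependent feedback accounts for most HF activation), the following is a minimal simulation sketch in Python. The species, rate constants, time units, and initial values are hypothetical illustrative choices, not measurements from this work.

```python
import numpy as np  # not strictly required, kept for easy extension to arrays

# Hypothetical rate constants (arbitrary units) for the cascade described above.
k_auto = 1e-4   # slow surface autoactivation of HF
k_fb   = 5e-1   # kallikrein-mediated activation of HF (positive feedback)
k_pk   = 2e-1   # HFa-mediated conversion of prekallikrein to kallikrein
k_bk   = 1e-1   # kallikrein cleavage of HMW-kininogen releasing bradykinin
k_deg  = 3e-1   # bradykinin destruction (carboxypeptidase / converting enzyme)

# Initial amounts (arbitrary units): surface-bound HF, prekallikrein, HMW-kininogen.
HF, HFa, PK, K, HK, BK = 1.0, 0.0, 1.0, 0.0, 1.0, 0.0
hfa_from_auto = hfa_from_feedback = 0.0

dt, steps = 0.01, 20000  # simple explicit Euler integration
for _ in range(steps):
    v_auto = k_auto * HF       # HF -> HFa by autoactivation
    v_fb   = k_fb * K * HF     # HF -> HFa driven by kallikrein
    v_pk   = k_pk * HFa * PK   # prekallikrein -> kallikrein
    v_bk   = k_bk * K * HK     # HMW-kininogen -> bradykinin
    v_deg  = k_deg * BK        # bradykinin destruction

    HF  -= (v_auto + v_fb) * dt
    HFa += (v_auto + v_fb) * dt
    PK  -= v_pk * dt
    K   += v_pk * dt
    HK  -= v_bk * dt
    BK  += (v_bk - v_deg) * dt

    hfa_from_auto     += v_auto * dt
    hfa_from_feedback += v_fb * dt

print(f"HFa formed by autoactivation:        {hfa_from_auto:.3f}")
print(f"HFa formed via kallikrein feedback:  {hfa_from_feedback:.3f}")
print(f"Bradykinin remaining at end of run:  {BK:.3f}")
```

Run as written, the feedback term accounts for the large majority of HFa formed, which is the qualitative point of the positive feedback loop; the absolute numbers have no physiological meaning under these invented constants.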
Multicenter phase II study of pemetrexed and oxaliplatin as first-line therapy in advanced gastric cancer. 14045 Background: We have previously reported that single-agent pemetrexed is active in metastatic gastric cancer. On the basis of the potential synergism of pemetrexed and oxaliplatin, we explored the combination in patients with locally advanced/metastatic gastric carcinoma. METHODS The primary objective was the activity of the combination. Eligible patients had to have ≥1 measurable lesion according to RECIST. Pemetrexed 500 mg/m2 was given intravenously over 10 minutes, and oxaliplatin 120 mg/m2 was given over 2 hours; both drugs were given on day 1 of a 21-day cycle. Patients were to receive ≥6 (maximum of 8) cycles unless disease progression occurred. Vitamin supplementation was given, as well as dexamethasone. A total of 43 patients were planned in a two-stage design with 13 patients in the first stage. An interim analysis was planned at the end of the first stage, so the trial could be stopped if ≤3 responses were observed. RESULTS Between May 2004 and January 2005, 13 patients (6 females) entered the study. Median age was 52 years (range, 27-75). One patient (7.7%) had locally advanced disease, and 5 patients (38.5%) retained their primary gastric cancer. Main disease sites included lymph nodes (100%) and liver (23.1%). A total of 60 cycles were administered (median 6; range, 2-6). All 13 patients were evaluable for efficacy, with 3 complete and 2 partial responses (ORR 38.5%; 95% CI, 13.9%-68.4%). Stable disease occurred in 3 patients (23.1%). G3 toxicities were neutropenia (30.8%) and vomiting, hepatic toxicity, and leucopenia (7.7% each); no G4 toxicities were found. CONCLUSIONS This interim analysis suggests that the activity and tolerability of the combination in advanced gastric cancer are very promising. Study accrual ended in October 2005, and final results will be presented at the meeting. No significant financial relationships to disclose.
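The abstract reports an ORR of 38.5% (95% CI, 13.9%-68.4%) for 5 responses among 13 evaluable patients but does not state how the interval was computed; an exact (Clopper-Pearson) binomial interval reproduces the reported figures, as the sketch below shows. The code is purely illustrative and is not part of the study protocol.

```python
from scipy.stats import beta

k, n = 5, 13   # responders (3 CR + 2 PR) among evaluable patients
alpha = 0.05

orr = k / n                                    # 0.385 -> 38.5%
lower = beta.ppf(alpha / 2, k, n - k + 1)      # ~0.139 -> 13.9%
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)  # ~0.684 -> 68.4%

print(f"ORR {orr:.1%} (95% CI, {lower:.1%}-{upper:.1%})")
```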
Do you know a mom who goes above and beyond, doesn’t get the recognition she deserves and is in need of some pampering? Nominate her for Mom of the Year. FLORIDA TODAY and Space Coast Parent will celebrate one truly amazing Brevard County mom this Mother’s Day. We need your help finding her. Tell us about a special mother in your life and what makes her great. Is it her kindness? Her bravery? Her selflessness? Her keen fashion sense? Nominations can be for your own mom or someone else’s. She could win a fabulous prize package from FLORIDA TODAY and Space Coast Parent. The week before Mother's Day, FLORIDA TODAY and Space Coast Parent newsroom staff will surprise this mom with a gift. The winning mom will receive a bouquet of flowers and a Spa Showcase afternoon at Essentials Medispa and Salon, including an Ultimate Unwinder Massage, Essentials Signature Facial, Farmhouse Fresh mani/pedi, shampoo & style, and catered spa lunch. (Total ARV: $325). The winning nominator will receive a $50 VISA gift card. To enter, complete the nomination form here. Nominations must be accepted by 11:59 p.m. on April 26. Nominees must reside in Brevard County. Limited to one nomination per nominator. You must be a current Insider to be eligible to nominate a mom for the contest. An Insider is a FLORIDA TODAY subscriber who has an activated account. To become an Insider, click here. Show mom some love this Mother’s Day and nominate her.
For many individuals, the ease with which footwear can be put on and taken off is important. As a result, some individuals avoid certain footwear if it is difficult to put on or take off. Cowboy boots and Wellington-type boots are examples of footwear that is sometimes difficult to put on or take off. If the shaft (sometimes referred to as the quarter) of the boot has a large diameter, it is easier to get in and out of the boot, but the shaft may then be so large that it is difficult to have a pant leg fit over the shaft. Alternately, if the shaft has a narrower diameter, a pant leg can fit over the shaft, but it is difficult to put the boot on and take it off. In particular, with a standard or narrow diameter shaft, it may be difficult for an individual to manipulate the foot through the “turn” between the shaft and the shell or base of the boot. Zippers and other features may be added to such footwear, but these features frequently compromise aesthetics and materially increase manufacturing expenses. Therefore, it would be desirable to provide footwear with an expandable entry and exit system. Such a system should facilitate entry and exit from the footwear, while maintaining desirable aesthetics and avoiding expensive manufacturing processes.
Christian Democratic Party (Chile)
History
The origins of the party go back to the 1930s, when the Conservative Party became split between traditionalist and social-Christian sectors. In 1935, the social-Christians split from the Conservative Party to form the Falange Nacional (National Phalanx), a more socially oriented and centrist group. The Falange Nacional showed its centrist policies by supporting the leftist Juan Antonio Ríos (Radical Party of Chile) in the 1942 presidential elections but the Conservative Eduardo Cruz-Coke in the 1946 elections. Despite the creation of the Falange Nacional, many social-Christians remained in the Conservative Party, which in 1949 split into the Social Christian Conservative Party and the Traditionalist Conservative Party. On July 28, 1957, primarily to back the presidential candidacy of Eduardo Frei Montalva, the Falange Nacional, the Social Christian Conservative Party, and other like-minded groups joined to form the Christian Democratic Party. Frei lost the elections but presented his candidacy again in 1964, this time also supported by the right-wing parties. That year, Frei triumphed with 56% of the vote. Despite right-wing backing for his candidacy, Frei declared his planned social revolution would not be hampered by this support. In 1970, Radomiro Tomic, leader of the left-wing faction of the party, was nominated for the presidency, but lost to the socialist Salvador Allende. The Christian Democrat vote was crucial in the congressional confirmation of Allende's election, since he had received less than the necessary 50%. Although the Christian Democratic Party voted to confirm Allende's election, it declared itself partly in opposition because of Allende's economic policy. By 1973, Allende had lost the support of most Christian Democrats (except for Tomic's left-wing faction), some of whom even began calling for the military to step in. At the time of Pinochet's coup, most Christian Democrats applauded the military takeover, believing that the government would quickly be turned over to them by the military. Once it became clear that Pinochet had no intention of relinquishing power, the Christian Democrats went into opposition. During the 1980 plebiscite, in which Chileans voted to extend Pinochet's term for eight more years, Eduardo Frei Montalva led the only authorized opposition rally. When political parties were legalized again, the Christian Democratic Party, together with most left-wing parties, agreed to form the Coalition of Parties for the No, which opposed Pinochet's reelection in the 1988 plebiscite. This coalition later became the Coalition of Parties for Democracy once Pinochet stepped down from power. During the first years of the return to democracy, the Christian Democrats enjoyed wide popular support. Presidents Patricio Aylwin and Eduardo Frei Ruiz-Tagle were both from the party, and it was also the largest party in Congress. However, the Christian Democrat Andrés Zaldívar lost the Coalition of Parties for Democracy's 1999 primary to the socialist Ricardo Lagos. In the parliamentary elections of 2005, the Christian Democrats lost eight seats in Congress, and the right-wing Independent Democratic Union became the largest party in the legislative body. In recent years, the Christian Democrats have favored allowing abortion in three cases (when a pregnancy threatens the mother's life, when the fetus has little chance of survival, and when the pregnancy is a result of rape), but not in any other instances, and oppose on-demand abortion.
Also, the Christian Democrats oppose same-sex marriage. The Christian Democrats left the Nueva Mayoría coalition on 29 April 2017 and nominated the current party president, Carolina Goic, as their candidate for the 2017 presidential election. The Nueva Mayoría has struggled to remain united as differences have opened up within the coalition over approaches to a government reform drive, including changes to the labor code and attempted reform of Chile’s strict abortion laws.
Taiwan has donated 200 tonnes of rice to Kiribati to help relieve a food shortage. Taiwan's Minister of Foreign Affairs says the rice, handed over to the Kiribati government on Christmas Eve, is intended for distribution to hospitals, clinics and public boarding schools. The Government says it demonstrates Taiwan's care and friendship for the people of Kiribati. Food costs have been soaring this year in Kiribati, which is forced to import much of its food. Kiribati is one of six nations in the Pacific that grant diplomatic recognition to Taiwan.
ROBOCADEMY: A European Initial Training Network for underwater robotics
In 2014, funded by the European Commission through the Marie Curie Programme, ten leading European research institutes and companies in underwater robotics formed the ROBOCADEMY Initial Training Network (ITN). The objective of the network is to educate young researchers from Europe and abroad in the development and application of underwater robots. The ROBOCADEMY training programme comprises scientific as well as soft-skills courses. Hands-on training is provided through integration into interdisciplinary project teams and secondments to industry. In their PhD research projects, the ROBOCADEMY fellows develop key enabling technologies for the scientific action lines of disturbance rejection, perception, and autonomy. The overall scientific goal of the project is to contribute to the next generation of resilient and robust Autonomous Underwater Vehicles (AUVs). This paper provides a brief introduction to and overview of the concept of the ROBOCADEMY training network and the scientific research topics addressed.
Bay City News - SAN FRANCISCO (BCN) -- A 99-year-old woman faced with eviction from her Lower Haight apartment in San Francisco could remain in her home under a tentative deal reached in court today, but demands for attorney's fees and an apology remain sticking points. Iris Canada has lived in the apartment at 670 Page St. since the 1950s and was granted a lifetime lease in 2005, allowing her to pay rent of $700 per month for as long as she lives. However, attorneys for the property owners allege that she has been living with family members since 2012 and has neglected the apartment for so long that utilities were shut off and it became uninhabitable. In court filings, Canada has said that she has no place to go if she moves out, will be unable to afford a new home in San Francisco, and needs a wheelchair or walker to move since suffering a stroke. "If I've been away from the unit for extended periods of time, it's because I was at the hospital or visiting with family, but 670 Page St. is my home," Canada said. Superior Court Judge James Patterson today issued a tentative ruling in the legal battle over Canada's eviction that would allow her to continue living in the apartment, but it included a provision requiring her to pay the landlord's attorney's fees, an amount that could exceed $100,000. Attorneys then negotiated a possible settlement in which the property owners would agree to waive the legal fees in return for Canada's signature on paperwork allowing the building to convert to condominiums -- and an apology for what the landlord's attorneys say has been several years of aggressive legal tactics on her behalf. It was unclear today, however, whether Canada would agree to the deal. Family members present in court appeared to be opposed, and attorneys asked for a week's continuance to negotiate. Attorney Steven Adair MacDonald, who stepped in to represent Canada today, said he was confident an agreement could be reached by Friday. "There's a lot of animosity in this case, a lot of water under the bridge," he said. Canada's case gained widespread media attention last week after local tenants advocates and Board of Supervisors president London Breed rallied to her cause. Attorney Andrew Zacks, representing property owners Stephen Owens, Peter Owens and Carolyne Radishe, noted that Peter Owens, a former director of housing policies in Burlington, Vermont, recently resigned from his job there because of publicity over the case, which Zacks said has cost Owens money and harmed his reputation. Zacks said outside of court that the case has dragged on for several years and has involved multiple changes of lawyers representing Canada and frequent interference by her family members. "This entire matter was brought here by the exploitation of the legal system by Ms. Canada's family," he said. Tommi Avicolli Mecca, director of counseling programs at the Housing Rights Committee of San Francisco, said after the hearing that he viewed the tentative ruling and proposed deal as victories because they would allow Canada to stay. "I think people have to understand if we hadn't put this kind of pressure on them in the press, we wouldn't be standing here, she would have been evicted," Mecca said. "Evicting a 99-year-old woman is immoral and unjust no matter how they try to spin it," Mecca said. The case is scheduled to return to court on April 27.
Significance of endotoxaemia in experimental "galactosamine-hepatitis" in the rat. The course of galactosamine hepatitis induced by 1.0 g/kg i.p. injected galactosamine (GalN) was investigated in a sequential study in normal rats, in colectomized rats, and in rats resistant to both exogenous and endogenous endotoxin. Clinical symptoms of GalN hepatitis such as pyrogen reaction, disseminated intravascular coagulation, arterial hypotension, and hypoglycaemia correlated significantly with the development of endotoxaemia, which was detected by means of the limulus gelation test (L.G.T.). GalN refractoriness was found after colectomy, a situation in which gram-negative bacteria and their endotoxins were eliminated. GalN refractoriness was also observed in the case of endotoxin resistance. It is concluded that endotoxins contribute significantly to the pathogenesis of "GalN hepatitis" and its clinical symptoms.
Field of the Invention
The present invention relates to a yellow toner to be used in a recording method such as an electrophotographic method, an electrostatic recording method, a magnetic recording method, or a toner jet method, and to a production method therefor.
Description of the Related Art
In recent years, there has been an increasing demand for an improvement in image quality along with the increasing popularity of color images. In a digital full color copying machine or printer, a color original image is subjected to color separation with filters for blue, green, and red colors, and latent images corresponding to the original image are then developed with developers for yellow, magenta, cyan, and black colors. Therefore, the coloring power of the colorant in the developer for each color has a large influence on image quality. It is important to reproduce Japan Color in the printing industry or to approximate Adobe RGB used in an RGB workflow. In order to secure such a color space, it is effective to use a dyestuff having a wide color gamut. As typical examples of a yellow colorant for toner, there are known compounds each having, for example, an isoindolinone, quinophthalone, isoindoline, anthraquinone, or azo skeleton. Of those, as a yellow dyestuff, there are known some examples each using a pyridone azo skeleton, such as C.I. Solvent Yellow 162, which has high transparency and coloring power and is excellent in light fastness (see Japanese Patent Application Laid-Open No. H07-140716 and Japanese Patent Application Laid-Open No. H11-282208). In addition, a pyridone azo compound having a phenyl group that is disubstituted or more highly substituted is known to be used for a color filter (see International Publication No. WO2012/039361).
In certain types of equipment, such as earthmoving vehicles, numerous vehicle components are located beneath the operator's cab. To perform maintenance and tests upon those components, it is desirable to gain access to the area beneath the cab. Thus, U.S. Pat. No. 4,053,178 to York et al, issued Oct. 11, 1977, has disclosed a structure for pivotally mounting the operator's cab on the vehicle frame so that the cab may be relatively easily moved to gain access to the components below the cab. Some prior vehicles used cables on all of the controls except the brakes. The motion required to operate the brakes dictated the use of a solid linkage, which linkage became non-functional upon tilting the cab. In order to apply the brakes with the cab tilted, it was necessary to disconnect the linkage whereupon a spring applied the brakes. Several patents, including U.S. Pat. No. 3,259,203 to Ryskamp issued July 5, 1966, U.S. Pat. No. 3,332,522 to Dence issued July 25, 1967, and U.S. Pat. No. 3,892,294 to Nieminski issued July 1, 1975, have disclosed mechanisms which automatically apply a vehicle's brakes when the operator's seat is tilted, such devices being intended to prevent the vehicle from moving when the operator is out of the cab. However, these mechanisms involve use of separate brake actuating systems from those used during vehicle operation, thus requiring costly installation of two separate actuating systems. Further, these mechanisms would not operate automatically upon tilting the entire operator's cab. The present invention is directed to overcoming one or more problems found to exist in the above equipment.
Standardization (external and internal) of HPLC assay for plasma homocysteine. Measurement of plasma homocysteine may be of value in several clinical conditions including homocystinuria, atherosclerosis, thrombophilia, and folate/vitamin B12 deficiency. The increasing interest in measuring total homocysteine in plasma has led to the development of several different methods. A widely used technique for measuring total plasma homocysteine is reversed-phase HPLC with fluorescence detection after derivatization of plasma thiols with ammonium 7-fluorobenzo-2-oxa-1,3-diazole-4-sulfonate (SBD-F). Most published methods use external calibration alone for quantitation of homocysteine because of the difficulty in selecting an internal standard. We have modified this method by adding cysteamine hydrochloride as an internal standard to the plasma or homocysteine calibrator to compensate for variations in thiol derivatization and sample injection procedures. HPLC was carried out by an isocratic system with fluorescence detection (SFM 25 spectrofluorometer), autosampler (SA 360), and HPLC pump supplied by Kontron Instruments. Chemicals were obtained from Sigma. The method has been adapted from that of Ubbink et al. on the basis of the chemical description provided by Araki and Sako. The plasma or homocysteine calibrator (150 µL) was incubated with 100 mL/L tri-n-butylphosphine in dimethylformamide (15 µL) for 30 min at 4 °C to reduce and release protein-bound thiols. Deproteinization was achieved by the addition of 100 g/L trichloroacetic acid (150 µL) and centrifugation. An aliquot of the supernatant (50 µL) was mixed with sodium hydroxide (10 µL, 1.55 mol/L), borate buffer (125 µL, 0.125 mol/L, pH 9.5, containing 4 mmol/L EDTA), and SBD-F (50 µL, 1 g/L) and incubated for 60 min at 60 °C. The SBD-F derivative from the supernatant (20-µL aliquot) was eluted isocratically from the Spherisorb ODS2 analytical column (Jones Chromatography). The mobile phase was 0.1 mol/L KH2PO4, pH 2.0, containing 40
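As a rough illustration of the internal-standard quantitation described above, in which homocysteine is read from the homocysteine/cysteamine peak-area ratio against calibrators, here is a minimal sketch. The calibrator concentrations, peak areas, and linear calibration model are invented for illustration and are not values from the assay itself.

```python
import numpy as np

# Hypothetical calibrator data: homocysteine concentration (umol/L) versus the
# homocysteine / cysteamine (internal standard) peak-area ratio.
cal_conc  = np.array([5.0, 10.0, 20.0, 40.0])    # umol/L (illustrative)
cal_ratio = np.array([0.21, 0.43, 0.88, 1.74])   # area ratios (illustrative)

# Least-squares line through the calibration points: ratio = slope*conc + intercept.
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def homocysteine_conc(area_hcy, area_istd):
    """Convert a sample's peak areas to a concentration via the area ratio."""
    ratio = area_hcy / area_istd
    return (ratio - intercept) / slope

# Example unknown: peak areas read from a chromatogram (illustrative numbers).
print(f"{homocysteine_conc(area_hcy=1520.0, area_istd=2980.0):.1f} umol/L")
```

Because both peaks in a given injection share the same derivatization and injection volume, the ratio cancels much of that run-to-run variation, which is the rationale for adding cysteamine as the internal standard.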
The bus service between Poonch in Jammu and Kashmir and Rawalakot in Pakistani-administered Kashmir resumed on Monday after being suspended for a week. Officials at the Chakan Da Bagh trade and travel facilitating centre in Poonch district said the cross-LoC (Line of Control) bus service was suspended last week because of ceasefire violations by the Pakistan Army. The bus service was started for divided families living on either side of the LoC in divided Kashmir as a major confidence building measure between India and Pakistan in June 2006.
Transport properties of stage-1 graphite intercalation compounds Stage-1 graphite intercalation compounds approximate quasi-two-dimensional (2D) random spin systems with competing ferromagnetic and antiferromagnetic intraplanar exchange interactions. The temperature dependence of the in-plane electrical resistivity of these compounds has been measured near critical temperatures. The magnetic resistivity consists of the long-range spin-order part and the spin-fluctuation part. For the long-range spin-order part is dominant: the temperature dependence of is described by a smeared power law with an exponent, where is the critical exponent of staggered magnetization. For the spin-fluctuation part becomes larger than. For no appreciable magnetic resistivity is observed. For c = 1 the derivative shows a small peak at around 67 K due to the growth of short-range spin order which is characteristic of the 2D Heisenberg antiferromagnet. The critical behaviour of the in-plane resistivity can be explained in terms of a model based on π–d exchange interactions between π-electrons in the graphite layers and magnetic spins in the intercalate layers. The π-electrons are scattered by spins of a virtual antiferromagnetic in-plane spin configuration arising from the superposition of two ferromagnetic in-plane structures with spin directions antiparallel to each other. The π–d exchange interactions of these compounds are also discussed.
The challenge of reducing prehospital delay in patients with acute coronary syndrome. The delivery of definitive treatment for acute coronary syndrome (ACS) should begin as soon as possible after symptom onset to decrease associated morbidity and mortality.1,2 Every 30 minutes of delay results in a 7.5% increased relative risk for 1-year mortality.2 Unfortunately, the time between the onset of cardiac symptoms and admission to the hospital is far beyond optimal. Median times range from 1.5 to 6.0 hours,3 with the most recent times reported to be slightly more than 2 hours.4,5 A major limitation to achieving timely treatment is related to the patient's indecision and reluctance to seek treatment.6 To date, efforts to reduce prehospital delay have shown limited success, despite 2 decades of research and multiple randomized, controlled trials of educational strategies directed toward the general public, healthcare professionals, and patients with ischemic heart disease.48
Article see p 148
In early research in this area, investigators9,10 identified the sociodemographic and clinical characteristics that were associated with prolonged prehospital delay. Knowing that older individuals, women, or patients with a history of angina are more likely to delay does not suggest appropriate interventions to reduce delay time, because none of these characteristics are amenable to change. In this issue of Circulation: Cardiovascular Quality and Outcomes, Sullivan et al11 address an alternate understanding of prehospital ACS care delay. Along with examining delay to treatment in terms of patient sociodemographic and clinical characteristics, they tested a developmental model of attachment theory to characterize patterns of interpersonal functioning. More specifically, they asked patients to answer a series of questions
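As a rough worked example of what the reported 7.5% increase in relative risk per 30 minutes of delay implies, the short calculation below compounds the increase multiplicatively across half-hour increments; whether the increase compounds in this way is an illustrative assumption, not something stated in the editorial.

```python
# Relative 1-year mortality risk implied by prehospital delay, assuming the
# reported 7.5% increase per 30 minutes compounds multiplicatively (an
# illustrative assumption, not stated in the editorial).
per_half_hour = 1.075
for hours in (0.5, 1.0, 2.0, 6.0):
    increments = hours / 0.5
    relative_risk = per_half_hour ** increments
    print(f"{hours:4.1f} h delay -> relative risk ~{relative_risk:.2f}")
```

At the roughly 2-hour median delay cited above, this works out to about a one-third higher relative risk than with no delay; a simple additive reading (4 x 7.5%) gives about 30%.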
Factors that Contribute to Failed Retention in Former Athletic Trainers BACKGROUND: Athletic trainer retention has been a topic of concern for 20 years, with one study indicating a drastic decline within ten years of becoming certified. Burnout, work-life balance, role strain, socialization, and salary, in addition to other constructs, are potential reasons for a lack of retention. An assessment of individuals who have left the athletic training profession is lacking; therefore, the purpose of this study was to discover the reasons why athletic trainers leave the profession. DESIGN: Web-based survey. Qualtrics® was used to survey 1000 individuals who let their athletic training certification lapse within 5 years of the study. SUBJECTS: Of the 198 respondents (response rate=23%), the majority were female (n=119, 60%), married (n=149, 75%), and had children (n=127, 64%). MAIN OUTCOME MEASURES: The survey included demographic information and 5-point Likert matrices of factors that contribute to retention. The data were analyzed for demographic information and subjected to factor analysis. RESULTS: The data suggest that items of burnout, including clinical depression, role strain, ethical and social strain, feelings of sadness, hopelessness, and decreasing sleep, consistently contribute to leaving the athletic training profession (44.8% of variability). Employment factors accounted for 10.5% of the variability, with variables such as travel demands, work hours, role overload, staffing, work environment, and lack of administrative support contributing to leaving the profession. Personal characteristics (4.5%) and personal fit (9.7%) also contributed to variability in the data. CONCLUSIONS: Retention appears to be driven by some extrinsic (burnout and employment) and some intrinsic (personal characteristics and fit) variables. Additional inquiry into the personal factors for persistence may help to better identify those who are more likely to stay in athletic training.
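The "percent of variability" figures reported above are the kind of output produced by a factor analysis of Likert-item responses. The sketch below shows one way such proportions can be computed (sum of squared loadings per factor over the number of standardized items); the data, item count, and four-factor choice are placeholders rather than the study's actual instrument or results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# X: respondents x Likert items (1-5); here random placeholder data, standardized.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(198, 20)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Extract four factors (the labels "burnout", "employment", "personal
# characteristics", and "fit" come from interpreting loadings, not the algorithm).
fa = FactorAnalysis(n_components=4, random_state=0).fit(X)

# Proportion of total variance attributable to each factor: sum of squared
# loadings for that factor divided by the number of standardized items.
loadings = fa.components_            # shape: (n_factors, n_items)
ss_loadings = (loadings ** 2).sum(axis=1)
print(ss_loadings / X.shape[1])      # one proportion per extracted factor
```

With real survey responses, the factor with the largest proportion would correspond to the dominant construct, analogous to the burnout factor accounting for 44.8% of the variability here.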
Members of the Stanford community came together to remember the lives of nine Israeli victims of recent terrorist attacks at a Sunday evening vigil hosted by Cardinal for Israel (CFI). The vigil occurred only days after students had gathered in White Plaza in silent protest of what they have called an Israeli occupation of Palestine. Violence perpetrated by Palestinians against Israelis has been growing since Rosh Hashanah, the Jewish New Year, according to the Wall Street Journal. Attacks have left nine Israelis dead and over 60 seriously injured. The first attack occurred on Sept. 14, when Palestinians threw rocks at a driver’s car in East Jerusalem, causing him to fatally crash. The driver, Alexander Levlovich, was 64 years old. As the violence has escalated, over 40 Palestinians have been killed and 1,770 injured by live fire or rubber bullets, according to the Palestinian Health Ministry, some of whom perpetrated these acts of terror against Israeli citizens. However, many of these Palestinian civilians were not the attackers and were killed in clashes with Israeli troops in the West Bank and Gaza. These clashes are the result of tightened security by the Israeli Defense Force (IDF) in response to the terrorist attacks. Militant groups, such as Hamas, have praised the attacks against Israelis and called for a third intifada, an organized uprising by Palestinians against Israel. Palestinian President Mahmoud Abbas has not supported further escalation. The reaction on Stanford’s campus to the increased violence between Israelis and Palestinians has been varied, representing many sides of the complex situation. This past Friday, a group of students gathered in White Plaza with black tape over their mouths, bearing signs condemning what they believe is an Israeli occupation of Palestine. Fatima Zehra ’17, co-president of Students for Justice in Palestine (SJP), explained that Friday’s protest was a preemptive response to Sunday’s vigil. CFI president Miriam Pollock ’16 responded to Zehra’s comment in a message to The Daily. Sunday’s vigil focused primarily on the murders of Israelis over the past month and the terrorist attacks committed by Palestinians against Israelis. Pollock emphasized that the objective of the vigil was to provide members of the Stanford community affected by the terrorist attacks with a space to mourn those who have died and an opportunity to support Israel during this time of increased violence and terror. During the vigil, community members gathered around the stage in almost complete silence. An older man stood in the back of the crowd holding the Israeli flag, and another man wore an Israeli flag over his shoulders like a prayer shawl. On the stage, four student speakers stood in front of candles arranged to form a Star of David. Kaplan-Lipkin’s words captured why many chose to come to the vigil. Avera herself is Israeli and has experienced a terrorist attack in Israel. As the Israel Fellow, she works at Hillel to coordinate events about Israel. The vigil closed with the singing of a song of peace and the lighting of candles. CFI plans to continue to spread awareness of the violence in Israel through a social media campaign using #ItCouldBeMe, along with student groups at UCLA, UC Berkeley and other California schools. “The basic idea behind it is that it could have been any one of us that are in charge of hosting this event that had been killed,” Pollock said in reference to #ItCouldBeMe. 
However, the entirety of the Jewish community at Stanford does not necessarily agree with the position of Cardinal for Israel. In the coming weeks, Jewish community members will be hosting a memorial for lives lost in both Israel and Palestine. “Help us honor Palestinian and Israeli lives recently lost in Jerusalem with interfaith prayer and a call for a just solution to the Israeli-Palestinian conflict,” read flyers for the event. Matthew Cohen ’18, a member of the Jewish community, chose not to attend Sunday’s vigil, hosted by CFI, but plans to attend the memorial gathering. The current escalation of violence in Israel has deeply affected many in the Stanford community. Some identify with the Israeli side, themselves having experienced terror attacks in Israel. Others identify with the Palestinian struggle of life under Israeli occupation. In his final words to the crowd at Sunday’s vigil, Kaplan-Lipkin captured this complexity and perhaps the only common ground at the moment in this conflict. A previous version of the article was titled “Cardinal for Israel hosts vigil for Israeli victims of violence in Palestine”; however, the vigil commemorated victims in Israel. The Daily regrets this error.
Recent optical transmission systems have further increased transmission speeds, and transmission at 10 Gb/s has been put into practical use. In addition, optical transmission systems operating at a transmission speed of 40 Gb/s are under development. Further, systems have been developed in which 1000-multiplexed optical signals, each with a bit rate of 10 Gb/s, are combined by a wavelength multiplexing technique, and the resulting wavelength-multiplexed signal is collectively amplified and transmitted. As transmission speed increases, the chromatic dispersion of the optical fiber increasingly degrades the waveform of the optical signal and therefore becomes one of the causes restricting the transmittable distance. For this reason, a dispersion compensation fiber that compensates for such chromatic dispersion makes long-distance transmission over several hundred kilometers possible. When the transmission speed is further increased to approximately 40 Gb/s, chromatic dispersion has an even larger impact, so that long-distance transmission over several hundred kilometers requires exact compensation for the chromatic dispersion of the optical fiber; at the same time, variations in the chromatic dispersion characteristics caused by changes in the temperature of the optical fiber, and variations in chromatic dispersion caused by polarization mode dispersion, can no longer be neglected. Optical transmission at a speed such as 40 Gb/s, higher than 10 Gb/s, tightens the tolerance for chromatic dispersion, which in turn imposes strict conditions on the amount of chromatic dispersion remaining after compensation. Therefore, dispersion caused by variations in fiber temperature and by polarization mode dispersion has to be compensated for while the apparatus is in operation in order to reduce the residual chromatic dispersion. Various approaches have conventionally been proposed for compensating chromatic dispersion. For example, FIG. 7 is a diagram illustrating a conventional technique including a dispersion compensator. The drawing illustrates an optical receiver 100, a dispersion compensator 101, an optical filter 102, an optical-to-electrical converter (O/E) 103, a data identifying section 104 with clock regenerating means, an error detector 105, an error rate calculator 106, an error-rate variation amount calculator 107, a compensation amount calculator 108, and a dispersion compensating unit 111 placed at a distance. The optical receiver 100 receives an optical signal transmitted through an optical fiber; the dispersion compensator 101 compensates for the chromatic dispersion of the received optical signal; the optical filter 102 extracts an optical signal containing the signal components; the optical-to-electrical converter 103 converts the extracted optical signal into an electric signal; the data identifying section 104 extracts the clock from the electric signal, identifies the data on the basis of the extracted clock, and inputs the data into the error detector 105; and the error detector 105 carries out error detection or error correction and outputs the data, serving as a received signal, to a subsequent apparatus which is not illustrated in the drawing.
The error rate calculator 106 calculates an error rate on the basis of an error detection signal from the error detector 105, and the error-rate variation amount calculator 107 calculates the variation in the error rate and inputs it into the compensation amount calculator 108, which calculates an amount of dispersion control and controls the dispersion compensator 101. Specifically, the compensation amount calculator 108 adjusts the amount of dispersion control used in the dispersion compensator 101 such that the error rate does not increase. At the initial setting stage (until communication has been established between a non-illustrated optical transmitter and the optical receiver 100), the optical receiver 100 searches, on the basis of signal quality information such as the error rate calculated from an optical signal transmitted through the optical transmission line serving as the connection path, for an amount of dispersion control to be used in the dispersion compensator 101, and sets the found amount into the dispersion compensator 101. In detail, the optical receiver 100 sweeps the variable range of the amount of dispersion control of the dispersion compensator 101, successively measures the error rate associated with each amount of dispersion control, and finally sets the amount of dispersion control associated with the best error rate into the dispersion compensator 101. Here, the dispersion compensator 101 is exemplified by a configuration in which the amount of dispersion control is adjusted by optically or mechanically varying the length of a dispersion compensation fiber, or by a configuration in which, exploiting the fact that the amount of chromatic dispersion varies with temperature, a heater is provided and supplied with electricity corresponding to the desired amount of dispersion compensation. Either configuration of a dispersion compensator requires tens of milliseconds or more to respond to, for example, a single control step. Accordingly, sweeping the entire compensatable range requires, for example, as long as tens of seconds or more. Therefore, when an amount of dispersion control is searched for by such a method, the sweep performed by the dispersion compensator 101 usually takes a relatively long time, and the optical receiver 100 requires a correspondingly long time to find an amount of dispersion control that secures the desired signal quality. In other words, reducing the time required for the initial setting requires reducing the time required to search for the amount of dispersion control to be set. Patent Reference 1 sweeps the variable range at the initial setting to calculate a bit error rate and concurrently carries out synchronization detection such as frame synchronization. The disclosed configuration speeds up the search for an amount of dispersion control by making the sweep step width larger in the range in which synchronization cannot be detected than in the range in which synchronization can be detected. The technique disclosed in Patent Reference 2 can also be listed among conventional techniques related to the present invention. [Patent Reference 1] Japanese Patent Application Laid-Open (KOKAI) No. 2004-236097 [Patent Reference 2] Japanese Patent Application Laid-Open (KOKAI) No. 2002-208892.
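The sweep-and-measure search described above can be sketched as follows. This is only an illustration of the procedure, not an implementation of the patent; `set_dispersion` and `measure_ber` stand in for hardware or driver calls that are not part of this document, and the ps/nm parameterization is an assumption.

```python
# Illustrative sketch of the sweep search: step the dispersion compensator through
# its variable range, measure the bit error rate (BER) at each setting, and keep
# the setting with the lowest BER.  `set_dispersion` and `measure_ber` are
# placeholders for hardware interfaces.
from typing import Callable

def sweep_search(set_dispersion: Callable[[float], None],
                 measure_ber: Callable[[], float],
                 low_ps_nm: float, high_ps_nm: float, step_ps_nm: float) -> float:
    """Return the compensation amount (ps/nm) giving the lowest measured BER."""
    best_amount, best_ber = low_ps_nm, float("inf")
    amount = low_ps_nm
    while amount <= high_ps_nm:
        set_dispersion(amount)      # each step may take tens of milliseconds to settle
        ber = measure_ber()         # BER estimated from the error detector output
        if ber < best_ber:
            best_amount, best_ber = amount, ber
        amount += step_ps_nm
    set_dispersion(best_amount)     # leave the compensator at the best setting found
    return best_amount
```

Because each setting change takes tens of milliseconds and the full range may span many steps, a complete sweep of this kind can take tens of seconds, which is exactly the slowness the passage above is concerned with.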
However, even the technique disclosed in Patent Reference 1 requires sweeping at least from the lower limit to the upper limit of the variable range of the dispersion compensator. It is therefore difficult for the technique to sufficiently increase the speed of the search for an amount of dispersion control. Moreover, if the compensation at the amount of dispersion control initially set in the dispersion compensator later worsens the error rate because of a variation in the temperature of the optical fiber, polarization mode dispersion, or other causes, another sweep has to be made to search for a new amount of dispersion control. Such a sweep, which likewise takes a relatively long time, has to be carried out repeatedly even during operation, and consequently variations in fiber temperature, polarization mode dispersion, and other causes make it difficult to set a stable amount of dispersion control. Further, as described below, it is difficult to optimally compensate for dispersion in the presence of phenomena peculiar to optical transmission, namely fluctuation in the level of the optical input (i.e., fluctuation in the optical signal-to-noise ratio (OSNR)) and fluctuation in the characteristics of polarization mode dispersion (PMD). Specifically, when the dispersion of an optical signal being received by an optical receiver is plotted on the x axis and the Q penalty of the same optical signal on the y axis, a dispersion value close to the center of the abscissa is associated with a lower Q penalty (i.e., better quality), and the Q penalty increases (i.e., the quality deteriorates) as the dispersion value departs from the center in either direction. In other words, the relationship between the dispersion and the Q penalty of the optical signal is represented by a parabola-like waveform (a tolerance curve) having its bottom at the center. In this case, fluctuation of the optical input level over time causes the OSNR characteristics to fluctuate irregularly over time. For this reason, the waveform representing the relationship between the dispersion and the Q penalty varies with time in the y-axis direction (up and down), as illustrated, for example, in FIG. 8. In addition, the PMD characteristics of the optical transmission path vary over time, so that the waveform representing the relationship between the dispersion and the Q penalty varies with time in the x-axis direction, as illustrated, for example, in FIG. 9. The fluctuation in the PMD characteristics and that in the OSNR are not correlated with each other, so their combination does not substantially change the gradient of the tolerance curve but does shift it randomly along both the x and y axes, as illustrated, for example, in FIGS. 10 and 11 (see tolerance curves TC1 to TC6 in FIG. 10 and tolerance curves TC11 to TC16 in FIG. 11). Here, FIG. 10 shows an example of the shift in the tolerance curve when receiving an optical signal at a transmission speed of 10 Gbit/s, and FIG. 11 shows an example of the shift when receiving an optical signal at a transmission speed of 40 Gbit/s.
As illustrated in FIGS. 10 and 11, the optimum amount of dispersion control when the optical receiver receives an optical signal, and the Q penalty value corresponding to that optimum amount, which together represent the coordinates of the bottom of the tolerance curve, shift randomly on the xy coordinates. Further, an optical signal at a transmission speed as high as 40 Gbit/s has a narrower dispersion tolerance width than one at a transmission speed of about 10 Gbit/s and therefore places a stricter demand on dispersion. FIGS. 12 and 13 show the relationship between the dispersion tolerance width and the Q penalty of an optical signal at transmission speeds of 10 Gbit/s and 40 Gbit/s, respectively. Trapezoids L1 and L2 in FIGS. 12 and 13 represent the coordinate regions, defined by a dispersion value and a Q penalty value, within which communication can be established even under the influence of variations in OSNR and PMD. As illustrated in FIG. 12, when an optical signal at a transmission speed of 10 Gbit/s is received, the width of dispersion values that can ensure establishment of communication even if the tolerance curve shifts as illustrated in FIG. 10 (i.e., the dispersion tolerance width) varies only slightly with the Q penalty value represented by the ordinate. Namely, since the variation in Q penalty from the finest value Q11 to the limit value Q12 in trapezoid L1 illustrated in FIG. 12 is accompanied by a change in dispersion width only from the lower base length L11 to the upper base length L12 of trapezoid L1, the dispersion tolerance varies little with the Q penalty (in other words, the slopes of the right oblique L13 and the left oblique L14 of trapezoid L1 with respect to the ordinate direction are gentle). In contrast, as illustrated in FIG. 13, when an optical signal at a transmission speed of 40 Gbit/s is received, the dispersion tolerance width that can ensure establishment of communication when the tolerance curve shifts as illustrated in FIG. 11 varies relatively strongly with the Q penalty value represented by the ordinate. Namely, since the variation in Q penalty from the finest value Q21 to the limit value Q22 in trapezoid L2 illustrated in FIG. 13 is accompanied by a change in dispersion width from the lower base length L21 to the upper base length L22 of trapezoid L2, the dispersion tolerance varies greatly with the Q penalty (in other words, the slopes of the right oblique L23 and the left oblique L24 of trapezoid L2 with respect to the ordinate direction are steep). As described above, for an optical signal transmitted at 40 Gbit/s, even if the OSNR value is fine, when a dispersion value corresponding to either end of the lower base of trapezoid L2 in FIG. 13 is applied to the optical signal, deterioration in the Q penalty caused by a variation in OSNR, a variation in PMD, or the like, as illustrated by arrow A in FIG. 13, causes the Q value to deviate from the dispersion tolerance width, so that communication may not be secured. On the other hand, as illustrated by arrow B in FIG. 13, a Q penalty that deteriorates due to PMD fluctuation or the like may still remain inside the dispersion tolerance.
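To make the tolerance-curve picture concrete, here is a toy numerical sketch: a parabolic Q-penalty-versus-dispersion curve whose bottom shifts randomly along the dispersion axis (PMD-like) and penalty axis (OSNR-like), used to estimate how often a fixed compensation setting stays within a Q-penalty limit. All numbers (curvature, shift magnitudes, the 2 dB limit) are invented for illustration and are not taken from the document.

```python
# Toy model of the tolerance-curve behaviour described above: a parabolic
# Q-penalty-vs-dispersion curve whose bottom shifts randomly in x (PMD) and
# y (OSNR).  All numbers are illustrative only.
import random

def q_penalty(dispersion_ps_nm: float, x_shift: float, y_shift: float,
              curvature: float = 0.002) -> float:
    """Q penalty (dB) for a given residual dispersion under a shifted curve."""
    return curvature * (dispersion_ps_nm - x_shift) ** 2 + y_shift

def fraction_within_limit(setting_ps_nm: float, q_limit_db: float = 2.0,
                          trials: int = 10_000) -> float:
    """Monte Carlo estimate of how often a fixed setting stays within the Q limit."""
    ok = 0
    for _ in range(trials):
        x_shift = random.gauss(0.0, 15.0)       # PMD-like shift along the dispersion axis
        y_shift = abs(random.gauss(0.0, 0.5))   # OSNR-like upward shift of the penalty floor
        if q_penalty(setting_ps_nm, x_shift, y_shift) <= q_limit_db:
            ok += 1
    return ok / trials

print(fraction_within_limit(0.0))    # setting at the nominal centre of the curve
print(fraction_within_limit(40.0))   # setting near the edge of the tolerance
```

The sharper the curvature and the larger the random shifts, the more a setting near the edge of the tolerance drops out of the acceptable region, which is the 40 Gbit/s difficulty the passage describes.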
However, as another problem, it cannot be determined on the basis of the OSNR value in a stationary state whether a deterioration in the Q penalty will shift the Q penalty into an error region, as illustrated by arrow A, or into a region in which no error occurs, as illustrated by arrow B. In short, the problem is also that, in spite of a preferable OSNR value, an optical signal at a transmission speed of 40 Gbit/s may suffer errors caused not only by the narrowing of the dispersion tolerance width but also by the sharper variation of the dispersion tolerance with deterioration in the Q penalty value. In other words, as the bit rate increases from about 10 Gbit/s to about 40 Gbit/s, the setting of the dispersion of an optical signal that is capable of establishing communication becomes more sensitive to OSNR, PMD, and the like, and dispersion compensation therefore requires higher accuracy. Patent Reference 2 discloses a technique with a configuration that chirps an optical signal at the optical transmitter in advance in order to compensate for both chromatic dispersion and polarization dispersion at the optical receiver. However, since the technique of Patent Reference 2 assumes that the optical signal to be compensated has previously undergone the particular process of being chirped at the optical transmitter, Patent Reference 2 does not provide a technique for rapidly setting an amount of chromatic dispersion compensation that minimizes the influence of variations in the temperature of the optical fiber and the influence of PMD regardless of whether or not the optical signal to be received has been chirped.
Reliability-based estimation of the number of noisy features: application to model-order selection in the union models This paper is concerned with the multi-stream approach in speech recognition. In a given set of feature streams, some features may be corrupted by noise. Ideally, these features should be excluded from recognition. To achieve this, a priori knowledge about the identity of the noisy features, including both their number and location, is required. We present a method for estimating the number of noisy feature streams. This method assumes no knowledge about the noise. It is based on calculation of the reliability of each feature stream and then evaluation of the joint maximal reliability. Since this method decreases the uncertainty about the noisy features and is statistical in nature, it can also be used to increase the robustness of other classification systems. We present an application of this method to model-order selection in the union models. We performed tests on the TIDIGITS database, corrupted by noises affecting various numbers of feature streams. The experimental results show that this model achieves recognition performance similar to that obtained with a priori knowledge about the identity of the corrupted features.
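The abstract does not reproduce the reliability measure itself, so the following is only one plausible reading of the selection step it describes: given a reliability score per feature stream, choose the subset (and hence the number of presumed-clean streams) that maximizes a joint reliability. The product-form joint score and the example numbers are assumptions for illustration, not the paper's actual formulation.

```python
# Sketch of a reliability-based selection step: given per-stream reliability scores,
# pick the subset of streams with the highest joint reliability.  The joint score
# below (product of kept reliabilities times (1 - r) for dropped streams) is an
# illustrative independence assumption, not the paper's measure.
from itertools import combinations
from math import prod

def best_clean_subset(reliability: list[float]) -> tuple[int, tuple[int, ...]]:
    """Return (number of streams kept, their indices) maximizing the joint reliability."""
    n = len(reliability)
    best_score, best_subset = float("-inf"), tuple(range(n))
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            kept = prod(reliability[i] for i in subset)
            dropped = prod(1.0 - reliability[i] for i in range(n) if i not in subset)
            score = kept * dropped
            if score > best_score:
                best_score, best_subset = score, subset
    return len(best_subset), best_subset

print(best_clean_subset([0.9, 0.85, 0.2, 0.8]))  # the third stream looks corrupted
```

In a recognizer, the estimated number of clean streams would then drive the model order of the union model, i.e., how many streams are combined at decoding time.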
Morphology and Postharvest Performance of Geogenanthus undatus C. Koch & Linden 'Inca' after Application of Ancymidol or Flurprimidol. Excessive internode elongation and leaf senescence are common problems with foliage plants transferred to interiorscapes. The authors' objective was to determine whether plant growth regulators applied late in the production cycle could control growth during production and improve interiorscape performance. In addition, the authors wanted to quantify the effect of irradiance on growth and morphology during the production phase and in the interiorscape. Geogenanthus undatus C. Koch & Linden 'Inca' plants were grown under one of two photosynthetic photon fluxes (PPF; 50 or 130 µmol·m⁻²·s⁻¹) and were treated with either α-(1-methylethyl)-α-[4-(trifluoromethoxy)phenyl]-5-pyrimidinemethanol (flurprimidol) or α-cyclopropyl-α-(4-methoxyphenyl)-5-pyrimidinemethanol (ancymidol) during week 12 of production, at 0.5, 1.0, or 1.5 mg/pot of active ingredient. The high PPF resulted in significantly higher leaf, stem, root, and total dry weight and leaf area, but lower leaf area ratio (leaf area divided by total plant dry weight) compared with the low PPF. Geogenanthus undatus has a distinctive growth habit, with fleshy, broad, ovate, metallic-green leaves marked with parallel bands of pale gray and a characteristic quilted appearance. Geogenanthus undatus C. Koch & Linden 'Inca' has darker green leaves and a more compact habit than the species. The seersucker plant holds considerable potential for the interiorscape industry because of its adaptability to low-light environments. Although new species and cultivars are the lifeblood of the ornamental industry, cultural information is not often available at their market introduction. To produce high-quality marketable material, growers need production guidelines for irradiance, temperature, and nutrition regimes, as well as for growth control. The majority of tropical foliage plants are produced for use indoors. Generally, the photosynthetic photon flux (PPF) in postharvest settings is suboptimal for plant growth, even for many shade-adapted species. Low PPF can cause foliar chlorosis or necrosis, premature leaf senescence, or internode stretching (Conover and Poole, 1981). These undesirable responses lead to frequent plant replacement, high costs, and consumer dissatisfaction. Growers often acclimate foliage plants to lower PPF during production to improve postharvest performance. Production PPFs have been shown to affect growth parameters of foliage plants. Species-specific responses have been documented for various growth parameters, such as growth index, dry weight, and root-to-shoot ratio (Conover and Poole, 1981). In addition, production irradiance also affects acclimation. Because the majority of foliage plants are sold for the interiorscape market, postharvest performance is an important aspect of the overall evaluation of a particular species or cultivar for the foliage industry. Plant growth regulator (PGR) type and concentration have been shown to affect the postharvest performance of foliage plants (Cox and Whittington, 1988; Davis, 1987). Timing of application is considered critical to the efficacy of PGRs; they are generally applied early during the production cycle. However, by the time the plant is installed in the postharvest environment, the growth control exerted by the PGR may have diminished, and thus the PGR effect may not carry over to the interiorscape.
If the PGR is applied later during the production cycle, and at an appropriate concentration, the growth control and the related enhanced plant quality may be extended into the postharvest period. In the current study, drench applications of α-cyclopropyl-α-(4-methoxyphenyl)-5-pyrimidinemethanol (ancymidol) or α-(1-methylethyl)-α-[4-(trifluoromethoxy)phenyl]-5-pyrimidinemethanol (flurprimidol) were administered to Geogenanthus during the later part of the production cycle. The purpose of a late-production application was to ensure growth control throughout the postharvest period. In addition, we hoped that late application of PGRs would slow growth during the latter part of production, allowing the plants to accumulate starch instead of allocating carbohydrates to new growth. Starch reserves could be used by the plants after transfer to a low-irradiance environment and potentially improve their performance in interiorscapes. The objectives of this study were to evaluate the effects of PPF and PGR on growth of G. undatus C. Koch & Linden 'Inca' during production, to determine whether the responses to the PGRs are similar at different irradiance levels, and, lastly, to evaluate the postharvest performance of G. undatus C. Koch & Linden 'Inca' in response to PPF and PGR applied late during the production cycle. Production phase. Plant material. Tissue culture liners of G. undatus C. Koch & Linden 'Inca' (Agri-Starts, Apopka, Fla.) were planted in square pots (volume, 793 cm³), using a peat-lite medium (Fafard 2P, 65% Canadian sphagnum peat-35% horticultural perlite; Fafard, Anderson, S.C.). Plants were grown in a double-polyethylene Quonset-style greenhouse covered with a double layer of 50% shade cloth. The temperature control in the greenhouse was set at 21°C day/18°C night (Wadsworth Systems; Arvada, Colo.). Plants were grown on 12 ebb-and-flow benches (1.2 × 2.4 m; Midwest GroMaster, St. Charles, Ill.). Fertilizer solutions were stored in plastic barrels (210 L) and pumped into the watertight trays of the ebb-and-flow system using submersible pumps (NK-2; Little Giant, Oklahoma City, Okla.). Media fertility levels were monitored weekly on a random sample of 12 to 24 plants using the pour-through method (Yeager et al., 1997). Distilled water (50 mL) was poured into each pot and allowed to drain; leachate was collected, and pH and electrical conductivity (EC) were analyzed (Myron L Agrimeter AG-6; Metex Corporation Ltd., Toronto, Canada). Medium fertility levels were found to be within appropriate ranges on all testing dates (EC, 1.3-1.6 dS·m⁻¹; pH, 5.5-6.5). Tissue and media samples were sent to MicroMacro Laboratories (Athens, Ga.) for analysis at the midpoint of production. Macro- and micronutrient levels in tissue were found to be within appropriate ranges, based on general recommendations for foliage plants. Treatments. Two production irradiance levels were achieved by placing a single layer of 50% black shade cloth over half the benches (designated the low PPF treatment), whereas the remaining six benches received ambient irradiance levels and were designated the high PPF treatment. The actual shade structure provided some additional shading. Measurements of the high and low irradiance levels were taken instantaneously (2 PM on 5 May 2004) and were found to be 130 and 50 µmol·m⁻²·s⁻¹, respectively. Measurements. Morphological data were taken on all plants (height and two perpendicular widths, leaf tip to leaf tip) at the end of production.
Height was measured from the shoot base to the apex. These data were used to calculate the growth index. After 16 weeks of production, plants were prepared for destructive sampling by removing growing media from the roots and by physically separating roots, stems, and leaves. Whole-plant leaf areas were taken with a leaf area meter (model 3100 Leaf Area Meter; LI-COR, Lincoln, Nebr.). For each plant, the roots, stems, and leaves were placed in separate bags and dried in a forced-air oven maintained at 80°C for a week. Leaf area ratio (LAR) was calculated as leaf area divided by total plant dry weight. Quantitative enzymatic starch analysis was performed separately for dry root, stem, and leaf tissue, according to the method of lo Bianco and Rieger. The starch analysis was performed only on control and flurprimidol-treated plants. At the end of production, physiological experiments were performed on 12 representative plants to determine photosynthesis at nine PPFs. Each plant was exposed to progressively higher PPFs (approximately 0, 10, 20, 30, 40, 50, 100, 400, and 700 µmol·m⁻²·s⁻¹). Carbon dioxide exchange measurements were taken on the most recently matured leaf, midway between the midrib and the leaf edge and midway between the petiole and the leaf tip (CIRAS-1; PP-Systems, Amesbury, Mass.). Dark respiration (Rd), maximum light use efficiency (LUE), and light-saturated gross photosynthesis (Pgmax) were estimated from a nonlinear regression (SigmaPlot v.10 software package; Systat Software, Richmond, Calif.): Pn = Pgmax × [1 − exp(−LUE × PPF/Pgmax)] + Rd (Eq. 1), where Pn is the net photosynthetic rate, Pgmax is the light-saturated gross photosynthetic rate, LUE is the maximum light use efficiency, and Rd is dark respiration (here expressed as a negative value, because it represents a CO2 efflux from the plant). The light compensation point was determined by solving Eq. 1 for PPF at a Pn of 0 µmol·m⁻²·s⁻¹. The light saturation point was determined as the PPF at which Pn was 95% of light-saturated net photosynthesis. The units for all parameters are micromoles per square meter per second, with the exception of the unitless LUE, which is a measure of the maximum moles of CO2 fixed per mole of incoming light (the slope of the light response curve at a PPF of 0 µmol·m⁻²·s⁻¹). Experimental design. Three subrepetitions were randomized within each bench (a total of 21 plants per bench). A subrepetition consisted of seven plants: one plant from each of the three dosages of ancymidol, one from each of the three dosages of flurprimidol, and one control plant. On each bench, the three subrepetitions were designated as part of studies I (morphology and carbohydrate analyses), II (photosynthesis analysis), and III (postharvest analysis). The experimental design was a completely randomized split plot with 12 whole plots (benches), and the variables of PGR type and dosage nested within PPF. Data were analyzed using the general linear model in Statistical Analysis Software v.9 (SAS Institute, Cary, N.C.) to test for two-way and three-way interactions and significant correlations (P < 0.05 was considered statistically significant). PROC GLM was used for the enzymatic starch analysis. Means separation analysis (Fisher's protected LSD) was used to analyze the data further. Significance of the main effects (PPF, PGR application) and their interaction was determined using analysis of variance, whereas more specific comparisons were made with contrast statements.
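As an illustration of the light-response fit just described, the sketch below fits the exponential saturation form of Eq. 1 and derives the light compensation and saturation points. The PPF levels match the text, but the Pn values, starting guesses, and bracketing intervals are invented for illustration; the authors used SigmaPlot, not this code.

```python
# Sketch of fitting Pn = Pgmax*(1 - exp(-LUE*PPF/Pgmax)) + Rd and deriving the
# light compensation point (Pn = 0) and light saturation point (95% of saturated Pn).
import numpy as np
from scipy.optimize import curve_fit, brentq

ppf = np.array([0, 10, 20, 30, 40, 50, 100, 400, 700], dtype=float)
pn = np.array([-0.45, 0.9, 1.7, 2.2, 2.5, 2.7, 2.95, 3.0, 3.0])  # illustrative data

def light_response(ppf, pgmax, lue, rd):
    return pgmax * (1.0 - np.exp(-lue * ppf / pgmax)) + rd

(pgmax, lue, rd), _ = curve_fit(light_response, ppf, pn, p0=[3.5, 0.15, -0.5])

# Light compensation point: PPF at which Pn = 0.
lcp = brentq(lambda x: light_response(x, pgmax, lue, rd), 1e-6, 200.0)

# Light saturation point: PPF at which Pn reaches 95% of light-saturated net photosynthesis.
pn_sat = pgmax + rd
lsp = brentq(lambda x: light_response(x, pgmax, lue, rd) - 0.95 * pn_sat, 1e-6, 2000.0)

print(f"Pgmax={pgmax:.2f}, LUE={lue:.3f}, Rd={rd:.2f}, LCP={lcp:.1f}, LSP={lsp:.0f}")
```

With this form, Rd is the fitted value of Pn at zero light and LUE is the initial slope of the curve, matching the parameter definitions given in the text.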
Contrast statements were generated based on the fact that PPF, PGR type, and PGR concentration were treated as classification variables. After 18 weeks in growth chambers, morphological measurements were taken (height and two perpendicular widths, leaf tip to leaf tip). These data were used to calculate the growth index. The number of senesced leaves per plant was recorded. Growing medium was washed from the roots. For each plant, the roots and shoots were separated and dried in a forced-air oven maintained at 80°C for a week. Experimental design. The experimental design for the postharvest study was a randomized split plot with 16 whole plots consisting of seven plants each (one from each PGR treatment). Furthermore, whole plots were arranged in a randomized complete block design, with each growth chamber representing a block holding four plots (two from high PPF and two from low PPF). Statistical analysis was performed using the SAS software package v.9 and the PROC MIXED procedure (SAS Institute, Cary, N.C.). Significance of the main effects (PPF, PGR application) and their interaction was determined using analysis of variance, whereas more specific comparisons were made with contrast statements. Contrast statements were generated based on the fact that PPF, PGR type, and PGR dosage were treated as classification variables, with P < 0.05 considered statistically significant. When main or interactive effects were significant, mean separation was accomplished through a series of t-test comparisons among PGR treatments. Results and Discussion. Production phase. All PGR treatments affected the height of G. undatus C. Koch & Linden 'Inca' compared with controls (Tables 1 and 2). Drench applications of ancymidol also decreased height in four cultivars of sunflower. Whipker et al. found that flurprimidol caused a strong linear decrease in sunflower height up to a concentration of 2 mg a.i./15-cm pot. However, increasing the concentration beyond 2 mg a.i./pot did not offer any further control of height. All PGR applications resulted in smaller plants, and this decrease was dosage dependent (Tables 1 and 2). As was the case for plant height, there was no difference between the two PGRs. Studies of the interaction of PGRs and PPF have not been as common as experiments focused solely on the growth-controlling characteristics of PGRs (i.e., height, internode length). Thus, information on the combined effect of irradiance and PGR on dry weight is somewhat limited, especially when considering four separate dry weight quantities (leaf, stem, root, total). Photosynthetic photon flux alone has been shown to change dry weight accumulation and partitioning in shade-obligate plants. Dracaena (Dracaena sanderana hort. Sander ex Mast.) exhibited quadratic responses in dry weight of root, stem, and shoot when grown under four different shading levels (47%, 63%, 80%, or 91%). Moderate shading (63% or 80%) allowed the greatest accumulation of dry matter in all fractions, whereas the highest and lowest irradiance levels resulted in relative reductions in dry matter accumulation. Depending on the irradiance requirements of the species in question, extremely high and low PPFs are likely to cause reductions in growth, resulting from inhibition of photosynthesis at either extreme. Total dry weight was affected by both PPF and PGR concentration, but no significant interaction was found between these two variables (Tables 1 and 2).
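A rough Python analogue of the split-plot analysis described in the experimental-design passages above (PPF as the whole-plot factor applied to benches, PGR type and dosage as sub-plot factors, bench treated as a random effect) might look like the following. The column names and file are assumptions; the authors' actual analysis used SAS PROC GLM and PROC MIXED, not this code.

```python
# Rough analogue of the split-plot ANOVA described above: bench is the whole-plot
# unit (random effect), PPF the whole-plot factor, PGR type and dose the sub-plot
# factors.  Column names are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("production_phase.csv")   # assumed columns: bench, ppf, pgr, dose, height

model = smf.mixedlm("height ~ C(ppf) * C(pgr) * C(dose)", data=df, groups=df["bench"])
result = model.fit()
print(result.summary())
```

Treating bench as a random grouping factor is what gives the whole-plot factor (PPF) its correct, larger error term in a split-plot layout.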
Low PPF reduced total biomass by 30% compared with high PPF, with a similar effect on root dry weight. Geogenanthus has not been widely grown, and therefore optimum production parameters are yet to be determined. Our data indicate that low PPF may delay the accumulation of adequate dry matter for marketable Geogenanthus plants within a reasonable production period. Both PGRs negatively affected total dry weight of G. undatus C. Koch & Linden 'Inca', and this reduction was correlated with PGR dosage (Tables 1 and 2). Plant growth regulators did not have an effect on root dry weight. High PPF resulted in greater root dry weight than low PPF, regardless of PGR treatment (Tables 1 and 2). The interaction of PPF with PGR treatment was significant for leaf dry weight (Table 2), and the effects were similar to those for stem dry weight. Under high PPF, control plants had higher leaf dry weight than plants treated with either PGR, and there was no effect of PGR concentration on leaf dry weight (Tables 2 and 3). Plant growth regulator applications did not affect leaf dry weight under low-irradiance conditions. Control plants grown under high PPF had 87% greater leaf dry weight than controls grown under low PPF. Leaf area in Geogenanthus was unaffected by irradiance, but tended to be higher at high PPF (P = 0.71). This is in contrast to the general notion that leaf area tends to increase under low irradiance during the process of light acclimation (Taiz and Zeiger, 2002). The application of either ancymidol or flurprimidol caused a decrease in leaf area, and this effect was similar for both PGRs (Tables 1 and 2). As with other morphological changes, leaf area tended to decrease as PGR dosage increased. Because of the tendency to increase leaf area, plants grown under a lower PPF tend to have a higher LAR than plants grown under high PPF. Both PPF and PGR dosage affected LAR, although an interaction between these two factors was not observed (Tables 1 and 2). As expected, plants grown under low PPF had a consistently higher LAR than high-PPF plants at comparable PGR dosages. Flurprimidol reduced LAR compared with the control plants, whereas ancymidol did not (Tables 1 and 2). The root-to-shoot ratio was increased by flurprimidol but not by ancymidol (Tables 1 and 2). This is related to the decrease in leaf and stem dry weight of flurprimidol-treated plants, because root dry weight was unaffected by flurprimidol. Photosynthetic photon flux did not affect the root-to-shoot ratio. Plant growth regulators have been shown to change carbon partitioning patterns and increase starch production in several species. In general, the major starch storage organ varies by species. For this reason, we performed starch analysis separately on stems, leaves, and roots. Increased carbohydrate pools would allow for the continuation of maintenance respiration, which would be of particular importance for plants experiencing low-PPF stress, as in a postharvest environment. However, the starch concentrations in leaves, stems, and roots were unaffected by production PPF or PGR treatments (data not shown). Results from the photosynthesis-light response measurements did not indicate differences among controls and treated plants, regardless of PPF or PGR treatment. [Table 1 footnotes: any two means within a column not followed by the same letter are significantly different at P < 0.05 by Fisher's protected LSD mean separation; plant growth regulator (PGR) effects on root dry weight (RDW) were not analyzed because of the absence of a significant PGR effect (Table 2); no interaction existed between irradiance and PGR dosage for the growth parameters shown. Abbreviations: PPF, photosynthetic photon flux; H, height; GI, growth index; TDW, total dry weight; LA, leaf area; LAR, leaf area ratio; R:S, root-to-shoot ratio.] A light response curve from a representative plant revealed that net photosynthesis in G. undatus C. Koch & Linden 'Inca' is low (Fig. 1). The light compensation point was 2.8 µmol·m⁻²·s⁻¹, the light saturation point was 64 µmol·m⁻²·s⁻¹, and the LUE was 0.159 mol·mol⁻¹. These data are consistent with the slow growth observed during both the production and postharvest phases. Shade-obligate plants have lower light compensation and saturation points than sun-obligate plants. In a study on four understory herbaceous plants, three of the four species were considered shade-obligate, whereas E. americanum was considered a sun plant. Light compensation points were lower for the shade-obligate species than for Erythronium (16 µmol·m⁻²·s⁻¹). Although Geogenanthus had a lower light compensation point than all of these species, it is close to that of Arisaema. Light saturation points for the shade-obligate species were also low in the understory herb study (Podophyllum, 117 µmol·m⁻²·s⁻¹; Arisaema, 133 µmol·m⁻²·s⁻¹; and Smilacina, 135 µmol·m⁻²·s⁻¹), and much higher for the sun species Erythronium (326 µmol·m⁻²·s⁻¹). When comparing maximum photosynthesis, Geogenanthus had a Pgmax of 3.4 µmol·m⁻²·s⁻¹, which was close to that of Smilacina (3.9 µmol·m⁻²·s⁻¹) and considerably lower than that of the sun species Erythronium (14.7 µmol·m⁻²·s⁻¹). Postproduction phase. After 18 weeks under simulated interior conditions, the height of G. undatus C. Koch & Linden 'Inca' was reduced by PGR applications (Tables 4 and 5, and Fig. 2). Both ancymidol and flurprimidol produced shorter plants compared with controls. Control plants were 60% taller than plants treated with 1.5 mg/pot a.i. of ancymidol, and 88% taller than plants treated with 1.5 mg/pot a.i. of flurprimidol (Table 4). Furthermore, there was a difference between the two PGRs: flurprimidol-treated plants were shorter than ancymidol-treated plants. No main or interactive effect involving production PPF was significant with regard to plant height. The effects of the PGRs are particularly evident upon examination of the height increase under simulated interior conditions (i.e., the difference in plant height before and after the postharvest period; Tables 4 and 5). The height of untreated plants increased more (5.9 cm) than that of treated plants, whereas plants treated with flurprimidol had only a very small height increase (<2.5 cm). Similarly, the increase in growth index was lower in PGR-treated plants than in the control plants, and ancymidol-treated plants had a larger increase in growth index than flurprimidol-treated plants. These data indicate that the PGR treatments applied late in production had effects that extended into the postharvest period, and that flurprimidol had a stronger effect during the postproduction phase than ancymidol. In previous studies, shoot growth also was reduced by PGR applications after a period in a low-PPF interior environment.
Davis treated three species of foliage plants with paclobutrazol (25 or 250 mg/pot a.i.) and either immediately placed them in a simulated interior environment (PPF, 15 µmol·m⁻²·s⁻¹) or allowed them to grow for 2 months under optimal greenhouse conditions before placing them in the simulated interior. Paclobutrazol-treated Zebrina (Z. pendula Schnizl.), another member of the Commelinaceae family, experienced almost complete inhibition of growth when immediately placed in an interior environment. However, when greenhouse growth was allowed between PGR treatment and the interior, growth was less than that of control plants in the interior environment. Even though the two concentrations of paclobutrazol were an order of magnitude apart, growth of plants treated with these two concentrations was similar under the interior conditions. In a study using ancymidol drenches, foliage species were grown in a simulated postharvest environment (Blessington and Link, 1980). Shoot and total dry weights were unaffected by production PPF level, PGR type or concentration, or any interactions (Tables 4 and 5). It is interesting that differences were observed between control and treated plants for height and size, but not for total dry weight at the end of the postharvest period. This suggests that the differences in height and size arose from stem elongation rather than from differences in biomass production. In contrast, at the conclusion of production, high-PPF plants had greater dry weight accumulation than low-PPF plants, whereas plant height did not differ between the PPF treatments. [Table footnote: any two means within a column not followed by the same letter are significantly different at P < 0.05 by Fisher's LSD mean separation. Abbreviations: PPF, photosynthetic photon flux; PGR, plant growth regulator; SDW, stem dry weight; LDW, leaf dry weight.] Root dry weight was reduced by flurprimidol but not by ancymidol (Tables 4 and 5). However, production PPF, and the interaction of production PPF with PGRs, did not affect root dry weight. Root dry weight has been shown to increase in rice (Oryza sativa L.) roots, but to decrease in citrus seedlings, in response to PGRs. There was no interaction between production PPF and PGR treatment for the root-to-shoot ratio, but their main effects were significant (Tables 4 and 5). Low-PPF plants had consistently greater root-to-shoot ratios (average, 0.51) than high-PPF plants (average, 0.38). This is in contrast to what would be expected, because low-light plants often invest a larger fraction of their carbon resources in the shoot (Taiz and Zeiger, 2002). Plant growth regulator effects on the root-to-shoot ratio were related to effects on root dry weight; flurprimidol applications reduced both root dry weight and the root-to-shoot ratio, whereas ancymidol applications did not. Leaf senescence was unaffected by PPF. The number of dropped leaves also was unaffected by production PPF for two species of Schefflera (S. arboricola Hayata ex Kanehira and Brassaia actinophylla Endl.) after 3 months in a simulated interior environment. However, PGRs have been shown to affect the number of senesced leaves in members of the Commelinaceae family. After application of paclobutrazol, Z. pendula plants had fewer senesced leaves. In the current study, flurprimidol-treated plants had fewer senesced leaves than ancymidol-treated plants (Tables 4 and 5).
Although statistical significance was not found between PGR-treated plants and controls, in general the treated plants appeared to have fewer senesced leaves. In conclusion, applications of ancymidol or flurprimidol administered to G. undatus C. Koch & Linden 'Inca' late in the production cycle resulted in significant growth control and, therefore, superior plant performance throughout the postharvest period. Production PPF did not play a major role in the overall response of the plants to the interior environment. Therefore, growers may use the high production PPF (130 µmol·m⁻²·s⁻¹), which results in a greater accumulation of dry weight than the low PPF (50 µmol·m⁻²·s⁻¹). Furthermore, although the PGRs effectively controlled height and growth index, the PGR concentration did not exert great influence over these parameters. Thus, the lower, more economical concentration can be used successfully. In general, flurprimidol exerted greater control over plant growth than ancymidol, and thus shows promise for use as a growth regulator in the foliage industry. [Table footnotes: analysis of variance did not indicate any treatment effects on total dry weight (TDW), so mean separation was not done for this parameter; any two values within a column not followed by the same letter are significantly different at P < 0.05 by pairwise t tests; the interaction between photosynthetic photon flux and plant growth regulator (PGR) dosage was nonsignificant for the growth parameters shown; growth index (GI) and height (H) increase refer to the increase in the postharvest environment. Abbreviations: a.i., active ingredient; RDW, root dry weight; R:S, root-to-shoot ratio; SL, senesced leaves; SDW, shoot dry weight; TDW, total dry weight; NS, nonsignificant effect at P < 0.05; PGR A, ancymidol; PGR F, flurprimidol.]
I guess it's true that Jesse Jackson's comments will help Barack with some segment of white people, and not hurt him at all among black people. But the more I think about this, the more puzzling I find this to be. What did Jesse Jackson actually do to white people? I guess if you're Jewish you could have a legit beef with him over the whole "hymietown" remark, but I'm just not clear on why Jesse gets more hate than any other liberal. Better yet, why is Jesse hated more than some fool who calls America "a nation of whiners" for feeling uneasy about gas prices. One other tangential point--Can we retire the phrase "black leader?" There really aren't any anymore, and this is a good thing. Jesse hasn't commanded a national constituency of black Americans in probably two decades or so. At this point he's basically a media pundit. And that's fine, I don't begrudge him that. But he isn't a "black leader" of any national significance. That's really not a knock on him, either. I don't think any of us would like a return to the conditions which made "black leaders" essential.
The Irish vote legalizing gay marriage drives a stake through the heart of the Catholic Church. Tom Inglis, a sociologist who specializes in the affairs of the Irish Church, opined that “the era of the Church as the moral conscience of Irish society is over.” But not just for the Irish. As the effect of the May 22 referendum — gay marriage approved by 62 percent — sinks in, it becomes clearer that it isn’t just the Irish Church that is trembling but the Catholic Church itself. To say, as Diarmuid Martin, archbishop of Dublin, did after the vote — “I appreciate how gay and lesbian men and women feel on this day. That they feel this is something that is enriching the way they live” — is to position the bishop well among liberals. But it plunges the Church deeper into the mire. The Catholic Church is not a liberal institution. It’s an organized faith, with a pope elected to guard that faith. For centuries, the Irish Church had one of the most powerful grips on its population of any in the world. Hope for heaven and the horror of hell were strong. The 19th- and 20th-century republican movement, though often denounced as godless by the bishops, was motivated in part by Catholic revulsion against the schismatic, Protestant British. And when, in the early 1920s, Ireland became an independent republic, education was handed over to the Church, as was moral guidance. Divorce was hard, abortion forbidden, censorship strict. James Joyce’s Ulysses wasn’t banned, but only because his publishers believed (correctly) that it would be, so they never tried to sell it in the republic. In the past few decades, the descent of the once-omnipotent Church has been swift. The writer Damien Thompson believes that, because of the many instances of priests engaged in pedophilia and because of its “joyless” aspect, “hatred of the Church is one of the central features of modern Ireland.” Even if that’s an exaggeration — indifference is more likely — it’s obviously the case that fear and submission to clerical authority is confined to a tiny few. We may be hard-wired for religion, as many behavioral psychologists believe, but we are not hard-wired for Catholicism, or any other form of religion. Polls in Europe, and increasingly in North America, show that many people believe in “something” supernatural but are not prepared to shape that vague belief into an organized religious practice. Revulsion against those who use their authority to violate minors is a much stronger public attitude, one that easily translates into a turning away from the Church, even when the priest is a good man. Pope Francis, much lauded as a new kind of informal, down-to-earth, liberal-minded pontiff, is deeply wounded by the Irish vote. His remark about gays who are Catholic, made to reporters in July 2013, on a flight back from Brazil — “Who am I to judge?” — was widely interpreted as a possible opening to approval of same-sex unions, and even the sanctification of these unions in marriage. But it wasn’t. It couldn’t be. Most of his senior colleagues, cardinals and archbishops answered his rhetorical question by saying (beneath their breath), “You’re the pope, dummy!” And popes don’t sanction two men or two women marrying. When he called the Synod on the Family in Rome last year, an early draft of the meeting’s report called for the “gifts and values” of gays to be recognized. But the cardinals who organized the event removed any such language from the final document.
In behaving like this, the Catholic Church puts itself in the same league with Russia, India and most of the Middle East, entities that suppress homosexuality either by law or encouragement of prejudice. President Vladimir Putin visits the pope next week in the Vatican. The issue won’t likely be on the agenda, but it should be — so that spiritual and temporal powers can compare notes on why they give a platform to a prejudice that fosters hatred, and violence. In the world that has accepted homosexuality as neither an abomination in the sight of the Lord nor an unnatural practice deserving of punishment, earlier bigotry is being debated and confronted with an impressive amount of liberalism and maturity. Quite recent state-sanctioned discrimination against gays is being revealed in all its casual cruelty, and people are recoiling at what they now see clearly. The recent film The Imitation Game chronicled the story of Alan Turing, the mathematical genius who broke the Nazis’ Enigma code and thereby, it’s estimated, shortened World War Two. It depicted a man found guilty of homosexual acts, and given a choice by a judge of two years in prison or chemical castration. He chose the latter — and in 1954 committed suicide. Turing received a posthumous and very late “pardon” from the queen in 2013. Last month, an institute to study the social effects of technology in the future was founded in his name in London. In the liberal societies, the real debate is not whether gay men and women should be free to live, express themselves and marry; it is rather how to handle those who, for religious or bigoted reasons, or both, refuse to accept or serve them. These cases are mounting up. In Oregon in April, a judge fined a small bakery $145,000 for refusing to cater a gay wedding; and a pizza parlor in Indiana, answering a reporter’s question by saying it, too, would not be keen to supply pizzas to such a ceremony, was forced to close for a week in face of protests. The pope goes to the United States in September for another World Meeting of Families in Philadelphia. Francis has called the faithful Catholic family “the salt of the earth and the light of the world … the leaven of society.” But the issue is indeed salty in a different sense, and bitterly so. The Irish answer to the issue of same-sex marriage underscores his isolation from the Western world and its people.
Optimal Bus Stops Allocation: A School Bus Routing Problem with Respect to Terrain Elevation Abstract The paper addresses the optimal allocation of bus stops in the Laško municipality. The goal is to achieve a cost reduction by properly re-designing the mandatory transportation of pupils to their schools. The proposed heuristic optimization algorithm relies on data clustering and Monte Carlo simulation. The number of bus stops should be the minimum that still assures a maximal service area, while keeping minimal the walking distances children have to cover from their homes to the nearest bus stop. The working mechanism of the proposed algorithm is explained. The algorithm is driven by three-dimensional GIS data in order to take into account as many realistic, dynamic properties of the terrain as possible. The results show that the proposed algorithm achieves an optimal solution with only 37 bus stops covering 94.6% of all treated pupils, despite the diversity and extent of the municipality and the problematic characteristics of the terrain elevation. The calculated bus stops will serve as important guidelines for their actual physical implementation.
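The abstract describes the general approach (clustering of pupil locations plus Monte Carlo simulation over 3D terrain data) without giving its details, so the following is only a hedged sketch of that idea: cluster pupils' home coordinates into candidate stops using an elevation-aware walking cost, with Monte Carlo restarts to keep the best layout. The cost weighting, the 500 m limit, and the synthetic data are assumptions for illustration, not the paper's actual parameters.

```python
# Illustrative sketch: place candidate bus stops by clustering pupil home
# coordinates (x, y, z in metres) under an elevation-aware walking cost, with
# Monte Carlo restarts.  All parameters and data here are invented.
import math
import random

def walk_cost(home, stop):
    """Planar distance plus a penalty for elevation difference."""
    dx, dy, dz = home[0] - stop[0], home[1] - stop[1], home[2] - stop[2]
    return math.hypot(dx, dy) + 10.0 * abs(dz)   # weight elevation change heavily

def assign(homes, stops):
    """Index of the cheapest stop for each home."""
    return [min(range(len(stops)), key=lambda j: walk_cost(h, stops[j])) for h in homes]

def monte_carlo_stops(homes, n_stops, iters=200, seed=0):
    """Keep the random stop placement with the lowest total walking cost."""
    rng = random.Random(seed)
    best_stops, best_total = None, float("inf")
    for _ in range(iters):
        stops = rng.sample(homes, n_stops)       # candidate stops at pupil locations
        labels = assign(homes, stops)
        total = sum(walk_cost(h, stops[l]) for h, l in zip(homes, labels))
        if total < best_total:
            best_total, best_stops = total, stops
    return best_stops, best_total

def coverage(homes, stops, limit=500.0):
    """Fraction of pupils whose nearest stop is within the walking-cost limit."""
    return sum(min(walk_cost(h, s) for s in stops) <= limit for h in homes) / len(homes)

if __name__ == "__main__":
    rng = random.Random(1)
    homes = [(rng.uniform(0, 5000), rng.uniform(0, 5000), rng.uniform(200, 600))
             for _ in range(300)]
    stops, total = monte_carlo_stops(homes, n_stops=37)
    print(len(stops), round(coverage(homes, stops), 3))
```

The coverage function mirrors the kind of service-area figure the abstract reports (the share of pupils within an acceptable walk of their nearest stop), while the elevation term in the cost is what lets hilly terrain change which stop a pupil is assigned to.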
OBJECTIVE To study the effect of endophytic fungi on Anoectochilus formosanus. METHOD A. formosanus plants were harvested after a period of culture; fresh weight and dry weight were measured, and the enzyme activities of chitinase, β-1,3-glucanase, phenylalanine ammonia-lyase, and polyphenol oxidase were determined. RESULT The survival rate of A. formosanus inoculated with endophytic fungi was 100%. The effect of the fungi on fresh weight was highly significant (P < 0.01). The effect of the fungi on dry weight was not significant (P > 0.05). All four enzyme activities were enhanced by the endophytic fungi in comparison with the controls. CONCLUSION The survival rate of A. formosanus in in vitro culture can be increased by using endophytic fungi.
Filamin C regulates skeletal muscle atrophy by stabilizing dishevelled-2 to inhibit autophagy and mitophagy FilaminC (Flnc) is a member of the actin binding protein family, which is preferentially expressed in the cardiac and skeletal muscle tissues. Although it is known to interact with proteins associated with myofibrillar myopathy, its unique role in skeletal muscle remains largely unknown. In this study, we identify the biological functions of Flnc in vitro and in vivo using chicken primary myoblast cells and animal models, respectively. From the results, we observe that the growth rate and mass of the skeletal muscle of fast-growing chickens (broilers) were significantly higher than those in slow-growing chickens (layers). Furthermore, we find that the expression of Flnc in the skeletal muscle of broilers was higher than that in the layers. Our results indicated that Flnc was highly expressed in the skeletal muscle, especially in the skeletal muscle of broilers than in layers. This suggests that Flnc plays a positive regulatory role in myoblast development. Flnc knockdown resulted in muscle atrophy, whereas the overexpression of Flnc promotes muscle hypertrophy in vivo in an animal model. We also found that Flnc interacted with dishevelled-2 (Dvl2), activated the wnt/-catenin signaling pathway, and controlled skeletal muscle development. Flnc also antagonized the LC3-mediated autophagy system by decreasing Dvl2 ubiquitination. Moreover, Flnc knockdown activated and significantly increased mitophagy. In summary, these results indicate that the absence of Flnc induces autophagy or mitophagy and regulates muscle atrophy. INTRODUCTION The skeletal muscle is a form of striated muscle tissue, which accounts for approximately 40%-60% of the body weight of adult animals. 1 The skeletal muscle consists of multinucleated cells called muscle fibers, which are formed by the coalescence of myoblasts. 2 The study on skeletal muscle is of great interest in the areas of morbidity and mortality associated with muscular dystrophy and myopathy. The skeletal muscle also contributes significantly to the economic value of animals used for meat. 3 Therefore, proper investigation on the regulatory mechanisms underlying the growth and development of the skeletal muscle to improve human health and animal husbandry is crucial. Chicken is regarded as a suitable research model for studying skeletal muscle development in vertebrates, since galline developmental anatomy is similar to that of other amniotes, including humans. 4 Fast-growing chickens (broilers) and slow-growing chickens (layers) are two types of breeds that have been developed through long periods of genetic selection. The growth and development of the skeletal muscle between these two breeds of chickens differs significantly. For instance, in broilers, the growth rate of skeletal muscle is five times higher than that of the laying hens at the age of 6 weeks. Due to the differences in the genetic make-up and genome levels of both layer hens and broilers, this therefore makes them suitable models for studying skeletal muscle growth and development. Filamin proteins are actin binding proteins, 5 containing three family members, including Flna, Flnb, and Flnc. Flna and Flnb are typically expressed in all tissues, whereas Flnc is mainly expressed in the myocardium and skeletal muscle. 
6,7 Flnc, a Z-band-associated 2,725 amino acid protein, contains one N-terminal actin binding domain and 24 serine-type immune globulin repeats, and can bind to several Z-disk proteins, including myotilin (Myot) and myozenins (Myoz). 8,9 Dalkilic and Kunkel found Flnc genes among approximately 30 disease-associated genes in various forms of muscular dystrophy. 10 Juo et al. also reported that Flnc interacts with the muscle-specific protein HSPB7 and that Flnc deficiency could lead to progressive skeletal muscle myopathy. 11 The study by Ruparelia et al. reported that impaired autophagy and protein dysfunction could cause Flnc myofibrillar myopathy, 12 which suggested that Flnc may help regulate skeletal muscle development by modulating autophagy. Autophagy is a process of bulk degradation by which cytoplasmic proteins and eliminating defective or damaged organelles are recycled back through the formation of double membranous vesicles (autophagosomes), 13 followed by fusion with lysosomes to be degraded by lysosomal acid hydrolases and proteases. 14 Skeletal muscle autophagy regulates muscle homeostasis. Excessive autophagy could reduce muscle mass due to continuous removal of cellular components. Reduction in autophagy leads to chronic loss of muscle mass due to cell damage or aging. 14,15 Disorders of autophagy are thought to be associated with many forms of inherited muscular dystrophy, including Duchenne muscular dystrophy, Ullrich congenital muscular dystrophy, and Bethlem myopathy. 16 Imbalances in autophagy homeostasis are also believed to play a vital role in various myopathies with inclusions or mitochondrial abnormalities. To maintain metabolic homeostasis in the cell, damaged mitochondria and their accumulated toxicity, must be cleared through a selective autophagy mechanism known as mitophagy. 17 Flnc is a cytoskeleton-associated protein that binds Z-disk proteins to activate downstream signaling cascades. However, the role of Flnc in autophagy and mitophagy remains poorly understood. In this study, we determined the biological impact of Flnc on skeletal muscle growth and development in vitro and in vivo using primary myoblast cells and animal model (chicken chest muscle), respectively. These studies showed that Flnc is a potential therapeutic target for increasing skeletal muscle mass during skeletal myopathy. Flnc is upregulated in hypertrophic broilers and plays a role in skeletal muscle differentiation The results obtained in this study showed that the growth rates of broilers and layers differed significantly ( Figure S1A). The body weight, pectoral muscle weight, pectoral muscle percentage, and muscle fiber area of broilers were much higher than those of the layers (Figures S1B-S1F). To screen for candidate genes that regulate muscle development, we determined the chest muscle transcriptomes of both Ross 308 broilers and White leghorn layers. Of the 12,744 expressed transcripts, 869 had expression profiles that were significantly different between the two breeds, such as Stac3, Klf5, MyoG, Nexn, Myot, and MyH7, which are signature genes that regulate muscle development ( Figure 1A). We then validated the gene characteristics of skeletal muscle development using qPCR ( Figure 1B). Transcriptome sequencing indicated that Flnc expression in fast-growing chickens was 50-fold higher than that in slow-growing chickens. Subsequently, we found that Flnc was mainly expressed in skeletal muscle and cardiac muscle ( Figure 1C). 
In addition, Flnc plays an important role in maintaining the structural integrity of cardiac and skeletal muscles. We therefore speculated that Flnc is also important for the growth and development of skeletal muscle. We then collected chicken primary myoblasts to study the function of Flnc. Primary myoblasts were identified using MyoD immunofluorescence staining (Figure 1D). qPCR analysis indicated that Flnc mRNA expression was higher in chicken myotubes than in chicken myoblasts (Figure 1E). Using qPCR and western blotting, we also found that Flnc expression increased gradually during the differentiation of myoblasts into myotubes (Figures 1F and 1G). Flnc promotes myoblast differentiation To explore the role of Flnc during myogenic differentiation, we transfected Flnc siRNA to reduce Flnc mRNA and protein expression in chicken myoblasts (Figures 2A-2C). We measured the mRNA expression of three myogenic marker genes: myogenin (MyoG), myosin heavy chain 3 (MyH3), and myoglobin (Mb); expression of all three genes was significantly reduced in Flnc-silenced cells (Figure 2D). Immunofluorescence staining revealed that Flnc silencing restrained myoblast differentiation and decreased the total area of myotubes (Figures 2E and 2F). Western blotting also showed that MyoG and MyH3 protein expression decreased significantly in Flnc-silenced cells (Figures 2G and 2H). We also determined the effect of Flnc overexpression on myoblast differentiation by transfecting cells with Flnc-overexpressing plasmids. In contrast to silencing, Flnc overexpression significantly increased myoblast differentiation, myogenic gene expression, and myotube area (Figures 3A-3H). To confirm these results, we established control, Flnc-silenced, and Flnc-overexpressing C2C12 mouse myoblast cell lines. Similarly, we observed that the differentiation of Flnc-silenced C2C12 cells was significantly decreased (Figures S2A-S2F), while the differentiation of Flnc-overexpressing C2C12 cells was increased (Figures S3A-S3F). We then examined the effects of Flnc on cell proliferation and apoptosis. The results showed that Flnc silencing promoted proliferation and inhibited apoptosis, while Flnc overexpression inhibited proliferation and promoted apoptosis (Figures S4A-S4C and S5A-S5F). This suggests that Flnc may reduce myoblast numbers and affect myogenic differentiation. Flnc could rescue skeletal muscle atrophy Next, we investigated the influence of Flnc on atrophy-related genes in primary myotubes. We found that Flnc interference significantly increased the expression of atrogin-1 and MuRF1 mRNA and of atrogin-1 protein (Figures 4A-4C). In contrast, atrogin-1 and MuRF1 mRNA expression and atrogin-1 protein levels decreased significantly after Flnc overexpression (Figures 4D-4F). An overdose of the synthetic glucocorticoid dexamethasone is known to induce muscle atrophy. 18 Therefore, we silenced or overexpressed Flnc in vitro in dexamethasone-induced myotube atrophy. Expression of atrogin-1 protein was further enhanced after dexamethasone treatment of Flnc-silenced cells (Figures 4G and 4H). However, Flnc overexpression reduced atrogin-1 expression after dexamethasone treatment (Figures 4I and 4J). These results indicate that Flnc may help regulate the expression of muscle atrophy-related genes and mitigate dexamethasone-induced myotube atrophy.
Furthermore, we tested the effect of Flnc on muscle development in vivo and found that Flnc mRNA expression was significantly increased in breast muscle after lentivirus-mediated Flnc overexpression and decreased in muscle treated with the Flnc knockdown vector (Figures 5A and 5B). The body mass of Flnc-silenced chickens was also markedly lower than that of the controls at days 7 and 9 (Figure 5C). Chicken body mass was higher at day 9 after injection with the Flnc-overexpressing lentivirus vector (Figure 5D). Chest muscle fiber diameter and muscle fiber cross-sectional area (CSA) were decreased in the Flnc knockdown groups compared with controls, whereas both were increased in Flnc-overexpressing chickens (Figures 5E-5H). Hematoxylin and eosin staining indicated that muscle fibers in the Flnc knockdown chickens were abnormal and that fiber CSA was reduced (Figure 5I). Muscle fiber CSA increased significantly in Flnc-overexpressing chickens (Figure 5J). Consistent with the in vitro results, Flnc silencing enhanced atrogin-1 expression, while overexpression reduced it (Figures 5K-5N). These results suggest that Flnc may play a role in regulating muscle atrophy and hypertrophy. Flnc functions through the Wnt/β-catenin signaling pathway mediated by Dvl2 RNA sequencing was used to investigate the molecular mechanisms underlying the role of Flnc in myoblast differentiation by analyzing Flnc-silenced and control cells after 3 days of differentiation. The expression of myogenesis-related genes during myoblast differentiation was significantly decreased when Flnc was silenced (Figure 6A). qPCR confirmed that the mRNA expression of myogenic genes and Wnt signaling pathway target genes was significantly downregulated (Figure 6B). GO functional annotation analysis of the differentially expressed genes (DEGs) identified genes involved in muscle cell regulation and development, muscle tissue morphogenesis, and muscle fiber development (Figure 6C). KEGG pathway analysis indicated that the DEGs were clearly involved in the MAPK and Wnt signaling pathways, and the Wnt signaling pathway is closely involved in muscle cell differentiation (Figure 6D). 19 To detect the regulatory effect of Flnc on Wnt signaling, we determined the expression of Dvl2 and β-catenin, the key proteins of the Wnt pathway, 20 and found that Dvl2 abundance decreased in Flnc-silenced cells (Figures 6E and 6F). Moreover, nuclear translocation of β-catenin was reduced significantly after Flnc silencing (Figures 6G and 6H). Subsequently, LiCl, which activates the Wnt/β-catenin pathway, 21 was used to explore the effect of Flnc on Wnt/β-catenin signaling. The results showed that the expression of Axin1 was reduced after LiCl treatment, although Axin1 still accumulated in Flnc-silenced cells treated with LiCl (Figure 7A). Nuclear β-catenin protein increased dramatically in LiCl-treated cells, but in Flnc knockdown cells nuclear β-catenin expression was lower than in control cells (Figure 7B). Upon addition of Wnt3a, another activator of the Wnt signaling pathway, 22 Wnt signaling remained inhibited in Flnc-silenced cells (Figures 7C and 7D). 1-Azakenpaullone (1-AKP) can activate Wnt/β-catenin signaling independently of Dvl2. 23 The results showed that MyH3 was increased in Flnc-silenced cells treated with 1-AKP compared with untreated Flnc-silenced cells (Figures 7E and 7F).
Western blot analysis indicated that the protein levels of Axin1 and GSK3β were reduced in Flnc-silenced cells after incubation with 1-AKP (Figures 7G and 7H). This suggests that the regulation of Wnt signaling by Flnc requires the mediation of Dvl2. Dvl2 complementation can rescue myogenesis damaged by Flnc silencing The dishevelled binding antagonist of β-catenin 1 (Dact1) is known to co-regulate Wnt signaling by antagonizing Dvl2 during mammalian development. 24 Here, we found that Dact1 silencing significantly increased Dvl2 and MyH3 expression in chicken myoblasts (Figures 8A and 8B). Although Dact1 silencing increased Dvl2 expression, Dvl2 was inhibited when both Flnc and Dact1 were silenced, compared with Dact1-silenced cells (Figure 8C). MyH3 immunofluorescence showed that Dact1 silencing could rescue myotubes in Flnc-silenced cells (Figures 8D and 8E). Thus, Dvl2 complementation can rescue the normal phenotype in myotube injury induced by Flnc silencing. These results suggest that Flnc regulates Wnt/β-catenin signaling via mediation by Dvl2. Furthermore, we found that transfection of Dvl2-Flag significantly increased LC3 puncta in DF1 cells. After co-transfection with Dvl2-Flag and Flnc-HA, LC3 fluorescence intensity was significantly reduced (Figures 10A and 10B). Treating Flnc-silenced and control cells with the autophagy inducer Rap and the autophagy inhibitor 3MA showed that GFP-LC3+ puncta were significantly increased in Flnc-silenced cells treated with Rap. However, there was no significant difference between 3MA-treated Flnc-silenced cells and the controls (Figures 10C and 10D). Furthermore, when Flnc-silenced myoblasts and control cells were treated with 3MA, Dvl2 increased in the Flnc-silenced myoblasts (Figure 10E). Immunoprecipitation experiments were performed to detect the interaction between Flnc-HA and Dvl2-Flag (Figure 10F), and the immunoprecipitation analysis showed that endogenous Flnc interacts with Dvl2 (Figures 10G and S7A). Moreover, we found that the ubiquitination of Dvl2 was significantly increased in Flnc-silenced cells (Figures 10H and 10I), suggesting that Flnc prevents Dvl2 ubiquitination and thereby inhibits autophagosome formation. These results indicate that Flnc stabilizes the Dvl2-mediated Wnt/β-catenin signaling pathway to antagonize autophagy. Flnc silencing-induced autophagy mediates muscle atrophy by mitochondrial clearance Using electron microscopy, we observed increased numbers of autophagosomes together with large numbers of mitochondria after Flnc silencing (Figure 9G). We then investigated whether mitochondrial autophagy (mitophagy) is involved in muscle atrophy induced by Flnc silencing, and found that the mitochondrial membrane potential decreased 24 h after Flnc silencing (Figure 11A). In addition, oxygen consumption and ATP production were dramatically increased in Flnc-silenced cells compared with controls (Figures 11B and 11C). We further assessed the role of Flnc in mitochondrial dynamics: Flnc silencing increased the expression of the mitochondrial fission protein dynamin-1-like protein (Figures 11D and 11E). We also investigated whether autophagy and mitochondrial fragmentation are related to selective mitochondrial degradation. Our results indicated that Flnc silencing markedly promoted lysosome-mitochondria colocalization compared with controls, and mitophagy was also increased in cells damaged by dexamethasone treatment (Figure 11F).
This suggests that muscle atrophy induced by Flnc silencing may be mediated by mitophagy. DISCUSSION Skeletal muscle atrophy is a common complication of many chronic diseases, including cancer, diabetes, chronic heart failure, and cystic fibrosis. 26,27 Thus, identifying new mechanisms that regulate skeletal muscle growth and development may generate potent targeted therapies to prevent myopathy. Flnc is a member of the filamin family, which plays a key role in cell signaling, transcription, organ development, and cytoskeleton formation. 28 Homozygous mutations of filamin A and B lead to aneurysms, cardiac defects, and aberrant development of the brain, cardiovascular system, and skeleton. 29,30 Like the other members of the filamin family, Flnc also has important functions. Studies have shown that mutations in the N-terminal actin-binding region of Flnc can result in distal myopathy. 31 Chen et al. reported that mutations in the Flnc gene result in myofibrillar myopathy with lower motor neuron syndrome. 32 In addition, Flnc is a highly dynamic protein involved in the rapid repair of myofibrillar microdamage. 33 In this study, we found that Flnc plays a vital role in promoting the fusion of muscle cells and safeguarding muscle mass, strengthening the hypothesis that Flnc participates in muscle development and helping to identify the likely cause of the myofibrillar myopathies associated with Flnc mutation. Muscular dystrophy is the most common skeletal muscle disease and has a severe impact on human health and animal production. We found that Flnc silencing significantly reduced skeletal muscle mass and muscle fiber size, while overexpression increased muscle fiber hypertrophy. This is consistent with the study of Dalkilic and Kunkel, who identified Flnc as a muscular dystrophy gene. 10 Beatham et al. found that Flnc can interact with the KY protein in muscular dystrophy and is distributed abnormally in KY-deficient mouse muscle fibers. 34 These findings provide a better understanding of the pathological mechanisms of Flnc-associated myofibrillar myopathies. While Flnc is vital for maintaining the structural integrity of cardiac and skeletal muscle, its role in regulating signaling pathways has not been reported previously. We found that Flnc interacted with Dvl2 to activate the Wnt/β-catenin signaling pathway. The Wnt/β-catenin signaling pathway is known to be essential for embryonic development, adult tissue homeostasis, and skeletal muscle regeneration. 35 In addition, activation of the Wnt/β-catenin signaling pathway enhances the expression of myogenic factor 5 (MYF5), a basic helix-loop-helix myogenic determination factor, which promotes myogenic differentiation and myotube formation. 36 Dvl2 is a vital component of the Wnt signaling pathway, and we found that Flnc requires Dvl2 to regulate the Wnt/β-catenin pathway. Dact1 is an inhibitor of the Wnt signaling pathway, and suppressing Dact1 upregulates Dvl2 expression, activating the Wnt/β-catenin signaling pathway. 37 Our study showed that the expression of MyH3 and Dvl2 in Flnc-silenced cells was rescued by silencing Dact1. In brief, Flnc regulates the Dvl2-mediated Wnt/β-catenin signaling pathway and controls myoblast differentiation and skeletal muscle development.
Since Dvl2 has a complex role in canonical Wnt signaling during skeletal muscle development, the effect of Flnc on other members of the Dishevelled family requires further study. Studies have shown that Dvl2 can shuttle between the nucleus and cytoplasm. 38,39 Under normal circumstances, Dvl2 is mainly located in the cytoplasm, with little expression in the nucleus. Itoh et al. found that treating cultured mammalian cells with Wnt3a resulted in the accumulation of endogenous Dvl2 protein in the nucleus. 40 This may be because nuclear localization of Dvl2 is required for its function in the Wnt/β-catenin signaling pathway. In our study, Dvl2-Flag immunostaining showed nuclear localization, which may be related to the Wnt/β-catenin signaling pathway. In addition, nuclear localization of Dvl2 may indirectly affect the stability of β-catenin by regulating the protein interactions that sequester β-catenin in the nucleus, thus preventing its cytoplasmic degradation. However, the regulatory mechanism of Dvl2 in the nucleus requires further study. Dishevelled is the hub of Wnt signaling and plays a decisive role in switching between Wnt signaling pathways. Studies have shown that ubiquitinated Dvl2 binds to p62 and then forms a complex with LC3 in autophagosomes, through which it is selectively degraded via the lysosomal pathway. 41 Sharma et al. found that Malin controls the Wnt signaling pathway by degrading Dvl2, enhancing K48- and K63-linked ubiquitination of Dvl2, which leads to its degradation via both the proteasome and autophagy. 42 In addition, the Wnt signaling pathway is activated by the inhibition of autophagy, suggesting that Dvl2 is closely involved, through its ubiquitination, in the correct formation of autophagosomes. Our results confirm the interaction between Flnc and Dvl2, which may be an important reason for the involvement of Flnc in autophagy regulation. The Wnt signaling antagonist Dapper1 increases the degradation of Dishevelled2 by promoting its ubiquitination and aggregation-induced autophagy. 43 We also confirmed that Flnc reduced ubiquitinated Dvl2 by stabilizing free Dvl2, thereby inhibiting autophagosome formation and antagonizing autophagy. Autophagy plays a significant role in skeletal muscle homeostasis and muscular dystrophy, and the integrity of skeletal muscle is damaged when the balance of autophagy is disrupted. 44 Excessive activation of autophagy aggravates skeletal muscle mass loss in the catabolic state. In X-linked myopathy with excessive autophagy, progressive sarcoplasmic accumulation of autophagosomes filled with non-degraded material leads to skeletal muscle atrophy and weakness. 45 Pagano et al. found that enhanced autophagy is a likely factor underlying age-related skeletal muscle atrophy. 46 Excessive autophagy is thus a crucial cause of skeletal muscle atrophy. Zhang et al. found that Islr controls the canonical Wnt signaling pathway by inhibiting Dvl2 ubiquitination and thus mediates skeletal muscle regeneration. 47 These results suggest that Dvl2 ubiquitination is a significant hub linking autophagy and the Wnt signaling pathway. Flnc has an important role in regulating muscle integrity by stabilizing Dvl2 to maintain the autophagy system and the homeostasis of skeletal muscle. This is further evidence that Flnc supports muscle development and the structural maintenance of skeletal muscle. Skeletal muscle mitochondria are necessary to provide the energy required for muscle contraction.
In response to such activity, mitochondrial biogenesis is upregulated to satisfy the energy needs of muscle cells. 48 Mitophagy targets mitochondria for degradation in autophagosomes, thus contributing to the maintenance of mitochondrial quality. 49 Troncoso et al. found that dexamethasone-induced autophagy mediates muscle atrophy through mitochondrial clearance. 50 Kang and Li found that overexpression of PGC-1α could suppress the activation of mitophagy in atrophying muscle induced by immobilization. 51 Mitophagy can accelerate mitochondrial fragmentation, which exacerbates mitochondrial degradation in lysosomes and promotes muscle atrophy. 52 We also found that Flnc silencing significantly increased mitochondrial fragmentation, mitochondrial activity, and mitophagy, suggesting that mitophagy is activated by Flnc silencing. In conclusion, Flnc plays crucial roles in myogenesis and the maintenance of muscle mass (Figure 12). In addition, the myotube damage and myofiber defects observed in our in vitro and in vivo models suggest a novel function for Flnc in maintaining myofiber structure. At the molecular level, we found that Flnc maintained normal muscle development by inhibiting autophagy and protecting Dvl2 from ubiquitination. Since autophagy is involved in a variety of pathological processes, a better understanding of the regulatory role of Flnc in autophagy is important for the treatment of skeletal muscle diseases and other related diseases. Animals Fast-growing chickens (Ross 308 broilers) and slow-growing chickens (White leghorn layers) were used in this experiment. The experimental chickens were provided by the poultry breeding farm at Sichuan Agricultural University (Ya'an, Sichuan, China). The experimental procedures were approved by the Animal Ethics Committee of Sichuan Agricultural University (approval no. 201810200710), and all methods followed the relevant guidelines and regulations. Cell culture Fetal myoblasts were isolated from 11-day-old embryonic Ross 308 broiler muscles. 53 Single cells were released from the muscle by collagenase and trypsin digestion, and myogenic cells were further enriched by Percoll density centrifugation, as described in previous publications. The cells were grown on 2% gelatin-coated culture plates in our standard myoblast culture medium containing Dulbecco's modified Eagle medium (Sigma, St. Louis, MO, USA), 10% fetal bovine serum (Gibco, Grand Island, NY, USA), 5% chicken embryo extract, and 0.5% penicillin and streptomycin (Solarbio, Beijing, China) at 37°C under 5% CO2 in air. When the cells reached 80% confluence, the complete medium was replaced with differentiation medium containing 2% horse serum (Gibco) to induce myotube differentiation. RNA isolation and real-time qPCR The cells were washed twice with PBS, TRIzol reagent (Takara, Tokyo, Japan) was added to each culture dish, and the cells were scraped and collected in a 1.5 mL Eppendorf tube. The RNA extraction protocol was based on previous reports. Approximately 1 μg of RNA was reverse transcribed, and cDNA was synthesized using a Takara reverse transcription kit (Takara, Tokyo, Japan) according to the manufacturer's instructions. qPCR was performed on a Bio-Rad CFX Connect Real-Time System using a SYBR Green kit (Takara). All PCR reactions were run in triplicate. PCR amplification was performed as follows: 95°C for 3 min, followed by 40 cycles of 95°C for 10 s, 60°C for 20 s, and 72°C for 20 s, using a Bio-Rad sequence detector.
The mean normalized cycle threshold (ΔCt) was used for the statistical analysis of the real-time PCR results. 54 All amplicon primers were designed using the NCBI Primer Design Center. The primers used in the study are listed in Table S1. Treatment protocols and antibodies To verify the activity of Wnt/β-catenin signaling, Flnc-silenced and control cells were treated with 10 mM LiCl (Sigma) and 200 ng. Western blot and immunoprecipitation assays Total cellular protein and nuclear protein were extracted using a kit from BestBio (Shanghai, China) according to the manufacturer's instructions. Protein concentration was determined using a bicinchoninic acid assay kit (BestBio), and 5× loading buffer was added to the lysates, which were then denatured in boiling water at 100°C for 10 min. A total of 20 μg of protein was electrophoresed on an SDS-PAGE gel and then transferred to a polyvinylidene fluoride (PVDF) membrane. The PVDF membrane was blocked in 5% skimmed milk at room temperature for 2 h and then incubated with a primary antibody at 4°C overnight. The next day, the PVDF membrane was washed with Tris-buffered saline with Tween (TBST) and incubated with HRP-labeled secondary antibodies at room temperature for 1 h. After washing with TBST, the relevant proteins were visualized using an enhanced chemiluminescence reagent. The captured images were analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA). For the immunoprecipitation assay, the cells were lysed with IP lysis solution and the total proteins were immunoprecipitated with protein G beads. Immunocomplexes were washed three times with IP lysis buffer, electrophoresed on an 8% SDS-PAGE gel, and western blotted with the appropriate antibodies. Immunofluorescence and confocal microscopy Cells were plated on glass cover slides in culture medium, washed with PBS after the culture medium was removed, and fixed with 4% paraformaldehyde for 10 min. The cells were then washed three times with PBS and permeabilized with 0.1% Triton X-100 for 20 min. After washing with PBS, the cells were incubated with a primary antibody at 4°C. After primary antibody incubation and washing, the cells were incubated with fluorescent secondary antibodies at room temperature for 1 h in the dark. The cells were then washed three times with TBST, and the fluorescence intensity was observed using an LSM 510 confocal microscope (Zeiss, Milan, Italy). Lentivirus production and transduction The lentivirus-mediated Flnc overexpression vector (pWPXL-Flnc), the lentivirus-mediated Flnc knockdown vector (pLKO-Flnc), and the lentivirus-mediated control vectors were synthesized by RiboBio Biotechnology (RiboBio, Guangzhou, China). Chicks (1-day-old Ross 308 broilers) were infected with lentivirus (pWPXL-Flnc, pWPXL control, pLKO-Flnc, and pLKO-control; n = 15) by direct injection into the chest muscle. After 9 days of feeding, the breast muscles were removed, and a standard blade was used to take a 4 mm section according to the designated anatomical marks. Protein and RNA were extracted for subsequent western blotting and qPCR analyses. Histological analysis Chicken chest muscle tissue was excised using surgical blades and fixed with 4% paraformaldehyde. The fixed tissues were embedded in paraffin, sectioned, and stained with hematoxylin and eosin, and the images were analyzed.
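The qPCR section above normalizes expression by the mean ΔCt; the usual downstream step is the 2^(-ΔΔCt) calculation of relative expression (the Livak method). The sketch below illustrates that calculation only; the Ct values and the choice of GAPDH as the reference gene are invented for the example and are not taken from the paper.

    # Sketch of relative expression by the 2^(-delta delta Ct) method.
    # All Ct values below are invented for illustration.
    def relative_expression(ct_target_sample, ct_ref_sample,
                            ct_target_control, ct_ref_control):
        """Fold change of the target gene in a sample relative to the control group."""
        delta_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
        delta_ct_control = ct_target_control - ct_ref_control
        delta_delta_ct = delta_ct_sample - delta_ct_control   # compare with control group
        return 2 ** (-delta_delta_ct)

    # Hypothetical Flnc Ct values in myotubes vs. myoblasts, normalized to GAPDH.
    fold = relative_expression(ct_target_sample=22.1, ct_ref_sample=18.0,
                               ct_target_control=25.3, ct_ref_control=18.2)
    print(f"Relative Flnc expression: {fold:.2f}-fold")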
Mitochondrial transmembrane potential and ROS production The cells were loaded with 200 nM tetramethylrhodamine (Sigma) or 25 nM dihydrorhodamine-123 (Invitrogen) for 30 min according to the manufacturers' instructions. Fluorescence was detected by flow cytometry on a FACScan system. RNA sequencing Total RNA was extracted from Flnc siRNA-treated cells, control siRNA-treated cells, and Ross 308 broiler and White leghorn layer chest muscle according to the experimental steps described above, and cDNA library construction, sequencing, and transcriptome data analysis were performed by Shanghai Personalbio (Shanghai, China). Sequencing and bioinformatics analyses were conducted in strict accordance with the company's standard operating procedures, which are available online (http://www.personalbio.cn/). Statistical analysis The data were statistically analyzed using the non-parametric Kruskal-Wallis ANOVA on ranks followed by Dunn's test or the Student-Newman-Keuls test. For comparison of the means of two groups, Student's t test or the non-parametric Mann-Whitney U test was used. Differences were considered significant when the p value was <0.05. All data are expressed as mean ± standard error of the mean (SEM).
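For readers who want to reproduce the style of comparison named in the statistical analysis paragraph above, the sketch below runs the same families of tests in SciPy. The numeric arrays are invented placeholders; only the choice of tests mirrors the text (Dunn's post hoc test is not part of SciPy itself; the scikit-posthocs package is one place it can be found).

    # Sketch of the two-group and multi-group comparisons described above.
    # Data arrays are invented placeholders (e.g., normalized protein levels, n = 3).
    import numpy as np
    from scipy import stats

    control = np.array([1.00, 0.95, 1.05])
    si_flnc = np.array([0.60, 0.55, 0.65])
    oe_flnc = np.array([1.40, 1.35, 1.50])

    # Two-group comparisons: parametric and non-parametric.
    t_stat, p_t = stats.ttest_ind(control, si_flnc)
    u_stat, p_u = stats.mannwhitneyu(control, si_flnc)

    # More than two groups: Kruskal-Wallis ANOVA on ranks.
    h_stat, p_kw = stats.kruskal(control, si_flnc, oe_flnc)

    print(f"t test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}, Kruskal-Wallis p = {p_kw:.3f}")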
Body Fluids-Derived Exosomes: Paving the Novel Road to Lung Cancer Diagnosis and Therapy. Lung cancer is a major human malignancy. At present, the lack of specific diagnostic markers for lung cancer restricts the early diagnosis and treatment of patients. Exosomes are spherical 30-100 nm microvesicles released by normal and cancer cells under both physiological and pathological circumstances. They carry various molecular cargos such as miRNAs, proteins, mRNA, DNA, and lipids; analysis of the molecular profiles of exosomes may therefore provide useful biomarkers for disease diagnosis. Exosomes can be transported by body fluids, and the molecules (miRNAs and proteins) detected in body fluid-derived exosomes may contribute to lung cancer diagnosis. In this review, we summarize typical molecules (miRNAs and proteins) in body fluid-derived exosomes to reveal potential biomarkers of lung cancer. In addition, the role and application of exosomes in the chemotherapy and radiotherapy of lung cancer patients are discussed.
President Barack Obama will visit Orlando this Thursday after the city fell victim to the deadliest mass shooting in U.S. history on Sunday. "On Thursday, the President will travel to Orlando, Florida, to pay his respects to victims' families, and to stand in solidarity with the community as they embark on their recovery. We will have more information on the President's trip in the coming days," said Josh Earnest, White House press secretary, in an email. Fifty people died in Orlando in the early morning hours of Sunday, June 12: 49 victims and the gunman, Omar Mateen, a 29-year-old Port St. Lucie native. Orlando Mayor Buddy Dyer described the massacre as "the most difficult day in the history of Orlando." "First of all, our hearts go out to the families of those who have been killed," Obama said. "Our prayers go to those who have been wounded. This is a devastating attack on all Americans. It is one that is particularly painful for the people of Orlando, but I think we all recognize that this could have happened anywhere in this country. And we feel enormous solidarity and grief on behalf of the families that have been affected." President Obama said that while a lot of reporting has been done on the topic of the shooting at Pulse nightclub, it's important to remember that the investigation is still in its preliminary stages. "The one thing that we can say is that this is being treated as a terrorist investigation," he said. "It appears that the shooter was inspired by various extremist information that was disseminated over the internet. All those materials are currently being searched, exploited so we will have a better sense of the pathway that the killer took in making the decision to launch this attack." Obama did confirm that Mateen had pledged allegiance to ISIL before the shooting, but said there has been no evidence suggesting Mateen was directly involved with ISIL. Mateen's ability to legally acquire two weapons in one week, after having been investigated by the FBI twice for potential terrorist ties, has galvanized proponents of stricter gun control laws. "This massacre is therefore a further reminder of how easy it is for someone to get their hands on a weapon that lets them shoot people in a school, or in a house of worship, or a movie theater or in a nightclub," he said. This story was originally published on June 13, 2016.
Over the last 48 hours or so, during which time I've been largely off the grid on jury duty, the story of wife-beating U.S. District Court Judge Mark Fuller has finally taken off in the corporate media, as well as among a number of the elected officials who would be responsible for impeaching the 2002 George W. Bush lifetime-appointee to the federal bench. I couldn't be happier to finally be playing catch-up on this story for a change, as calls for accountability for the federal judge from Alabama's Middle District have now become a "virtual chorus" over these last few days. The state's Governor, as well as both of Alabama's U.S. Senators and its entire Congressional delegation, save for one member (Rep. Mike Rogers), have now called for Fuller's resignation and/or impeachment. On Wednesday, MSNBC's All in with Chris Hayes finally devoted an entire segment to the outrageous case which we first reported back in early August, just after Fuller had been arrested for beating his wife bloody at the Ritz-Carlton in Atlanta. He was arrested by Atlanta Police following a 911 call from his second wife Kelli, during which she is heard being struck, asks for an ambulance, and implores the dispatcher: "Please help me. He's beating on me." Just weeks later, Atlanta prosecutors -- perhaps not knowing about almost identical allegations of abuse by Fuller's previous wife Lisa in 2012, in documents that were mysteriously sealed during their divorce proceedings -- allowed Fuller to enter a pre-trial diversion program, requiring a few weeks of domestic abuse counseling to avoid prosecution and, if successfully completed, have his entire arrest record expunged as if nothing ever happened. The plan -- which a senior Republican federal judge described last week as "a sweet deal...that will allow him to erase his criminal conviction for beating the crap out of his wife in a fancy hotel room while reeking with booze" -- would allow Fuller to return to his lifetime $200,000/year job where he sits in judgment of others, while leaving his next victim, whoever that might be, with little if any protection. Please go to The Brad Blog to read the rest of this article.
There is beauty in diversity in all areas of life including neurological diversity (Bella): A mixed method study into how new thoughts on neurodiversity are influencing psychotherapists' practice The term neurodiversity describes the idea that, throughout the human population, different brain developments and structures exist. Neuronal variances, such as autism, are not seen as disorders but as variations, different from the neurotypical brain. Instead of being considered ill and cure-worthy, neurodiverse people should be included and integrated into society. The aim of this study was to explore psychotherapists' knowledge of current trends in the understanding of autistic people, i.e. the Neurodiversity movement. It sought to identify the sources from which psychotherapists gained their knowledge of autism and whether terminology impacts on psychotherapists' approach to therapy. Data was collected using a mixed method online questionnaire and analysed using thematic analysis. The results showed that participants were acutely aware of the potential negative impact of language and terminology on the other, including shame and judgement. Participants explored the purpose and usefulness of labelling and how using client-preferred terminology was helpful. Most participants felt that further knowledge and training in neurodiversity would benefit their practice. Implications for psychotherapy practice and the profession are discussed, including identifying sources of knowledge and training on neurodiversity. The opportunities, and barriers, for psychotherapists to support the autistic community in positive self-understanding and self-advocacy are considered. Introduction It is estimated that 1% of people are autistic; that's over 77 million people in the world (O'Neill 2020). Neurodiversity might be the next step of diversity. The term neurodiversity was first established in the online autism community in the 1990s. It describes the idea that throughout the human population different brain developments and structures exist. Neuronal variances such as autism should not be seen as disorders, but as variations different from the neurotypical brain. Instead of being considered ill and cure-worthy, neurodiverse people should be included and integrated into society. The Neurodiversity movement challenges the medical model's interest in causation and cure, celebrating autism as an inseparable aspect of identity. Jody O'Neill, an Irish, autistic playwright says "... we've always relied on medical people to define autism, when the real experts, the ones in a position to define it, are the people who are actually autistic." (O'Neill 2020, p. 56). Williams describes therapy involving verbal language as a grind for autistics, causing information overload. The study aimed to determine awareness of the Neurodiversity movement in the psychotherapy profession, and the impact, if any, on practice with autistic clients. It is important to recognise that autistics may have non-typical communication, behavioural and social styles; however, this was not focused upon in this study. I am a mother of 3 boys who each received an autism diagnosis in childhood. Although there have been challenging times, as with any child, I always found the frequently used expression "suffering from autism" disconcerting.
I have described them as "high-functioning" (although I now know to avoid that term) when trying to alleviate the concerns of the 'other', who usually express sympathy when they hear my children are autistic, as if they were dying. I have an autistic niece who has challenging behaviours that make family life difficult, and thus I know there is no blueprint to understanding autism. In my ongoing journey of educating myself on autism to support my children if/as required, I only recently heard of the Neurodiversity movement. Having discovered its existence, I was intrigued as to the implications for working psychotherapeutically with neurodiverse clients. Empathy, integral to the therapeutic relationship, is defined as... our capacity to move away from ourselves as the locus of our reference for understanding emotion and sensation and see these phenomena as they might be experienced by another person, given their context and the information coming to them from their senses and cognition. It involves the capacity to read another's mind and put oneself in his or her shoes. (McCluskey 2005, p. 50) For autistic clients, part of their human experience will be living in a world where the majority are neurologically different. Awareness of this is crucial for empathy with autistic clients. I have personal experience with autism, but not all psychotherapists are so fortunate. Neurological difference encompasses diagnoses including dyspraxia, dyslexia, and others; for this study I focused on autism. Literature review The Neurodiversity movement seeks to provide a culture wherein autistic people feel pride in a minority-group identity and provide mutual support in self-advocacy as a community (Jaarsma and Welin 2011). The concept of neurodiversity regards atypical neurological development as normal human difference. The neurodiversity assertion contains at least two different aspects. The first is that autism, among other neurological conditions, is first and foremost a natural variation. The second is about conferring rights and value on the neurodiverse condition, and demanding recognition and acceptance. Jaarsma and Welin advise that, just as the LGBTQ+ community once had to live in a homophobic society, autistics may be living in an autism-incompatible or even autism-phobic society. They identify a need for discourse about the detrimental effects of this on the well-being of autistics. In the case of high-functioning autistics, society should not stigmatize these people as being disabled or having a disorder, or use other deficit-based language. They highlight that the use of terms is a moral issue and that group-specific rights for autistic people are needed to ensure autistic culture is treated with genuine equality. They believe it is incorrect to incorporate Asperger's and high-functioning autistics into the wide diagnostic category of Autism Spectrum Disorder, as the American Psychiatric Association proposes (DSM-V 2013). Some autistics do not benefit from a psychiatric defect-based diagnosis and may be harmed by it, because of the disrespect it displays for their natural way of being. Jaarsma and Welin also advise that people with low-functioning autism may be extremely vulnerable, and their condition justifies the qualification "disability". Thus, it is reasonable to include other categories of autism in psychiatric diagnostics. They suggest that the narrow conception of the neurodiversity claim should be accepted but the broader claim should not.
Furthermore, the degree of social construction of their disability must be considered. I am interested in how autistics may be disabled by trying to fit into a neurotypical world. This is aligned with the social model of disability, which purports that people with any form of accredited impairment are disabled by an unjust and uncaring society (Barnes 2020). Rather than wanting to change the individual, we should accommodate and support them in ways that enable them to live positively (Lawson 2011). Dainius Puras, the United Nations' Special Rapporteur, argues that for decades mental health services have been governed by a reductionist biomedical paradigm that has contributed to the exclusion, neglect, coercion, and abuse of people with autism and those who deviate from prevailing cultural, social and political norms (2017, as cited in Kinderman 2019). Kenny et al. elicited the views and preferences of UK autism community members-autistic people, parents and their broader support network-as to what terms they use to describe autism. The term 'autistic' was endorsed by a large percentage of autistic adults and family members/friends, but by considerably fewer professionals. "Person with autism" was endorsed by almost half of professionals but by fewer autistic adults and parents. The findings demonstrate that there is no single way of describing autism that is universally accepted. Use of language is an important variable within the therapeutic relationship (Kearney 2010). Any language spoken cannot be separated from its cultural context; changes in language reflect the evolution of the people of the culture it belongs to. Language can mean either inclusion or exclusion. It can be the reason why barriers between people are formed (Chikwiri 2019). AsIAm, Ireland's national autism charity and advocacy organisation, operates as a platform for people affected by autism to share their stories and views; it provides a strong voice for their concerns. AsIAm found that many indicated a preference for identity-first language when talking about themselves. That is, they prefer "autistic" instead of "I have" or "I'm living with" autism, because they see autism as an integral part of their personal identities and as difference, rather than disability. They recommend that terms such as "disorder" and "high functioning/low functioning" should be avoided. Terms such as "normal" when referring to non-autistic people imply that those who are on the autism spectrum are somehow abnormal or defective; an alternative is "neurotypical" (AsIAm 2019). Throughout the literature I found the terms 'autists' and 'autistics' used, and I replicated these in this study. Autistics are more vulnerable to depression because of the mental and emotional exhaustion of socialising, of coping with change and with the organisational aspects of life, and because of a pessimistic outlook. An additional cause of depression can be a diagnosis of Asperger's as a disability and a self-perception of "being irreparably defective and eternally socially stupid" (Attwood 2016, p. 11). Cassidy et al. noted increased rates of suicidal ideation in adults with Asperger's, and depression as an important potential risk factor for such suicidality. They note that adults with Asperger's often have other risk factors for secondary depression, for example, social isolation or exclusion, and unemployment.
Research indicates that identifying positively with an autistic identity mediates the relationship between self-esteem and mental health difficulties; this suggests that personal acceptance of autism as part of one's identity could protect against depression and anxiety. Despite this, many autistics frequently report "masking" or "camouflaging", using strategies to act non-autistic or "neurotypical" to fit into the non-autistic world. The effort required by autistics to camouflage may be detrimental to mental health. Walker's web-article, on the benefits of building a neurodiversity-positive society, is inspiring and captures the value of inclusion: Neurodiversity is an invaluable creative resource, a problem-solving resource. The greater the diversity of the pool of available minds, the greater the diversity of perspectives, talents, and ways of thinking-and thus, the greater the probability of generating an original insight, solution, or creative contribution. And in any given sphere of society, we only get the benefit of the contributions of those individuals who are empowered to participate... without being forced to suppress their differences. The core of minority-group claims is often that there is something special to be protected, such as a certain culture at risk of being swallowed by the majority culture (Jaarsma and Welin 2011). The authors suggest that if autists have a vulnerable status as a group, this confers an obligation on the neurotypical majority. This study aimed to understand if we, as psychotherapists, have the capacity to fulfil that obligation. Aim and objectives The research aim was to explore how the Neurodiversity movement has impacted upon the practice of psychotherapy. The objectives sought to establish: where psychotherapists gained their knowledge related to autistic people; what psychotherapists' understanding is of the landscape of terminology concerned with autism; how terminology impacts on psychotherapists' approach to therapy; psychotherapists' levels of knowledge of current trends in understanding autistic people, i.e. the Neurodiversity movement; and how knowledge of the Neurodiversity movement will impact on their psychotherapy practice. Methodology I used a mixed-method study using an online questionnaire. The advantage of mixed-methods studies is that they recognize the value of iterating between that which can be counted and that which cannot, to generate richer insights about the phenomena of interest (Kaplan 2016). The use of an online questionnaire permitted ease of participation for psychotherapists across Ireland and, although not by design, allowed the research to continue during the COVID-19 lockdown. The questionnaire was semi-structured to: a) allow for the identification of patterns amongst the psychotherapy profession in Ireland via quantitative responses, and b) use qualitative open-ended questions and/or comment boxes to allow respondents to discuss the topic in their own words, free of limitations. This provided an opportunity to uncover the respondents' meanings and avoid the researcher imposing her own structures and meanings. As the research was focused on the use of terminology, no consistent terminology was used in the invitation to participate or in the questionnaire itself; this was to avoid leading the participants. Thus "autism", "autistic" and "neurodiverse" were used as alternatives in the questionnaire, and in the write-up of the findings.
The invitation and the questionnaire itself were designed to introduce participants to the broad concept of Neurodiversity, and to the consideration of neurodiverse differences not as disorders but as characteristics of a minority group that needs to be included and integrated into society. Participants The inclusion criteria were psychotherapists in general practice. There were 15 participants, who self-selected by responding to an invitation in the IACP (Irish Association for Counselling & Psychotherapy) E-news online publication, which is distributed to the entire IACP membership, from Student to Supervisor status. The exclusion criteria were psychotherapists who identified as working in specialist autism services, on the basis that they are required and/or likely to be informed about the latest trends as part of on-the-job experience and/or continuing professional development (CPD). No respondents met the exclusion criteria. Participants were asked to categorise their experience to determine if this influenced their knowledge of neurodiversity. The level of experience is outlined in Fig. 1. Procedure To promote confidentiality, questionnaires were completed through the specialist survey website 'SurveyMonkey'. Quantitative data was analysed using Excel to examine correlations between variables, e.g. years in practice and identified need for additional training in autism. Qualitative data was analysed using thematic analysis approaches to promote the identification of semantic patterns in the text. Thematic analysis allows for the emergence of key themes that most appropriately inform the research question, whilst still allowing for the richness of respondents' experiences to be heard (Braun and Clarke 2006). Reflexivity I was aware that my own values and experiences, especially as a mother of three autistic persons, could bias the selection of literature and influence the interpretation of data (McLeod 2015). A pilot group was used to identify, evaluate, and eliminate bias in the design of the questionnaire. Thematic analysis, taking a quasi-phenomenological/hermeneutic approach, was used to give a voice to respondents' experiences and meanings rather than to my own interpretation of the data. Ethical considerations The dignity, rights and welfare of the research participants were considered to ensure protection from harm. This included ongoing research supervision as part of this process. I followed McLeod's ethical framework, giving attention to his question (p. 176) "What are the broader moral implications of the study, in terms of the ways that results will be used?". My concern was about a negative impact on the relationship between the autistic community and the psychotherapy profession, for example, via any implication that psychotherapists are not sufficiently informed to work effectively with autistic clients and/or cannot empathise with the needs and preferences of the neuro-diverse community. Findings and discussion I will present the results firstly by describing some of the relevant results of the quantitative data and secondly the results of the qualitative text, using the words of the participants in italics for the key themes that emerged from the analysis. Quantitative analysis Eight participants (53%) had experience of working with autistic clients (Fig. 2). There was a correlation between experience of working with autistic clients and level of experience in psychotherapy.
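Purely as an illustration of the kind of correlation check described in the Procedure section above (the study itself used Excel and did not report coefficients), the sketch below shows one way a relationship between two coded survey variables could be examined with a rank correlation. The response codes and values are invented, not the study's data.

    # Hypothetical sketch: rank correlation between level of experience and
    # having worked with autistic clients. All values below are invented.
    from scipy import stats

    # 1 = student ... 4 = supervisor (illustrative coding of experience bands)
    experience_level = [1, 2, 2, 3, 3, 3, 4, 4, 1, 2, 3, 4, 2, 3, 4]
    # 1 = has worked with autistic clients, 0 = has not (15 respondents)
    worked_with_autistic_clients = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]

    rho, p_value = stats.spearmanr(experience_level, worked_with_autistic_clients)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")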
This correlation is not surprising given the potential for depression and suicidality amongst autistics (Attwood 2016), which may contribute to autistics seeking therapy. Only two participants (13%) indicated they gained knowledge of autism through formal college or Continuing Professional Development (CPD), and both participants were at Supervisor level. This indicates that extensive experience in psychotherapy practice is gained before formal training on autism occurs. Twelve participants (80%) indicated their knowledge of autism came through media sources and/or personal and professional relationships rather than formal channels. Regarding the Neurodiversity movement, 53% of participants had never heard of it (Fig. 3). There was no correlation between knowledge of the Neurodiversity movement and level of experience in psychotherapy. Three participants who had never heard of the Neurodiversity movement had experience with autistic clients, one of whom had significant experience with autistic clients. 87% of participants believed that further knowledge and training on autism would benefit their practice, and this was not differentiated by level of experience. As a result of participating in this study, 33% of participants felt they would investigate the Neurodiversity movement. When asked where they would access such additional To warm participants up to a discussion on terminology, they were asked to complete an exercise on the appropriateness of different terms (Fig. 4). Twelve participants (80%) selected "Autism as a developmental disability" as an inappropriate term, consistent with the aims of the Neurodiversity movement. Thirteen participants (87%) selected "Autistic person", the term preferred by the autistic community (AsIAm 2019), as an inappropriate term, which evidences a disconnection between psychotherapists and the autistic community. Qualitative analysis From the analysis of the qualitative text, five main themes emerged: 1. Language can equally throw up barriers between humans (MB); 2. It's a minefield that needs to be walked through carefully (Samantha); 3. Terminology has a purpose (Jennifer); 4. Look at a client's issues in surviving and being as happy as possible in this world (John); 5. Consider diversity of how people are... including neurologically (Bella). These five themes are used to collate the participants' words and understanding. The participants chose their name/identifier for this research, which may or may not be their own. Language can equally throw up barriers between humans (MB) Participants were acutely aware of the potential negative impact of language on another, consistent with Chikwiri, and themes of shame and judgement arose;... language can be loaded with judgement-real or perceived-and language can be emotionally triggering (Bella). The importance of language in the therapeutic relationship was recognised, consistent with Kearney. Bella states the importance of communicating respect, positive regard, empathy and understanding via language and checking with clients what is ok and not ok. MB felt that using normalising language with clients would be HELPFUL TO THEM. Some participants were aligned with the preferences of the autism community in terms of identity-first language and avoidance of terms such as "normal" (AsIAm 2019). Bella felt it wasn't helpful or fair to label people as 'normal' or having a 'disorder'/... being abnormal.
It's a minefield that needs to be walked through carefully (Samantha) In terms of the terminology that may feature in working with autistic clients, Samantha described it as a minefield. Participants felt terminology was important to avoid possible shaming of clients and to avoid judgement (Miranda). Six participants used the term 'label' in some form. Padraigin stated teenagers, in particular with high functioning Autism, find being labeled very humiliating. Participants were keenly aware of the role terminology and language can play in stigmatizing a minority group. Aidan said some terminology is stigmatising, offensive and demoralising to others and particularly in relation to the autism community. Furthermore, MB felt the word autism... still carries a lot of stigma in common language. Participants recognised how the Neurodiversity movement is striving to address stigma, though with certain drawbacks, as MB describes: 'Neurodiverse' seems to be a new word that attempts to overcome existing prejudice, however in my eyes risks simply adding more exclusive, unnecessary jargon to professional's vocabulary (MB). I believe this demonstrates that, within the psychotherapy profession in Ireland, there is no universally accepted single way of describing autism. This is consistent with the findings of Kenny et al. from the broader autism community in the UK, including professionals and the broader support network. Three participants believed that they would be more thoughtful in the language they use in terms of sensitivity, labels, and awareness, because of participating in the study. The term "developmental disability" was offered in the inappropriate terms exercise, yet only one participant referenced disability in their open text response. Jennifer felt the Neurodiversity movement acknowledges the uniqueness of the individual and their particular ways instead of making it into a disability... or something that needs fixing. Terminology has a purpose (Jennifer) Participants spoke of using terminology that the client preferred, such as asking them to explain their choice, to get an insight into their terminology and relate to them in their language (Smyth), and to be respectful (Stanford). This is consistent with the preferences of the autistic community (AsIAm 2019). Terminology can help clients feel understood and accepted by their therapist (PMC) and feel that they are not alone, and can help in understanding their own behaviours and feelings (Jennifer). Terminology can help them understand themselves and their therapist to understand them better, faster (MB). MB supports Jaarsma and Welin's argument that, while lower-functioning autism is also part of natural variation, categorization as a disability may be necessary for autistics to get access to necessary supports and resources such as specific information, services, and support groups. Look at a client's issues in surviving and being as happy as possible in this world (John) Although some participants had not heard of the Neurodiversity movement, their view of their role as therapists was to help autistic clients live positive lives (Jennifer, John), aligned with Lawson, and to assist with overcoming the degree of social construction of the disability of being autistic (Jaarsma and Welin 2011).
Jennifer described work with autistic clients as to help them to find ways of living the life that feels right for them rather than trying to get them to conform to societal norms. My overall sense was that participants were instinctively neurodiversity-positive. Anthony identified that for autistic clients, autism is only part of their human experience, and that labelling someone as autistic neglects all the other characteristics of that person. He was concerned that the label would create stuckness for the client (and possibly the therapist) in what they can achieve and what we can do to help that client in a way that works for them. My interpretation of this strong belief is that a greater understanding of neurodiversity, including the challenges of being neuro-diverse in a neuro-typical world, would be helpful to both psychotherapists and their clients. Consider diversity of how people are... including neurologically (Bella) Participants seemed generally aligned with the Neurodiversity movement. MB agreed with the intention of the movement to avoid stigmatising, medicalising, 'othering' of individuals not fitting societal pre-conceived notions of 'normal'. Bella stated there is beauty in diversity in all areas of life including neurological diversity, which I experienced as an impassioned statement. Jennifer was excited by the movement but stated as a humanistic psychotherapist I continue to be led by the client. This leaves me with a concern that clients and counsellors may not be aware that the client may be disabled by society by trying to fit into a neuro-typical world, under the social model of disability (Barnes 2020). I think this may implicitly inhibit the client in determining, and communicating, how they need to lead. Participants did not identify any role that psychotherapists could play in promoting a culture wherein autistics feel pride in a minority-group identity (Jaarsma and Welin 2011). I see a role for psychotherapists in helping clients identify positively with an autistic identity. This could combat the mental health difficulties caused by an autistic diagnosis (Attwood 2016) and discourage the "camouflaging" and lack of self-acceptance that hinders autistics from "coming out". Carrie noted that clients often are parents of autistic children, who struggle with worries about their children. Thus, psychotherapists can contribute to a society that embraces neurodiversity, beyond the autistic clients themselves. Anthony summed up the overall sentiment of participants by stating Every human is unique regardless of labels applied onto them. Conclusion The findings reveal that, aligned with wider society, there are opportunities for psychotherapists to enhance their knowledge of current trends in the autistic community, to promote individual self-acceptance with their autistic clients, and to advocate for autistics as a minority group. This may help combat the mental health difficulties that bring autistic clients to therapy. Further knowledge and training on autism would benefit psychotherapists' practice, and a clear source should be established. Accredited psychotherapy organisations could provide such training in the form of online CPD courses. Given the implications of language and terminology, a glossary of terms could be developed to guide psychotherapists at all levels of experience. Limitations This study purposefully took a broad approach to Neurodiversity and the awareness and views of psychotherapists on the Neurodiversity movement.
This does not consider the complexities of autism and the challenges these may bring to the therapeutic relationship.

Recommendations for further research

Further research on the use of neurodiversity-positive terminology, and its impact on the therapeutic relationship, from the client's perspective would be worthwhile. Using the principles of the Neurodiversity movement in psycho-educating autistic clients, and the impact on therapy outcomes, could be researched, particularly where the client had no prior knowledge of the movement. Other recommendations include an examination of the neuro-compatibility of therapist and client and how that influences therapeutic outcomes.

Reflexive statement

My introduction to the world of autism, over eleven years ago, with the first autism diagnosis of one of my children, left me shocked, bereft and immediately shamed. I felt I had borne children that had something wrong with them, and this was corroborated by society. A new door had opened in my life into routes peppered with struggle and stigma. While I had the resilience to navigate these routes successfully, the positive outcomes always felt like a "get out of jail free" card rather than a winning lottery ticket. I sensed my role was to bolster my children in navigating the world that was available to them and to champion their needs with schools, organisations, and family. It is a surprise and disappointment to me, as a mother and aunt of autistic children, and now as a psychotherapist, that I have only recently heard of the Neurodiversity movement. I recognise that it is slowly gaining momentum. I believe that we are on the precipice of change for the neuro-diverse community, as the LGBTQ+ community were 40 years ago, and I take comfort in that. Nevertheless, I am still surprised when I see this neurological difference in my children manifest itself in crippling anxiety, particularly in those life experiences that are considered a "normal" part of growing up. This is largely because of trying to operate in a neurotypical world or, at a minimum, trying not to stand out as "different". For me, this confirms the need to recognise the support and resource needs of autistics that are not required by neurotypicals, simply because of their way of being. Thus, the debate between acceptance of normal human difference and the need to categorize those who cannot function adequately in society so that they can receive adequate support is unresolved. For psychotherapy, perhaps the goal is to be knowledgeable on both considerations in order to integrate both, depending on the needs of the client. Yalom (1989, p. 185) tells us that Even the most liberal system of psychiatric nomenclature does violence to the being of another. If we relate to people believing that we can categorize them, we will neither identify nor nurture the parts, the vital parts, of the other that transcend category. The enabling relationship always assumes that the other is never fully knowable. Thus, along with my humanistic integrative colleagues, I continue to be led by my client. With knowledge of the Neurodiversity movement, I feel I could have, and still can, advocate better for my children by positively embracing their neurological difference, rather than focusing on their impairment. This is my wish for my fellow psychotherapists and the autistic clients with whom they will most likely work.
As a helping profession, I believe we have a role to play with this special minority group by advocating for pride in their natural way of being and by accelerating the momentum of the Neurodiversity movement. To do this, and to strive for true empathy, we must try to inform ourselves and not expect the other to explain to us, using our terminology, what it is like to be born on the wrong planet (Hammerschmidt 2008).
Drivers of Patient Satisfaction and Effects of Demographics on the HCAHPS Survey
Catherine K. Madigan (under the direction of G. Rumay Alexander)
Although patient satisfaction has traditionally been a quality indicator measured by most hospitals, it has taken on greater importance in light of the recent inclusion of this metric as a component of Medicare's hospital inpatient Value-Based Purchasing (VBP) program. Acute-care hospitals are financially rewarded or penalized based on the quality of care that they provide to Medicare patients, with payments starting at 1.25% of hospitals' base operating diagnosis-related group (DRG) payments for Federal Fiscal Year (FFY) 2014 and increasing incrementally over the next three years. How patients respond to the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) overall rating question, in which they choose a number from 0 to 10 to "rate this hospital during your stay," can have tremendous ongoing financial implications. The objectives of this Doctor of Nursing Practice project were to: 1) analyze HCAHPS discharge data from 7/1/13 to 12/31/13 for inpatient acute care units at UNC Hospitals (UNCH) to identify the drivers of patient satisfaction specific to UNCH, and 2) explore the potential effects of demographic differences on perceptions and ratings of the care provided. Decision-tree analytics were used to identify which survey items are most influential in framing the patient's total experience and the effect of demographic differences on these scores. Based on the analysis, potential strategies are suggested to improve patient satisfaction scores for key drivers of patient satisfaction on the survey.
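As a rough illustration of the decision-tree approach described above (not the project's actual analysis code), the sketch below fits a regression tree to item-level HCAHPS scores and ranks items by importance; the file name and column names are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical extract of HCAHPS responses: one row per discharge,
# item-level composite scores plus the 0-10 overall hospital rating.
df = pd.read_csv("hcahps_discharges.csv")
items = ["nurse_communication", "doctor_communication", "responsiveness",
         "pain_management", "medication_communication", "cleanliness",
         "quietness", "discharge_information"]
X, y = df[items], df["overall_rating_0_10"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X_train, y_train)

# Items with the highest importance behave like "key drivers" of the overall rating.
drivers = sorted(zip(items, tree.feature_importances_), key=lambda t: -t[1])
for name, importance in drivers:
    print(f"{name:28s} {importance:.3f}")
print("held-out R^2:", round(tree.score(X_test, y_test), 3))
```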
The Criteria of Economic and Social Sustainability of an Existing Geothermal Power Plant for the Surrounding Community (case study of a geothermal power plant at Sukabumi, West Java). Recently, electricity consumption in Indonesia has continued to increase rapidly along with advances in technology, economic growth, and population growth. One potential source of electrical energy is geothermal energy, for which Indonesia has enormous potential. The aspects studied in this research are the economic and social dimensions of sustainable development, used as research variables; the existence of a geothermal power plant is expected to have a positive impact on the economic and social sustainability of the surrounding communities. The locus of this research is a geothermal power plant operating in Kalapanunggal sub-district, Sukabumi District, West Java. The method used is the Analytical Hierarchy Process (AHP), applied to obtain assessment criteria. From the AHP results, the economic criterion associated with the existing geothermal power plant is regional income from the production bonus, and the social criteria are the use of local labour and local products.
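For readers unfamiliar with AHP, here is a minimal sketch of how criteria weights and a consistency ratio are typically computed from a pairwise comparison matrix; the matrix values and criteria names below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale for three hypothetical
# sub-criteria (names are illustrative only).
criteria = ["regional income / production bonus", "local labour", "local products"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector: principal right eigenvector, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices
CR = CI / RI if RI else 0.0

for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"CR = {CR:.3f}  (judgements are usually considered acceptable below 0.10)")
```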
Honour medal of the National Police

Award history

Created by the Decree of 3 April 1903 at the request of Monsieur Émile Combes, Minister of the Interior, the medal was originally called the "Médaille d'honneur de la Police Municipale et Rurale" (English: "Honour medal of the rural and municipal police"). The decree of 17 November 1936 renamed it the "Médaille d'honneur de la Police française" (English: "French Police honour medal"). Finally, Decree No. 96-342 of 22 April 1996 gave the medal its current designation. The decree of 4 February 1905 allowed its award, upon application to the Governor-General of Algeria, to officers of the Police Municipale et Rurale posted to Algeria with at least 20 years of impeccable service. The award was extended in 1972 to administrative staff and senior officers of the National Police.

Award description

The Honour medal of the National Police, a design of engraver Marie Alexandre Coudray, is a circular silver medal, 27 mm in diameter. The obverse bears the relief image of the protecting Republic, in the form of a standing helmeted woman holding a sword and a shield, protecting a kneeling woman pulling a frightened child to her, behind them a tree. Along the right circumference is the semi-circular relief inscription "POLICE FRANÇAISE" (English: "FRENCH POLICE"). The reverse bears at its bottom a framed rectangular area destined to receive the name of the recipient and the year of the award. Along the upper circumference is the relief inscription "RÉPUBLIQUE FRANÇAISE" (English: "FRENCH REPUBLIC"), and at its centre, the inscription "MINISTÈRE DE L'INTÉRIEUR" (English: "INTERIOR MINISTRY"). The medal hangs from a 3 cm wide silk moiré tricolour ribbon with an 8 mm wide central blue stripe bordered by 6 mm wide white stripes and 5 mm red stripes at its edges. The ribbon's suspension loop is adorned with a crown composed of an olive branch and a sprig of oak with an opening on the right. The Honour medal of the rural and municipal police in Algeria included a clasp on the ribbon consisting of a star placed on a crescent of Islam. Today, the only ribbon device is a silver five-pointed star, worn when the medal is awarded in exceptional circumstances.
The accompanying photo you find here is of then 110-year-old Red Sox fan Kathryn Gemme doing her best to investigate the World Series trophy after the Sox won in '04 to determine whether it's actually the trophy. Or the bathroom. Or the dishwashing machine. Or President Taft. Anyway, Gemme died at the Nemasket Healthcare Center in Middleborough on Friday at the age of 112. She first went to Fenway Park as a saucy little 18-year-old vixen and still attended Red Sox games until she was 109. Gemme credited her long life and health to simple living. "I didn't drink, didn't smoke, I ate regularly. No fancy stuff," she once told The Enterprise of Brockton. Although her eyesight, hearing and mobility failed, she remained mentally sharp until the end, said Sharon Gosling, Nemasket's activity director. Gemme is survived by her daughter, four grandchildren, seven great grandchildren, and, of course, the 2004 Boston Red Sox.
Disturbance of kidney aldosterone sensitivity and the mechanism of its development during neurodystrophy. It has been established that neurodystrophy resulting from the sectioning of an animal's left sciatic nerve and treatment of its central stump with formalin leads to a prolonged disturbance of the water- and salt-excreting function of the kidneys. As a rule, this is expressed as oliguria and decreased excretion of sodium in the urine. The indicated changes are due to disturbance of both intra- and extrarenal mechanisms. Intrarenal mechanisms of oliguria basically involve an acute depression in filtration of primary urine, and the hyponatriuresis is due to a decrease in the sodium load of the nephron and to an increase in the tubular reabsorption of sodium. Extrarenal mechanisms implicated in the indicated changes in sodium reabsorption involve an increased concentration of mineralocorticoid hormones of the body. Further studies have shown that the hyponatriuresis which arises during neurodystrophy is related not only to the increased concentration of the above-mentioned hormones in the fluid media of the body, but also to a change in the kidneys' sensitivity to them.
Urbanisation: A brief episode in history. This paper paints an altogether broader canvas, building upon earlier debates in City concerning the decline of cities over the coming decades and the changed states of mind and understanding of what life is about that can be expected to unfold under these circumstances. There is already considerable preliminary experience of this, analysed in the paper, resulting from the 'economic shocks' of the past two decades and, on the positive side, from the experience of the Transition Movement and of well over 1000 mature intentional communities scattered across the Occidental world. Taking the long view, the paper analyses the growth and fall of civilisations and their cities as centres of the accumulation of power. The discussion then moves on to how modern civilisation came about, arriving at a state of Possessive Individualism through three centuries of social conflict and culminating in our technology-dominated, consumerist world of megacities. The role played in this by the exploitation of fossil fuels is analysed, and the way in which this could never have been sustained for long is made clear. The second half of the paper then presents scenarios of the coming decades, providing some detail of possible steps from our urbanised, globalised world to a return to considerably more localised economies organised as networks of villages and towns, as was the case almost everywhere in the world until well into the 20th century. Agriculture and the use of local resources may be expected once again to characterise the way in which societies function. This will also mean a return to local communities as the centres of most lives, and the paper discusses how this might unfold both logistically and in terms of the consciousness and outlook of the people. Models of Utopia are presented as a way to conceptualise how today's younger generation might come to terms with this changed world.
The 8-Bit Bureaucrat: Can Video Games Teach Us About Administrative Ethics? Politicians and public administrators have long played both prominent and supporting roles in works of fiction. The field of public administration has a tradition of using fictionalized depictions of administrative processes to understand better the relationship between theory and practice. Portrayals of public servants and government action in video games remain an understudied area, leaving gaps in our understanding related to digital media and bureaucratic characterization. Using Squire's framework of understanding video games as a designed experience, this study examines the elements of administrative ethics and ethical decision making within the video game Papers, Please. Future research will most certainly be required to connect the characterization of public service in digital media to public perception and the pedagogy of administrative functions. This research aims to catalyze exploration into this concept through illustrated applicability and salience.
Financing of Studies at Universities: Mapping out Lithuanian Solutions. This article discusses the shortcomings in the financing of studies at universities in Lithuania and the need for reforms in this area. These are determined both by inadequate levels of public financing in state universities and by the social and economic ineffectiveness of the currently applied financing methods. The article investigates the correspondence of the main types of public financing (payment for studies as well as various types of financial support available to students) to theoretical provisions on public goods, the social justice crisis behind financial assistance given to students, and how it contravenes prevailing world practices. The article is based on an analysis of the financing of studies in the European Union and other countries.
N-arylamine compounds are important substructures in natural products and industrial chemicals, such as pharmaceuticals, dyes, and agricultural products. N-arylamines are useful for screening for pharmaceutical and biological activity and in the preparation of commercial polymers. It would be advantageous to prepare N-arylamine compounds from arylating compounds such as aryl halides and/or aryl tosylates because aryl halides are generally inexpensive and readily available, while aryl tosylates are easily prepared from phenols. However, to date, methods of producing N-arylamines are inefficient or economically unattractive. Many known processes that generate an aryl-nitrogen bond must be performed under harsh reaction conditions, or must employ activated substrates which are sometimes not available. Examples of procedures that generate aryl amine compounds include nucleophilic substitution of aryl precursors and synthesis of aryl amines via copper-mediated Ullmann condensation reactions. The commercialization of metallocene polyolefin catalysts has led to widespread interest in the design and preparation of other catalysts and catalyst systems, particularly for use in economical gas and slurry phase processes. Anionic, multidentate heteroatom ligands have received attention in polyolefin catalysis. Notable classes of bidentate anionic ligands which form active polymerization catalysts include N—N− and N—O− ligand sets. Examples of these types of catalysts include amidopyridines and polyolefin catalysts based on hydroxyquinolines. U.S. Pat. No. 5,576,460 (the '460 patent) discloses two synthesis routes to preparing arylamine compounds. The first route includes reaction of a metal amide comprising a metal selected from the group consisting of tin, boron, zinc, magnesium, indium and silicon, with an aromatic compound comprising an activated substituent in the presence of a transition metal catalyst to form an arylamine. The second route utilizes an amine rather than a metal amide. The '460 patent teaches that this reaction be conducted at a temperature of less than about 120° C. and is drawn to the use of the arylamine as an intermediate in pharmaceutical and agricultural applications. U.S. Pat. No. 5,929,281 discloses the preparation of heterocyclic aromatic amines in the presence of a catalyst system comprising a palladium compound and a tertiary phosphine and the preparation of arylamines in the presence of a catalyst system comprising a palladium compound and a trialkylphosphine. U.S. Pat. No. 3,914,311 discloses a low temperature method of preparing an arylamine by the reaction of an amine with an aromatic compound having a displaceable activated substituent at temperatures as low as 25° C. in the presence of a nickel catalyst and a base. Other patents discussing N-arylamine compounds include U.S. Pat. Nos. 6,235,938 and 6,518,444, among others, as well as references such as Shen, Q., Shekhar, S., Stambuli, J. P., Hartwig, J. F., Angew. Chem., Int. Ed. 2005, 44, 1371-1375. A need exists for a general and efficient process for synthesizing N-arylamine compounds from readily available arylating compounds. The discovery and implementation of such a method would simplify the preparation of commercially significant organic N-arylamines and would enhance the development of novel polymers and pharmacologically active compounds.
Doc Gamble has taken a new position as assistant head coach and quarterbacks coach at UAPB. Gamble, a former star quarterback and head coach at Withrow High School, also coached one season at Fairfield High School, spent two seasons as an assistant at Mount St. Joseph, and was an offensive assistant coach for the UC Bearcats in 2011. He takes the new role as assistant head coach and quarterbacks coach at the University of Arkansas at Pine Bluff after spending the past five seasons as Kent State's wide receivers coach. Gamble announced his new position on his Twitter and Facebook accounts. He has also coached at Alcorn State University and East Carolina University. Gamble coached Withrow from 2003-07 and 2009-10 and was elected to the 2018 Withrow Athletics Hall of Fame.
Jerry Coleman Broadcasting career In 1958, New York Yankees general manager George Weiss named Coleman personnel director, which involved Coleman scouting minor league players. Roy Hamey terminated Coleman from that position, upon becoming the Yankees' general manager. It was only after Coleman met with Howard Cosell that Coleman considered becoming a broadcaster. In 1960, Coleman began a broadcasting career with CBS television, conducting pregame interviews on the network's Game of the Week broadcasts. His broadcasting career nearly ended that year; he was in the midst of an interview with Cookie Lavagetto when the national anthem began playing. Coleman kept the interview going through the anthem, prompting an avalanche of angry letters to CBS. In 1963 he began a seven-year run calling Yankees' games on WCBS radio and WPIX television. Coleman's WPIX call of ex-teammate Mickey Mantle's 500th career home run in 1967 was brief and from the heart: Here's the payoff pitch ... This is it! There it goes! It's outta here! During his time broadcasting with the Yankees he lived in Ridgewood, New Jersey, which he described as being "19.9 miles from Yankee Stadium, but a million miles from New York". After broadcasting for the California Angels for two years, in 1972 Coleman became the lead radio announcer for the San Diego Padres, a position he held every year until his death in 2014 except for 1980, when the Padres hired him to manage (predating a trend of broadcasters-turned-managers that started in the late 1990s). He was known in San Diego for his signature catchphrase, "You can hang a star on that one, baby!", which he would deliver after a spectacular play. During home games, the phrase would be accompanied by a tinsel star swinging from a fishing pole that emanated from his broadcast booth. Coleman's other catchphrases included "Oh Doctor!", "And the beat goes on," and "The natives are getting restless." He also called national regular-season and postseason broadcasts for CBS Radio from the mid-1970s to 1997. During an interview in the height of the steroids scandal in 2005, Coleman stated, "If I'm emperor, the first time 50 games, the second time 100 games and the third strike you're out", referring to how baseball should suspend players for being caught taking steroids. After the 2005 World Series, Major League Baseball put a similar policy in effect. Coleman was known as the "Master of the Malaprop" for making sometimes embarrassing mistakes on the microphone, but he was nonetheless popular. In 2005, he was given the Ford C. Frick Award of the National Baseball Hall of Fame for broadcasting excellence, and is one of five Frick award winners who also played in the Major Leagues (the others are Joe Garagiola, Tony Kubek, Tim McCarver and Bob Uecker). He was inducted into the San Diego Padres Hall of Fame in 2001. In fall 2007, Coleman was inducted to the National Radio Hall of Fame as a sports broadcaster for his years as the play-by-play voice of the San Diego Padres. Ted Leitner and Andy Masur replaced Coleman for most of the radio broadcasting efforts for each Padres game. He did, however, still work middle innings as a color analyst. As of the 2010 season, he reduced his broadcast schedule down to 20–30 home day games. As of November 2010, Coleman was the third-oldest active play-by-play announcer, behind only fellow Hall of Famers Felo Ramirez and Ralph Kiner. 
Coleman collaborated on his autobiography with longtime New York Times writer Richard Goldstein; their book An American Journey: My Life on the Field, In the Air, and On the Air was published in 2008. On September 15, 2012, the Padres unveiled a Jerry Coleman statue at Petco Park. Coleman's statue is the second statue at Petco Park, the other being of Hall of Fame outfielder Tony Gwynn. Death Coleman's death was reported by the San Diego Padres on January 5, 2014. He died after being hospitalized following a fall at his home. He was 89. Coleman was interred at Miramar National Cemetery after a private funeral. Legacy In 2015, a sports facility at Marine Corps Recruit Depot San Diego was named in honor of Coleman.
Velocity Estimation for UAVs by Using High-Speed Vision. In recent years, applications of high-speed visual systems have developed rapidly because of their strong environmental recognition ability. These systems help to improve the maneuverability of unmanned aerial vehicles (UAVs). We therefore propose a high-speed vision unit for UAVs. The unit is lightweight and compact, consisting of a 500 Hz high-speed camera and a graphics processing unit. We also propose an improved UAV velocity estimation algorithm using optical flow and a continuous homography constraint. The high sampling rate of the high-speed vision unit improves the estimation accuracy. The operation of the unit is verified in experiments.
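The paper's own algorithm (optical flow combined with the continuous homography constraint) is not reproduced here; the sketch below shows a simplified variant under stated assumptions: a calibrated, nadir-pointing camera over flat ground at known height h, with gyro-measured rates used to remove the rotational part of the flow, using one common sign convention of the motion-field equations. All function and variable names are illustrative.

```python
import cv2
import numpy as np

def estimate_velocity(prev, curr, K, omega, h, dt):
    """Estimate camera-frame velocity (Vx, Vy, Vz) in m/s from two grayscale
    frames, intrinsics K, gyro rates omega (rad/s), height h (m), interval dt (s)."""
    # 1. Track sparse features between frames (pyramidal Lucas-Kanade).
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    good = status.ravel() == 1
    p0, p1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

    # 2. Convert pixels to normalised camera coordinates and per-second flow.
    Kinv = np.linalg.inv(K)
    def normalise(p):
        ph = np.hstack([p, np.ones((len(p), 1))])
        return (Kinv @ ph.T).T[:, :2]
    x0, x1 = normalise(p0), normalise(p1)
    flow = (x1 - x0) / dt

    # 3. Subtract the rotational component of the motion field.
    wx, wy, wz = omega
    x, y = x0[:, 0], x0[:, 1]
    u_rot = x * y * wx - (1 + x**2) * wy + y * wz
    v_rot = (1 + y**2) * wx - x * y * wy - x * wz
    ft = flow - np.stack([u_rot, v_rot], axis=1)

    # 4. Planar-scene model (depth ~ h): u = (-Vx + x*Vz)/h, v = (-Vy + y*Vz)/h.
    #    Stack one pair of rows per feature and solve least squares for V.
    A = np.zeros((2 * len(x), 3))
    A[0::2, 0] = -1.0 / h
    A[0::2, 2] = x / h
    A[1::2, 1] = -1.0 / h
    A[1::2, 2] = y / h
    b = ft.reshape(-1)
    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V
```

At 500 Hz the inter-frame motion is small, which is exactly what keeps the Lucas-Kanade tracking and the linearised motion-field model accurate.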
1. Field of the Invention

The present invention relates in general to electrolytic solutions and, more particularly, to an electrolytic solution for use as a gel electrolyte in an electrolytic cell.

2. Background Art

Electrolytic solutions have been used in forming cured or thermoset gel electrolytes for use in electrolytic cells for several years. In particular, polyethylene oxide (PEO) has been used in combination with other electrolyte materials such as propylene carbonate to form thermoset gel polymer systems. The use of polyethylene oxide has been particularly useful for, among other things, increasing the viscosity of an electrolyte solution to, in turn, improve coating and flow properties on an associated substrate, such as an electrode, before curing the electrolyte solution to a thermoset gel. Although polyethylene oxide has been used with some degree of success, electrolytic solutions incorporating PEO have had certain drawbacks, such as thermodynamic instability. This instability leads to inadvertent precipitation and crystallization from the solution before intended curing which, in turn, results in non-uniform coatings and subsequent difficulties with controlling coating of the electrolytic solution onto a substrate, as well as inhomogeneities of ionic conductivity. Furthermore, the use of PEO has resulted in the formation of thermoset gel electrolytes which lack desired mechanical properties. In particular, these gel electrolytes exhibit relatively low compressive strengths and a relatively high compressive modulus. Another type of prior art gel polymer system incorporates poly(methyl methacrylate) (PMMA) into a thermoplastic gel electrolyte. For instance, in "Fast Ion Transport in New Lithium Electrolytes Gelled with PMMA", Solid State Ionics 66, pp. 97-104 (1993), O. Bohnke et al. disclosed the use of 30-35 wt. % PMMA to form a thermoplastic electrolytic gel. Moreover, in "Ionic Conductivity and Compatibility Studies of Blends of Poly(methyl Methacrylate) and Poly(propylene Glycol) Complexed with LiCH3SO3", Journal of Polymer Science, Vol. 30, pp. 2025-2031, J. R. Stevens et al. contemplated the polymerization of PMMA to form a thermoplastic gel electrolyte. Finally, in "Conductivity and Viscosity Studies of Lithium Ion Conductive Electrolytes Gelled with Poly(methyl methacrylate)", Advance Materials Research (1994), M. Rezrazi et al. used PMMA to form a gel in a liquid electrolyte system. Although these references disclose the use of PMMA in formation of an electrolytic solution and, in turn, a thermoplastic electrolyte gel, these gel polymer systems either (a) use ranges of PMMA that result in a thermoplastic gel when the PMMA is mixed with propylene carbonate, or (b) use PMMA to form a portion of the thermoplastic gel structure in a gel polymer system. Furthermore, notwithstanding the less than desirable mechanical properties of the PMMA-based electrolytes, the PMMA in such prior art devices is used as the actual gelling agent, which, in turn, results in a thermoplastic gel. Accordingly, it is an object of the present invention to provide an electrolytic solution which incorporates PMMA into the electrolyte composition as a reinforcement polymer to increase the mechanical integrity of the resulting thermoset gel electrolyte. It is another object of the present invention to provide an electrolyte having PMMA remaining in solution after curing of the electrolyte.
It is still further an object of the present invention to provide an electrolytic solution which is thermodynamically stable. It is yet another object of the present invention to provide an electrolytic solution which will facilitate a uniform, homogeneous solution having increased coatability and adhesion onto an associated substrate. These and other objects will become apparent in light of the present Specification, Claims and Drawings.
An Argentine football fan has died of head injuries two days after being pushed from a stand by an angry crowd who thought he was a rival supporter. Emanuel Balbo, 22, had reportedly quarrelled with a man during Saturday's derby between his Belgrano team and the visiting Talleres in the northern city of Cordoba. Witnesses said the man had shouted that Mr Balbo was a disguised Talleres fan. He was then pushed over the edge of a stand, video images showed. He fell onto concrete stairs from a height of about five metres (16ft). He was taken to hospital with severe head injuries, where doctors declared him brain dead. Four people have been arrested over the attack, with the Argentine Football Association calling for "those responsible for this inconceivable assault" to be brought to justice. Campaign groups say more than 40 people have died in football-related violence in Argentina since 2013.
Estimation of the gene frequency of lactate dehydrogenase subunit deficiencies. To determine the frequency of lactate dehydrogenase (LDH) subunit deficiency, screening for LDH subunit deficiency was performed on 3,776 blood samples from healthy individuals in Shizuoka Prefecture by means of electrophoresis. The frequency of heterozygotes with LDH-A subunit deficiency was found to be 0.185%, and with LDH-B subunit deficiency, 0.159%. The frequencies of the two subunit deficiencies were not significantly different. Gene frequencies of the LDH subunit deficiencies were calculated by the simple counting procedure, with the following results: the gene frequency of LDH-A subunit deficiency was 11.9 X 10(-4), and that of LDH-B subunit deficiency, 7.9 X 10(-4). In addition, the second known case in the world of a homozygous individual with LDH-A subunit deficiency was detected by this screening. The characteristics of this case with regard to LDH-A subunit deficiency are summarized herein.
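As an illustration of the simple counting procedure, the reported figures are consistent with roughly seven LDH-A heterozygotes plus the single homozygote, and six LDH-B heterozygotes, among the 3,776 samples; these counts are back-calculated from the percentages above, not stated in the abstract.

```latex
% Gene counting: each heterozygote carries one deficient allele, each homozygote two.
q \;=\; \frac{2\,n_{\mathrm{hom}} + n_{\mathrm{het}}}{2N},
\qquad
q_{A} \approx \frac{2(1) + 7}{2 \times 3776} = \frac{9}{7552} \approx 11.9 \times 10^{-4},
\qquad
q_{B} \approx \frac{0 + 6}{2 \times 3776} = \frac{6}{7552} \approx 7.9 \times 10^{-4}.
```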
Robert Bunch

Robert Bunch KCMG (born September 11, 1820, died March 21, 1881) was a British diplomat who served as a secret agent in the United States South during the American Civil War. Before the outbreak of the Civil War, he had served as a diplomatic representative, first in the North, and then in Charleston, replacing George Buckley Mathew, who was causing diplomatic problems. In particular, Mathew was vocally against South Carolina's recent rule of incarcerating British African sailors while in port.

Life

Robert Bunch was from 1841 to 1845 attaché and private secretary of William Pitt Adams in Bogotá and Lima. In September 1844 he was sent, together with Captain Sir Thomas Thompson, as Joint Commissioner to the Supreme Junta of Government of Peru at Arequipa. On December 16, 1845 he was appointed Consular Agent in Lima. On February 6, 1846 he was appointed acting consul in Callao (Peru). Robert Bunch was vice-consul in New York City from October 25, 1848 to 1853. From 1853 to 1864, Bunch was consul in Charleston, South Carolina. In 1864 he became Consul General in Havana. From 1866 to 1878 he served as Minister Resident and Consul General in Colombia. He then took on diplomatic posts in Venezuela and Peru. Robert Bunch had been married to Charlotte Amelia Craig since 1853.
Ellicott Dredges

History

By 1783, brothers John and Andrew Ellicott were well-established flour exporters and operators of the largest flour mill on the US East Coast – Ellicott Mills. To accommodate their export business, the Ellicott brothers built a warehouse and wharf at Pratt and Light Streets on the Baltimore harbor. To maintain the wharf's water depth, and continue to bring in ships, the Ellicotts commonly utilized horse-drawn scoops to remove sediment from the harbor. In 1785, the Ellicotts designed and built their first dredge, the Mud Machine. Their efforts have been documented as the first dredging effort in the Baltimore harbor. In the same year, Maryland's assembly established a board of port wardens to oversee the management of the Baltimore port and its dredging needs. In 1790, the wardens approved the first use of mud machines throughout the harbor. In 1827, the federal government, recognizing the importance of the Port of Baltimore and its location on the B&O railroad, approved funding for an ongoing Baltimore harbor dredging program. To initiate the program, the Maryland state senate purchased ten Mud Machines from the Ellicott brothers. Shortly thereafter, horses were replaced by steam engines on the Mud Machine, creating the first power-driven dredge in the US. Ellicott Dredges built the cutter dredges used in construction of the Panama Canal. The first machine delivered was a steam-driven, 900 HP, 20-inch dredge. In 1941, Ellicott Dredges also built the dredge MINDI, a 10,000 HP, 28-inch cutter suction dredge still operating in the Panama Canal. To date, Ellicott has sold over 1,500 dredges to 80 different countries. Their cutter dredges can be used for a variety of applications including coastal protection, sand mining, and land reclamation.
Neutropenic Enterocolitis in Critically Ill Patients: Spectrum of the Disease and Risk of Invasive Fungal Disease. Objectives: Neutropenic enterocolitis occurs in about 5.3% of patients hospitalized for hematologic malignancies receiving chemotherapy. Data from critically ill patients with neutropenic enterocolitis are scarce. Our objectives were to describe the population of patients with neutropenic enterocolitis admitted to an ICU and to investigate the risk factors of invasive fungal disease. Design: A multicentric retrospective cohort study between January 2010 and August 2017. Setting: Six French ICUs, members of the Groupe de Recherche Respiratoire en Onco-Hématologie research network. Patients: Adult neutropenic patients hospitalized in the ICU with a diagnosis of enteritis and/or colitis. Patients with differential diagnoses (Clostridium difficile colitis, viral colitis, inflammatory enterocolitis, mesenteric ischemia, radiation-induced gastrointestinal toxicity, and Graft vs Host Disease) were excluded. Interventions: None. Measurements and Main Results: We included 134 patients (median Sequential Organ Failure Assessment score of 10), with 38.8% hospital mortality and 32.1% ICU mortality rates. The main underlying malignancies were acute leukemia (n = 65, 48.5%), lymphoma (n = 49, 36.6%), solid tumor (n = 14, 10.4%), and myeloma (n = 4, 3.0%). Patients were neutropenic during a median of 14 days (9–22 d). Infection was documented in 81 patients (60.4%), including an isolated bacterial infection in 64 patients (47.8%), an isolated fungal infection in nine patients (6.7%), and a coinfection with both pathogens in eight patients (5.0%). Radiologically assessed enteritis (odds ratio, 2.60; 95% CI, 1.32–7.56; p = 0.015) and HIV infection (odds ratio, 2.03; 95% CI, 1.21–3.31; p = 0.016) were independently associated with invasive fungal disease. Conclusions: The rate of invasive fungal disease reaches 20% in patients with neutropenic enterocolitis when enteritis is considered. To avoid treatment delay, antifungal therapy might be systematically discussed in ICU patients admitted for neutropenic enterocolitis with radiologically assessed enteritis.
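Adjusted odds ratios of the kind reported above are typically obtained from a multivariable logistic regression; a minimal sketch with statsmodels follows. The data frame and column names are hypothetical, and this is not the study's actual analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level table: one row per ICU admission.
df = pd.read_csv("neutropenic_enterocolitis_cohort.csv")
predictors = ["radiological_enteritis", "hiv_infection", "acute_leukemia", "neutropenia_days"]
X = sm.add_constant(df[predictors].astype(float))
y = df["invasive_fungal_disease"].astype(int)

model = sm.Logit(y, X).fit(disp=False)

# Odds ratios with 95% confidence intervals and p-values.
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": model.pvalues,
})
print(or_table.drop(index="const").round(3))
```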
The energy efficiency concept and implementation prospects of the jet thermocompression principle in small-scale heat energy. The expediency of implementing the principle of steam thermocompression to improve the energy efficiency of sources of electricity and heat supply in small-scale heat power engineering is substantiated. The results of thermodynamic analysis and numerical optimization of the parameters of the compressor steam-turbine cycle of a small cogeneration power plant are presented. A jet step-down thermotransformer has been tested as an alternative to traditional boiler heating. On the basis of the thermodynamic analysis, a new combined cycle of a step-down thermotransformer has been developed, which ensures efficient conversion of the supplied energy (mainly in the form of fuel heat) into a heat-carrier flow for the heat supply system at the required temperature level (50...90 °C). The fundamental difference between the considered thermotransformer and steam-compression heat pumps is the replacement of the mechanical compressor with a steam thermocompressor module (STC module). The working process in the STC module uses the liquid phase of the refrigerant, subcooled to saturation, which flashes as it flows out of the nozzle, as the active medium of a jet compressor. Injection of vapour from the evaporator is provided by the finely dispersed vapour-droplet structure formed in the outlet section of the active-flow nozzle. A program for the numerical study of the working process of a step-down thermotransformer was prepared and tested, and multivariate calculations were carried out with it. On the basis of these computational studies, the range of achievable indicators of the proposed heat supply system has been established; the range of initial operating parameters corresponding to the maximum values of the conversion coefficient and exergy efficiency was determined; and comparative indicators of the main parameters of the investigated thermotransformer with various working substances were obtained over its range of operating modes as a heat pump or a refrigerating machine. Key words: working process, steam thermocompressor, step-down thermotransformer, energy efficiency, heat pump mode
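The abstract does not define its performance indicators. In the standard usage (an assumption here, not the paper's exact formulation), for a unit delivering heat Q_h at mean temperature T_h with ambient temperature T_0, driving energy input E_in and input exergy E_x,in, the conversion coefficient and exergy efficiency are:

```latex
\varphi \;=\; \frac{Q_h}{E_{\mathrm{in}}},
\qquad
\eta_{\mathrm{ex}} \;=\; \frac{Q_h \left(1 - T_0 / T_h\right)}{E_{x,\mathrm{in}}}.
```

For an electrically driven heat pump, E_x,in is simply the work input; for a fuel-fired unit it is approximately the fuel's chemical exergy, which is close to its lower heating value.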
Q: Can I use Canon lenses on a Nikon dSLR? I have Canon lenses and would like to purchase an adapter to use them on the D3200? Is there one available? If so name and model would be appericated. I understand that I will have to shoot on manual mode. A: You won't readily find a Canon EOS → Nikon F or Canon FD/FL → Nikon F adapter. There are reasons for this. A lens's ability to focus through the entire distance range to infinity relies heavily upon the distance it's held from the image plane. This is known as the register distance or flange focal distance, and it's specific to each mount system. The Nikon F distance is 46.5mm. The Canon EOS distance is 44mm; the Canon FD/FL distance is 42mm. So, it's easy to build a mechanical ring that makes up the 2.5mm between Canon EOS and Nikon so you can mount a Nikon F lens on a Canon EOS dSLR body and use it (albeit without electronic communication; more on this below). But in the reverse direction, you'd have to shave off 2.5mm from either the lens mount or the camera mount, and you're liable to have the lens's rear element collide with the mirror if you manage to do so. If you do find an adapter ring it must act like a macro extension tube by holding the lens farther away than its registration distance. So an optical element is required to act as a short teleconverter to regain focus to infinity. If the glass is cheap, it's liable to degrade image quality. Without glass, the lens will only be able to focus relatively closely, so for macro or close portrait use, it might be ok, but it's relatively rare to find one of these rings. The electronic mount communication between the camera and the lens is proprietary and adapter rings don't translate between the two, so you give up all the features that require camera/lens communication (e.g., camera body control of the lens aperture [i.e., you can only shoot in M or A], wide-open metering, lens EXIF information, autofocus). The Nikon D3x00/D40 and D5x00/D60 models cannot perform stop-down metering. So even if you're willing to give up focus to infinity and try and use an adapter ring, you'll lose accurate metering. Canon EOS lenses have no aperture rings. Without electronic communication and with no manual way to adjust the aperture, you either have to shoot wide open all the time, or you have to mount the EOS lens on a Canon body, set the aperture, hold down the DOF button and unmount it and then put it on your Nikon body. And go through this little ritual every time you want to change the aperture. FD/FL lenses, however, do have aperture rings. The only practical way I've seen to adapt other (mostly manual-focus) mount SLR lenses to Nikon F for those of us without machine shops and experience tinkering with lenses and camera bodies, are the Leitax lens mount replacement kits, and the only mounts they provide Nikon kits for are Leica R, Contax/Yashica (think: Zeiss), and Olympus OM lenses. In short, you'd probably be better off selling your Canon lenses and purchasing Nikon lenses for your D3200. But if you have to use these lenses, then get a Canon EOS dSLR body to match your glass if it's EOS, or get a mirrorless camera body and an adapter if they're Canon FD/FL. A: Since the registration distance for the Nikon F-mount is larger than the registration distance for the Canon EF-mount, there is not a simple adapter that will work well, still allow infinity focus, and not change the focal length/maximum f-number through the use of additional optics. 
The registration distance is the spacing between the camera's film plane or sensor and the lens mounting flange. It is often referred to as the 'flange focal distance.' The Nikon F-mount has a registration distance of 46.5mm. The Canon EF mount has a registration distance of 44mm. It's fairly easy to adapt lenses designed with a longer registration distance to a camera with a shorter one. We just need to add a spacer so that the adapted lens sits at its designed distance in front of the camera's mounting flange. The opposite, though, would require an adapter with negative thickness that would allow the adapted lens to protrude into the throat of the camera. This is physically impossible in the case of Canon EF -mount lenses and Nikon F-mount cameras. The Canon EF-mount has a larger throat diameter than the Nikon F-mount. There's no room to allow an adapter to attach to the front of the Nikon's 44mm diameter mounting flange spaced 46.5mm in front of the sensor and then protrude 2.5mm into the camera to place a larger 54mm diameter mounting flange for the EF lens only 44mm in front of the sensor - the smaller Nikon flange ring is in the way. There would also likely be clearance issues with the reflex mirror for Nikon FF cameras. The only way to adapt an EF lens to an F-mount body so that it can focus further than just a few inches to a few feet in front of the lens (depending on the particular lens) would be to use an adapter with a mild magnification lens. This is essentially a teleconverter, which comes with optical image quality penalties as well as an increased f-number at maximum aperture. Making one that is fairly affordable would result in a significant loss of image quality. To the best of my knowledge, no such adapter is currently available commercially.
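A small illustration of the register-distance arithmetic used in the answers above; the distances are the ones quoted in the answers, and the function name is just for this sketch.

```python
# Register (flange focal) distances in mm, as quoted in the answers above.
REGISTER_MM = {"Nikon F": 46.5, "Canon EF": 44.0, "Canon FD/FL": 42.0}

def adapter_thickness(lens_mount: str, body_mount: str) -> float:
    """Thickness a simple mechanical adapter would need so the lens sits at its
    designed register distance. A negative value means the lens would have to
    protrude into the camera throat, i.e. a plain adapter is impossible and
    infinity focus is lost without corrective optics."""
    return REGISTER_MM[lens_mount] - REGISTER_MM[body_mount]

if __name__ == "__main__":
    for lens, body in [("Nikon F", "Canon EF"), ("Canon EF", "Nikon F")]:
        t = adapter_thickness(lens, body)
        verdict = "plain adapter feasible" if t > 0 else "needs corrective optics"
        print(f"{lens} lens on {body} body: {t:+.1f} mm -> {verdict}")
```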
Stalinization and its Limits in the Saxon KPD, 1925–28. Using newly available documentation, this article re-examines the debate on the political development of the German Communist Party (KPD) during the mid-1920s. Initially, the history of the KPD was written as the history of the party's subordination to Moscow. However, with the rise of social history, historians shifted the focus of their research to the view from below. Moscow's omnipresence in the history of German communism was replaced by the KPD's ability to formulate policy in response to specifically German socio-economic and political developments. Taking a position between the poles of this polarized debate, Saxony is used as a case study to demonstrate that although the party per se was Stalinized, it proved impossible to uproot the local membership from their immediate local environment. Factionalism represented the organized response of local activists against the intrusions of a remote leadership, whose promotion of Moscow's line clashed with their own everyday experience.
Anti-inflammatory activity of natural triterpenes: an overview from 2006 to 2021. Terpenes are one of the most abundant classes of secondary metabolites produced by plants and can be divided, based on the number of isoprene units (C5), into monoterpenes (2 units, C10), sesquiterpenes (3 units, C15), diterpenes (4 units, C20), triterpenes (6 units, C30), etc. Chemically, triterpenes are classified based on their structural skeleton, including lanostanes, euphanes, cycloartanes, ursanes, oleananes, lupanes, tirucallanes, cucurbitanes, dammaranes, baccharanes, friedelanes, hopanes, serratanes, etc. Additionally, glycosylated (saponins) or highly oxidated/degraded (limonoids) triterpenes can be found in nature. The anti-inflammatory effect and action as immunomodulators of these secondary metabolites have been demonstrated in different studies. This review reports an overview of articles published in the last 15 years (from 2006 to 2021, using the PubMed and SciFinder databases) describing the anti-inflammatory effects of different triterpenes with their presumed mechanisms of action, suggesting that triterpenes could be regarded as natural products with future pharmaceutical applicability.
Verbatim recall in formal thought disorder in schizophrenia: a study of contextual influences. Introduction: We have previously reported that people with schizophrenia and formal thought disorder (FTD) were disproportionately impaired in recalling sentences verbatim and in judging their plausibility. We proposed that these deficits were due to impairment in integrating higher-order semantic information to construct a global whole. However, it is also possible that a lower-level linguistic problem affecting lexical activation could account for this pattern. Methods: The present study analysed and compared the sentence repetition errors produced by people with FTD, people with schizophrenia without FTD, and healthy controls. Errors due to failure of activation of the target lexical items were differentiated from those due to erroneous integration of information. Results: People with FTD produced significantly more unrelated lexical substitutions and omissions in their corpora than the other two groups, indicating an impairment of activation. In addition, they made significantly more erroneous contextual inferences and unrelated references, suggesting they were impaired in reconstructing the global whole from successfully activated items. Conclusion: These findings are consistent with a dual-process account of impairments in FTD. Difficulties in repeating and judging sentence acceptability arise from a combination of difficulty with activation and deficits in using linguistic context to process and produce speech. It is suggested that processing difficulties in FTD result from an impairment in using semantic context to drive lexical access and construction of a global whole.
The Dangers of the Digital Millennium Copyright Act: Much Ado about Nothing? In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), a landmark piece of legislation aimed at protecting copyright holders from those who might manufacture or traffic technology capable of allowing users to evade piracy protections on the underlying work. At its core, the DMCA flatly prohibits the circumvention of technological protection measures in order to gain access to copyrighted works, with no safety valve for any traditionally protected uses. While hailed as a victory by the software and entertainment industries, the academic and scientific communities have been far less enthusiastic. The DMCA's goal of combating piracy is a noble one, but lurking is the danger that it comes at the expense of public access to protected works and future innovation. Despite America's long history of fair use protections in copyright law, many commentators have warned that consumers now find themselves unable to do many of the same things with copyrighted works that they previously could - anyone who might sell them the technology to access a protected work and enable fair use would find themselves in violation of the DMCA. Worse, early litigation dramatically expanded the definition of what constitutes a technological protection measure deserving of the law's respect. As the definition broadened, scholars feared that even modest innovations - ones that would never qualify for patent protection under existing law - could wind up receiving perpetual patent-like protection through the backdoor of the DMCA. Despite the experts' dire predictions, however, subsequent common law interpretation of the DMCA has reined in many of its potential dangers - the judiciary's focus is rightly on the need to balance innovators' interests with the equally important goals of public access and enhancing overall social welfare. Nonetheless, coherent and uniform protection of fair use under the DMCA is likely best achieved through Congressional action.
Jerome W. Conn Biography Conn was born in New York City and studied for three years at Rutgers University before he entered the University of Michigan Medical School at Ann Arbor in 1928. The Great Depression of 1929 made it hard for his family to support his education, but his sisters managed to pay for it with their salaries. He graduated with honors in 1932 and started an internship in surgery before switching to internal medicine. Conn worked at the Division of Clinical Investigation where he worked under Louis H. Newburgh on the relationship between obesity and non-insulin dependent diabetes mellitus. Conn proved that normal carbohydrate tolerance could be reached in twenty of twenty-one subjects when they reached normal weight. He became fellow in 1935 and assistant professor in 1938. From 1943 Conn took on the Division of Endocrinology and started an investigation concerning acclimatization of military personnel to warm climates like in the South Pacific. He discovered that the excretion of sodium in sweat, urine and saliva was curtailed in these circumstances. At the Presidential address to the Society of Clinical Research Conn presented a thirty-four-year-old patient complaining of episodic weakness of the lower legs, almost to paralysis, with periodic muscle spasms and cramps in her hands for a total period of seven years. After extensive research he had found a condition he called primary hyperaldosteronism, later called Conn syndrome. There were elevated levels of aldosterone in her circulation, coming from a hormone producing adrenal adenoma. Conn wrote a total of 284 articles and book chapters and was recognized as a tutor stimulating others in research. His clinic was leading for years after in research on hyperaldosteronism. Conn was honored by being named L. H. Newburgh Distinguished University Professor in 1968. There were many other honors during his career; he was member of twelve national professional societies. His retirement was in 1974. He died in Naples, Florida.
By JOSEPH MURAYA, NAIROBI Kenya, July 8 – At least 12 million Kenyans take alcohol every day, according to the National Authority for the Campaign Against Alcohol and Drug Abuse (NACADA) Chairman John Mututho. Of these, Mututho told the National Assembly Committee on Administration and National Security, about two million are addicts who need rehabilitation services. He further said that the country consumes over 70 percent of alcohol produced in the East Africa region. “We have currently about 15 million Kenyans who are said to be drinking regularly and about 12 million Kenyans who drink daily,” he stated. He said despite this enormous challenge, NACADA has not established a single treatment and rehabilitation centre across the country but noted they have been supporting the Coast General Hospital. “I think it’s more honourable to admit a failure, we do not have any rehabilitation centre which is run by NACADA,” he stated. The NACADA boss was testifying before the committee on National Security on the close to 100 deaths caused by illicit liquor in May. In his response he said the authority had recommended the reduction of the duty on ethanol to discourage the use of lethal methanol in the manufacture of alcohol. In a report presented before the committee, not a single person has since been arrested other than a closure of a manufacturing company in Kisumu. Defending the authority, Mututho recommended that NACADA should have an inspectorate department to enforce their regulations. “We need to look at possible amendments so that we have a body which has muscles but as per now, the anti-narcotic people only sit on our board meetings, we do not know what they do in their offices,” he lamented. Among the measures which NACADA took after the deaths of Kenyans is to console and offer counselling services. NACADA Chief Executive Officer William Okedi however found himself on the spot after he said the counselling was done through television programmes and newspaper articles. Muhia also wondered why only a company in Kisumu was closed when majority of deaths occurred in Mount Kenya region and some parts of Eastern region.
Christian Lopes Career In 2006, Lopes was named the Under-13 National Baseball Player of the Year by Baseball America. He played for the United States national baseball team in the 2010 World Junior Baseball Championship. Lopes attended Valencia High School in Santa Clarita, California, for two years. As a sophomore, Lopes batted .453 with 15 home runs and 33 runs batted in. He was named the Santa Clarita Valley Player of the Year and the Foothill League's most valuable player. His family moved and he transferred to Edison High School in Huntington Beach, California. He committed to attend the University of Southern California. The Toronto Blue Jays selected Lopes in the seventh round of the 2011 MLB draft. He split the 2012 season between the Bluefield Blue Jays and the Vancouver Canadians, hitting a combined .278/.339/.462/.801 with 4 home runs and 33 RBI. He played for the Lansing Lugnuts in 2013, hitting .245/.308/.336/.644 with 5 home runs and 66 RBI. He spent the 2014 season with the Dunedin Blue Jays, hitting .243/.329/.350/.679 with 3 home runs and 33 RBI. He split the 2015 season between Dunedin and the New Hampshire Fisher Cats, hitting a combined .260/.339/.325/.664 with 2 home runs and 38 RBI. In 2016, he again split the season between Dunedin and New Hampshire, hitting .283/.353/.402/.755 with 6 home runs and 56 RBI. In 2017, he split the season between the GCL Blue Jays, Dunedin, and the Buffalo Bisons of the Class AAA International League, hitting a combined .269/.357/.421/.778 with 7 home runs and 46 RBI. After the 2017 season, Lopes signed a minor league contract with the Rangers. He played for the Round Rock Express of the Class AAA Pacific Coast League in 2018, hitting .261/.365/.408/.773 with 12 home runs and 52 RBI. He split the 2019 season between the Frisco RoughRiders and the Nashville Sounds, hitting a combined .265/.356/.422/.778 with 13 home runs and 65 RBI. Personal life His brother, Tim, is also a baseball player.
"you kind of frantically go from one thing to the next and there isn't any time for thinking any more": a reflection on the impact of organisational change on relatedness in multidisciplinary teams. Abstract: This paper reflects on the relational impact on Clinical Psychologists of NHS organisational change in the context of cuts and reorganisation. The reflections illustrate one theme drawn from a study of eight Clinical Psychologists working within adult Community Mental Health Multi-Disciplinary Teams. The paper considers the impact of competition and change in healthcare on the ability to engage in reflective practice, potentially affecting client care due to reduced joint-working, consistency and creativity. The paper considers how acts of kindness (compassion) within organisational contexts at all levels can facilitate relatedness, reflection and more human care. It concludes by considering how shifting from short-term planning evaluating efficiencies based on perceived financial value, to thinking more widely and long-term about relational value, may be of benefit to clinicians, clients and the system as a whole.
The powering down of Fermilab's Tevatron particle accelerator on Friday marked the end of a quarter-century of U.S. dominance in high-energy particle physics. The Tevatron, which accelerates and collides protons and antiprotons in a four-mile-long (6.28 km) underground ring, has been replaced by the Large Hadron Collider under the French-Swiss border, which began operating in March 2010. Physicists at the U.S. lab will now turn to smaller, more focused projects -- such as building the most intense proton beam -- as they pass the high-energy physics baton to the European Organization for Nuclear Research's (CERN) bigger, better atom smasher. 'Nothing lasts forever at the edge of science,' said Pier Oddone, director of the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois. Oddone said Europe has outspent the United States by a factor of three, and the United States now has to be very clever and define very carefully how it uses its resources. 'I think we can maintain a leadership position in the world. We are going to not be where we were 30 years ago where we led in every domain of particle physics, but we are going to lead in a narrower domain,' he said in a telephone interview. The highest-profile project on that front is an effort to confirm the startling discovery last week at CERN of particles that move faster than the speed of light. It now falls to scientists at Fermilab to confirm or disprove that as part of its MINOS experiment (Main Injector Neutrino Oscillation Search). That will use Fermilab's Main Injector to hurl an intense beam of neutrinos 455 miles (732 km) through the Earth to the Soudan Mine in northern Minnesota. If it can be verified, it will turn modern physics on its head. In its near 26-year run, the Tevatron has taught many lessons about how to build and manage an accelerator of its size and complexity, and these have played a major role in the construction of the 16.7-mile (27 km) LHC ring at CERN. 'We built this machine to discover how the world is put together,' Oddone said. Tevatron's shining achievement was the discovery in 1995 of the top quark, the heaviest elementary particle known to exist. Though it is as heavy as an atom of gold, the top quark's mass is crammed into an area far smaller than a single proton. 'Many machines were built around the world with the mission of discovering the top quark. It was only at the highest energy here that we found it,' Oddone said. The building of the Tevatron made contributions to the U.S. economy by bolstering the fledgling industry for superconducting cable to meet the Tevatron's need for 150,000 pounds (68,000 kg) of superconducting wire. And while scientists largely believe the machine has outlived its useful life, lack of funding was the final blow for the Tevatron after the U.S. Department of Energy decided not to spend the $35 million needed to extend the Tevatron's operation through 2014. As a result, many top U.S. physicists will continue research at a remote operation center that Fermilab has set up for scientists to monitor experiments at CERN. Others will relocate to Europe. 'We are whores to the machines. We will go to wherever the machines are to do our science,' said Rob Roser, co-spokesman for CDF, one of the two detectors that used the Tevatron. 
While the Tevatron will no longer be running experiments, Fermilab scientists have not given up hope of making a major contribution to finding the Higgs boson - thought to be the agent which turned mass into solid matter soon after the Big Bang that created the universe 13.7 billion years ago. Fermilab engineers and physicists have been furiously smashing particles together in the past several months, recreating the primal chaos of flying matter a tiny fraction of a second after the Big Bang. The hope is that they will have accumulated enough data before the Tevatron shutdown to establish if the elusive Higgs boson exists in its long-predicted form. If the particle does exist, scientists say it is running out of places to hide. 'We have cornered the Higgs into this particular space. By the time we analyse all the data, if it is not there, we will be able to say it is not there,' Oddone said. If the answer is no, scientists around the globe will have to rethink the 40-year-old Standard Model of particle physics which describes how they believe the cosmos works. But if it is found, it will be up to the larger collider at CERN to confirm it.
Total disc replacement surgery for symptomatic degenerative lumbar disc disease: a systematic review of the literature The objective of this study is to evaluate the effectiveness and safety of total disc replacement surgery compared with spinal fusion in patients with symptomatic lumbar disc degeneration. Low back pain (LBP), a major health problem in Western countries, can be caused by a variety of pathologies, one of which is degenerative disc disease (DDD). When conservative treatment fails, surgery might be considered. For a long time, lumbar fusion has been the gold standard of surgical treatment for DDD. Total disc replacement (TDR) has increased in popularity as an alternative to lumbar fusion. A comprehensive systematic literature search was performed up to October 2008. Two reviewers independently checked all retrieved titles and abstracts, and relevant full text articles, for inclusion. Two reviewers independently assessed the risk of bias of included studies and extracted relevant data and outcomes. Three randomized controlled trials and 16 prospective cohort studies were identified. In all three trials, total disc replacement was compared with lumbar fusion techniques. The Charité trial (designed as a non-inferiority trial) was considered to have a low risk of bias for the 2-year follow up, but a high risk of bias for the 5-year follow up. The Charité artificial disc was non-inferior to the BAK® Interbody Fusion System on a composite outcome of clinical success (57.1 vs. 46.5% for the 2-year follow up; 57.8 vs. 51.2% for the 5-year follow up). There were no statistically significant differences in mean pain and physical function scores. The ProDisc artificial disc (also studied in a trial designed as a non-inferiority trial) was found to be statistically significantly more effective than lumbar circumferential fusion on the composite outcome of clinical success (53.4 vs. 40.8%), but the risk of bias of this study was high. Moreover, there were no statistically significant differences in mean pain and physical function scores. The Flexicore trial, with a high risk of bias, found no clinically relevant differences in pain and physical function when compared with circumferential spinal fusion at 2-year follow up. Because these are preliminary results, in addition to the high risk of bias, no conclusions can be drawn based on this study. In general, these results suggest no clinically relevant differences between total disc replacement and fusion techniques. The overall success rates in both treatment groups were low. Complications related to the surgical approach ranged from 2.1 to 18.7%, prosthesis-related complications from 2.0 to 39.3%, treatment-related complications from 1.9 to 62.0% and general complications from 1.0 to 14.0%. Reoperation at the index level was reported in 1.0 to 28.6% of the patients. In the three published trials, overall complication rates ranged from 7.3 to 29.1% in the TDR group and from 6.3 to 50.2% in the fusion group. The overall reoperation rate at the index level ranged from 3.7 to 11.4% in the TDR group and from 5.4 to 26.1% in the fusion group. In conclusion, there is low quality evidence that the Charité is non-inferior to the BAK cage at the 2-year follow up on the primary outcome measures. For the 5-year follow up, the same conclusion is supported only by very low quality evidence.
For the ProDisc, there is very low quality evidence for contradictory results on the primary outcome measures when compared with anterior lumbar circumferential fusion. High quality randomized controlled trials with a relevant control group and long-term follow-up are needed to evaluate the effectiveness and safety of TDR. Introduction Low back pain is a major health problem in Western countries. A variety of pathologies can cause low back pain, one of which is degenerative disc disease (DDD). It has been hypothesised that through disc dehydration, annular tears, and loss of disc height or collapse, DDD can result in abnormal motion of the segment and biomechanical instability causing pain. When conservative treatment fails, patients and health care providers may consider other treatment options such as surgery.
Although the rationale for surgery is often not clear, and despite the lack of convincing evidence in the literature regarding the effectiveness of surgery in the treatment of symptomatic DDD, the number of surgical procedures performed is continually increasing. For a long time, lumbar fusion (arthrodesis) has been the "gold standard" surgical treatment for DDD. However, long-term results are poor and complications are common. An alternative surgical procedure, total disc replacement, has increased in popularity. The purpose of this technique is to restore and maintain motion of the spinal segment at the operated level(s), which is assumed to prevent adjacent level degeneration, while relieving pain [4]. Replacing a degenerated joint instead of fusing it was considered for the spine because of the success of total knee and hip arthroplasty. The first described total disc replacement was the Fernström steel-ball endoprosthesis in the late 1950s. Since that time, multiple disc replacement prostheses have been designed for use in the lumbar spine. The large majority were never implanted in humans. The first prosthesis designed to be commercially distributed as an artificial disc was initiated in 1982 by Schellnack and Büttner-Janz. Currently, many different lumbar total disc prostheses are available and approved for the European market. In the United States, Investigational Device Exemption (IDE) trials have led to FDA approval of the Charité and ProDisc prostheses. In this article, we systematically review the available literature on the effectiveness and safety of currently available prostheses for TDR in patients with symptomatic DDD. Objective The objective of this systematic review was to assess the effectiveness and safety of total disc replacement surgery in patients with chronic low back pain due to DDD. The main research questions were: 1. What is the course of DDD complaints and/or symptoms following total disc replacement surgery? 2. What is the effectiveness of total disc replacement surgery compared to other treatments? 3. What is the safety of total disc replacement surgery? For this systematic review, we used the method guidelines for systematic reviews as recommended by the Cochrane Back Review Group. Below, the search strategy, selection of studies, data extraction, risk of bias assessment, and data analysis are described in more detail. All these steps were performed independently by two reviewers (KvdE and RO), and potential disagreements between the two reviewers were discussed during consensus meetings. If disagreements were not resolved, a third reviewer (MvT) was consulted. Search strategy An experienced librarian performed a comprehensive systematic literature search. The MEDLINE, EMBASE and COCHRANE LIBRARY databases were searched for relevant studies from 1973 to October 2008. The search strategy consisted of a combination of keywords concerning the technical procedure (e.g. disc replacement, prosthesis, implantation, discectomy, arthroplasty) and keywords regarding the anatomical features and pathology (e.g. intervertebral disc degeneration, discitis, low back pain, lumbosacral region, lumbar vertebrae). These keywords were used as MeSH headings and free text words. In addition, a search was performed using the specific names of the prostheses. The full search strategy is available upon request.
Selection of studies The search was limited to studies published in English, German, and Dutch, because these are the languages that the review authors are able to read and understand. Two review authors independently examined all titles and abstracts that met our search terms and reviewed full publications when necessary. The reference section of all primary studies was inspected for additional references. For the assessment of the course of complaints and/or symptoms (research question 1), we included prospective cohort studies reporting on at least 20 cases and having a follow up period of more than 6 weeks. By definition, cohort studies do not provide information about effectiveness, so for the assessment of effectiveness (research question 2), we only included randomized controlled trials. When multiple articles were identified on the same study but describing different follow up measurements, they were included. However, articles describing only one arm of the trial, or only describing the results of one centre of a multicentre trial, were excluded. For assessing the safety (research question 3), we extracted data on all reported complications from the prospective cohort studies and randomized controlled trials we included for research questions 1 and 2. Furthermore, we included overview studies on complications. Case reports were excluded. Data extraction Two review authors independently extracted relevant data from the included studies regarding design, population (e.g. age, gender, duration of complaints), type of total disc replacement surgery, type of control intervention (e.g. no treatment, lumbar fusion), vertebral level(s) operated on, follow-up period, and outcomes. Primary outcomes that were considered relevant were pain intensity (VAS) and functional status (ODI). All ODI and VAS scores were converted to a 0-100 scale. Other outcome measures, such as physiological outcomes, radiological outcomes, and patient satisfaction, were considered secondary outcome measures. Risk of bias (RoB) assessment Two review authors independently assessed the risk of bias (RoB) of the included studies. Controlled trials were assessed using a criteria list recommended by the Cochrane Back Review Group. The following criteria were scored yes, no or unsure: Was the method of randomization adequate? Was the treatment allocation concealed? Was the patient blinded to the intervention? Was the care provider blinded to the intervention? Was the outcome assessor blinded to the intervention? Was the dropout rate described and acceptable? Were all randomized participants analysed in the group to which they were allocated? Are reports of the study free of suggestion of selective outcome reporting? Were the groups similar at baseline regarding the most important prognostic indicators? Were co-interventions avoided or similar? Was compliance acceptable in all groups? Was the timing of the outcome assessment similar in all groups? Criterion 11 was scored as not applicable because we consider compliance not relevant for surgical interventions. If studies met at least 6 of the 12 items, the RoB was considered low. Disagreements were resolved in a consensus meeting and a third review author was consulted when necessary. The full assessment is available upon request. The overall grading of the evidence was based on the GRADE approach. Search and selection A total of 1,962 references were identified from MEDLINE, EMBASE and the COCHRANE LIBRARY that were potentially relevant for this review on total disc replacement surgery.
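As an aside on the risk of bias rule just described (the search and selection results continue below): the following minimal Python sketch simply tallies the Cochrane Back Review Group items and applies the stated threshold of at least 6 of the 12 items for a low risk of bias. It is illustrative only, not part of the review's actual workflow, and the shorthand item names are invented here for the example.

```python
# Illustrative only: the scoring rule described in the methods above.
# Item names are shorthand introduced for this sketch, not official labels.
CBRG_ITEMS = [
    "adequate_randomization", "allocation_concealed", "patient_blinded",
    "care_provider_blinded", "outcome_assessor_blinded",
    "dropout_described_and_acceptable", "analysed_as_allocated",
    "free_of_selective_reporting", "similar_at_baseline",
    "cointerventions_avoided_or_similar",
    "compliance_acceptable",  # scored "not applicable" in this review
    "timing_of_assessment_similar",
]

def risk_of_bias(scores: dict) -> str:
    """scores maps each item to 'yes', 'no', 'unsure' or 'n/a';
    a study meeting at least 6 of the 12 items is rated low risk of bias."""
    met = sum(1 for item in CBRG_ITEMS if scores.get(item) == "yes")
    return "low" if met >= 6 else "high"

# Example: a trial reported to meet only 4 items would be rated "high".
example = {item: "no" for item in CBRG_ITEMS}
for item in CBRG_ITEMS[:4]:
    example[item] = "yes"
print(risk_of_bias(example))  # -> high
```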
After checking titles and abstracts, a total of 112 full text articles were retrieved that were potentially eligible for answering all research questions. After reading the full text, 21 articles reporting on 16 studies were relevant for answering research question 1, and 16 articles reporting on 3 studies were relevant for research question 2. Seven overview articles were relevant for answering research question 3. Figure 1 shows the search strategy process in a flow diagram. Reviewing the reference lists of these articles resulted in no additional studies. Assessment of risk of bias After assessing the risk of bias of the controlled trials, there was 20% disagreement between the two review authors. Consultation of the third reviewer was not necessary because disagreements were resolved in a consensus meeting. For the 24 months follow up, the reporting on the Charité trial was considered to have a low risk of bias. However, the reporting on the 5-year follow up was considered to have a high risk of bias. The reporting on the ProDisc trial was considered to have a high risk of bias as it only met 4 out of the 11 risk of bias criteria. The Flexicore study was also considered to have a high risk of bias as it only met 2 out of 11 risk of bias criteria (Table 2). By design, the prospective cohort studies were not included in the effectiveness analysis, but only used to describe the course of DDD complaints and/or symptoms after undergoing TDR surgery. Outcomes What is the course of DDD complaints and/or symptoms following total disc replacement surgery? Charité (Table 3) The Charité prosthesis was the first total disc prosthesis, developed by Büttner-Janz and Schellnack at the Charité clinic in former East Germany. The Charité III became commercially available for the first time in the late 1980s. Lemaire et al. reported two articles on the same population, with 51 months of follow up in 1997 and with 11.3 years of follow up in 2005. These articles report a good or excellent clinical result in 85 and 90% of patients, respectively. Several other prospective cohort studies also report positive results on VAS improvement (range 16-66 points), ODI improvement (range 14-51%) and patient satisfaction (range 69-92%). ProDisc (Table 3) ProDisc II, the second generation ProDisc, is reported on in several publications with follow up ranging from 3 months to 2 years [27, 28]. Primary outcome results appear positive, with VAS improvement (range 40-62 points) and ODI improvement (range 21-48%). Moreover, the majority of patients seem to be satisfied with the results (range 79-100%). Acroflex (Table 3) Fraser et al. conducted two pilot studies and combined the results. The endplates were changed for the second study because of device failure. For the whole group, the functional impairment (ODI) improved by 14.9% and the Low Back Pain Score (LBPS) improved from 17.7 to 33 at 24 months of follow up. Fifty percent of the patients were not working because of their back condition. Because mechanical failure was detected, the planned randomized controlled trial was not carried out.
In conclusion, many studies suggest pain relief, improvement in functional status and patient satisfaction after TDR surgery. The overall outcome is positive, with reduction of pain intensity (range 16-66 points) and improvement of functional impairment (range 14-51%). Moreover, the majority of patients seem to be satisfied with the results (range 69-100%). Unfortunately, detailed information on how outcomes were measured was often lacking. Although outcome results from observational studies suggest a positive course after TDR surgery, a drawback is that a significant number of complications was reported as well (which will be described later), and a control group was lacking in these studies. What is the effectiveness of total disc replacement surgery compared to other treatments? Charité trial (Table 1) The Charité trial, which was designed as a non-inferiority trial, randomized 304 patients to either TDR with the Charité III disc (n = 205) or anterior interbody fusion with the BAK cage (n = 99), with a follow up of 2 and 5 years. The primary outcomes were pain (VAS), functional impairment (ODI) and overall clinical success (defined using four criteria: at least 25% improvement in ODI, no device failure, no major complications, and no neurological deterioration). As a secondary outcome, patient satisfaction was measured. The improvements in pain intensity (-40.6 vs. -34.1) and functional impairment (24.3 vs. 21.6%), for the TDR and the BAK group, respectively, did not differ significantly at 2-year follow up. The overall clinical success (indeed statistically tested for non-inferiority) revealed that the Charité group was non-inferior to the lumbar fusion group (57.1 vs. 46.5%; P < 0.0001; P value based on Blackwelder's test for equivalence). Patient satisfaction was significantly better in the Charité group (73.7%) compared to the control group (53.1%) (P < 0.002). The 5-year results, based on only 57% of the randomized patients and with a high risk of bias, were broadly in line with the 2-year results. At 5-year follow up, outcomes on the composite score of clinical success showed that the Charité was non-inferior to the lumbar fusion group (57.8 vs. 51.2%; P < 0.04; P value based on Blackwelder's test for equivalence). There were no statistically significant differences in functional impairment and pain intensity. In conclusion, there is low quality evidence (based on one study only with a low risk of bias) that there are no clinically relevant differences on the primary outcome measures between the Charité and the BAK cage at the 2-year follow up. ProDisc trial (Table 1) The ProDisc trial, which had a high risk of bias, randomized 236 patients to either TDR with the ProDisc device (n = 161) or anterior lumbar circumferential fusion (using femoral ring allograft and posterolateral fusion with autogenous iliac crest bone graft in combination with pedicle screws) (n = 75). Outcomes were reported with 2-year follow up. Clinical success was defined using a combination of 10 outcomes as required by the FDA (Oswestry improvement of at least 15 points, SF-36 improvement, device success, neurologic success and six radiographic outcomes: no migration, no subsidence, no radiolucency, no loss of disc height, fusion status and ROM).
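As an aside on the statistics cited for the Charité trial above, the following Python sketch illustrates the general shape of a Blackwelder-style non-inferiority comparison of two success proportions. It is not the trial's actual analysis: the non-inferiority margin (delta) and the simple normal approximation are assumptions made here purely for illustration. The ProDisc results continue below.

```python
# Illustrative sketch of a non-inferiority (Blackwelder-style) test for two
# proportions. NOT the trial's actual statistical analysis; the margin and
# the normal approximation are assumptions made for this example only.
from math import sqrt
from statistics import NormalDist

def noninferiority_test(successes_tdr, n_tdr, successes_fusion, n_fusion, delta=0.10):
    """One-sided test of H0: p_tdr <= p_fusion - delta
    against H1: p_tdr > p_fusion - delta (i.e. TDR is non-inferior).
    Returns the z statistic and the one-sided p-value."""
    p_t = successes_tdr / n_tdr
    p_f = successes_fusion / n_fusion
    se = sqrt(p_t * (1 - p_t) / n_tdr + p_f * (1 - p_f) / n_fusion)
    z = (p_t - p_f + delta) / se
    return z, 1 - NormalDist().cdf(z)

# Rough example using the reported 2-year composite clinical success rates
# (57.1% of 205 TDR patients vs. 46.5% of 99 fusion patients).
z, p = noninferiority_test(round(0.571 * 205), 205, round(0.465 * 99), 99)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```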
Clinical success was statistically significantly better in the ProDisc group (54.3%) than in the fusion group (40.8%) (P < 0.05). Although this trial was designed as a non-inferiority study, it is unclear what statistical testing was applied. However, there were no significant differences between the two groups in mean functional impairment (-28.9 vs. -22.9%) and pain intensity scores (-39 vs. -32). In conclusion, there is very low quality evidence (based on one study only with a high risk of bias and inconsistent findings) for contradictory results on the primary outcome measures at the 2-year follow up for the ProDisc when compared with anterior lumbar circumferential fusion. Flexicore trial (Table 1) The Flexicore trial, with a high risk of bias, reported the initial results of 76 patients from two clinics involved in a randomized multicentre controlled trial comparing the Flexicore device (n = 44) versus anterior lumbar circumferential fusion (n = 23) with 2-year follow up. These 76 patients are only a small proportion of all randomized patients (n = 401) included in the complete trial. Overall, the dropout rate was high: 33 patients (75%) in the intervention group and 16 patients (70%) in the control group after two years. Improvement in pain intensity (VAS -70 vs. -62) and functional impairment (Oswestry -56 vs. -46%) was slightly better in the Flexicore group than in the fusion group, but the authors did not report whether this difference was statistically significant or not. Because these are preliminary results, in addition to the high risk of bias, we refrain from drawing conclusions based on this study. In general, these results suggest no clinically relevant differences between TDR surgery and fusion techniques, and low overall success rates in both groups (approximately 50%). What is the safety of total disc replacement surgery? Although some studies reported no major complications, other cohort studies describe a wide range (1.0-91.0%) of complication rates following TDR. The majority of these studies reported complication rates ranging from 10 to 40%. Complications can be separated into those related to the surgical approach (e.g. vascular injury, nerve root damage, retrograde ejaculation), ranging from 2.1 to 18.7%; those related to the prosthesis (e.g. subsidence, migration, implant displacement, implant failure, end plate fracture), ranging from 2.0 to 39.3%; and those related to the treatment (e.g. wound, pain, neuromusculoskeletal), ranging from 1.9 to 62.0%. General surgery-related complications ranged from 1.0 to 14.0%. Reoperation at the index level was seen in 1.0-28.6% of patients (Table 4). These reported complication rates and reoperation rates have to be interpreted carefully, because they have been described poorly. In the Charité trial, the overall complication rates published by Blumenthal et al. were 29.1% for TDR and 50.2% for fusion at 2-year follow up. Device failures necessitating reoperation were reported in 5.4% of patients in the TDR group and 9.1% of patients in the fusion group at 2-year follow up (Table 5). However, in the FDA report on the Charité trial, much higher rates of adverse events (TDR group 181.9% and fusion group 189.6%) were reported. An article by McAfee et al., analysing the incidence of reoperations, reported even higher reoperation rates in the Charité trial (6.3% in the TDR group and 10.1% in the fusion group).
In the ProDisc trial, there was a similar discrepancy between the article and the FDA report. The overall complication rates as reported by Zigler et al. were 7.3 and 6.3% for TDR and fusion, respectively, but the FDA report on the ProDisc trial listed much higher rates of adverse events (TDR group 255.5% and fusion group 270.7%). Reoperation was necessary for 3.7% of TDR patients and 5.4% of fusion patients according to Zigler et al. The number of patients needing a reoperation was similar in the FDA report; however, the number of patients included in the trial was higher, so the reoperation percentage in the FDA report was slightly higher (Table 5). Geisler et al. analysed only the neurological complications in the Charité trial. The incidence was no higher in patients with the Charité (16.6%) than in patients with BAK fusion (17.2%) (P > 0.3). Major neurologic complications were reported in 4.9% of the Charité group (e.g. burning or dysesthetic leg pain, motor deficit at the index level, nerve root injury) and in 4% of the fusion group (e.g. burning or dysesthetic leg pain, motor deficit at the index level). One device-related major complication, a nerve root injury, was reported in the TDR group. Leary et al. reported on 18 patients requiring an anterior revision procedure for repositioning or removal of the Charité prosthesis because of complications. Three patients required revision of two levels. One patient had both levels revised in a single procedure, whereas two patients required staged procedures in order to revise both implants. Therefore, 21 implants were revised via 20 anterior procedures in 18 patients. Six revision cases were performed within the early postoperative period (7-14 days), all as a result of implant migration or dislocation. Late revisions were required in 14 cases (range 3 weeks to 4 years) due to implant migration, dislocation, end plate fractures, subsidence or persistent low back pain. Van Ooij et al. reported, in several publications, on patients who experienced complications following implantation of the Charité prosthesis. Over the last 10 years, 75 patients with persisting back and leg pain who were unsatisfied with their clinical condition have been seen and analysed. An overview of late complications after TDR: subsidence (n = 39), prosthesis too small (n = 24), adjacent disc degeneration (n = 36), degenerative scoliosis (n = 11), facet joint degeneration on CT scan (n = 25), anterior migration (n = 6), posterior migration (n = 2), breakage of metal wire (n = 10), wear (n = 5), severe osteolysis (n = 1), subluxation of the PE core (n = 1). Forty-six of these 75 patients needed one or more salvage operations after their TDR. Fifteen patients underwent posterior fusion without removal of the prosthesis; because of persisting pain, four of them afterwards had their prosthesis removed in an additional operation. In 22 patients, 26 prostheses were removed and an anterior and posterior fusion was performed. In addition, seven patients received posterior fusion elsewhere, and in two patients the disc prosthesis was removed elsewhere. Intraoperatively, the surgeon encountered vessel damage three times. In conclusion, a wide range of complication rates following TDR (1.0-91.0%) was found in the cohort studies. The majority of the studies reported complication rates ranging from 10 to 40%. Reoperation at the index level was reported in 1.0-28.6% of patients.
In the three published randomized controlled trials, overall complication rates ranged from 7.3 to 29.1% in the TDR group and from 6.3 to 50.2% in the fusion group. The overall reoperation rate at the index level ranged from 3.7 to 11.4% in the TDR group and from 5.4 to 26.1% in the fusion group. However, much higher rates were reported in the FDA reports on the Charité and ProDisc trials. Discussion In this article, we systematically reviewed the available literature on the clinical course, effectiveness, cost-effectiveness, and safety of TDR in patients with symptomatic DDD. Sixteen prospective cohort studies were identified that assessed the course of complaints and symptoms. These studies suggest pain relief, improvement in functional status and patient satisfaction after TDR. However, the quality of reporting on outcomes was often poor, hampering an adequate interpretation. In addition, a significant number of complications was reported. These cohort studies lacked a control group, which is necessary to evaluate the effectiveness of TDR. Only three randomized controlled multicentre trials were identified that had assessed the effectiveness of TDR. The results show that there is low quality evidence (based on one study only with a low risk of bias) that there are no clinically relevant differences on the primary outcome measures between the Charité group and the BAK cage at 2-year follow up, and there is very low quality evidence (based on one study only with a high risk of bias) that there are no clinically relevant differences on the primary outcome measures at 5-year follow up. Furthermore, there is very low quality evidence (based on one study only with a high risk of bias) for contradictory results on the primary outcome measures for the ProDisc when compared with anterior lumbar circumferential fusion at the 2-year follow up. There is insufficient evidence on the Flexicore, because this trial had a high risk of bias and should be considered a preliminary report, as it only reported on a small proportion of all included patients who participated in this multicentre trial. For assessing the complication rate, all reported complications were extracted from the cohort studies and randomized controlled trials included in this review, as well as from overview studies on complication rates. A wide range of complication rates following TDR (1.0-91.0%) was found in the cohort studies. The majority of the studies reported complication rates ranging from 10 to 40%. Reoperation at the index level was reported in 1.0-28.6% of patients. In the three randomized controlled trials, overall complication rates ranged from 7.3 to 29.1% in the TDR group and from 6.3 to 50.2% in the fusion group. The overall reoperation rate at the index level ranged from 3.7 to 11.4% in the TDR group and from 5.4 to 26.1% in the fusion group. However, much higher rates were reported in the FDA reports on the Charité and ProDisc trials. No full economic evaluation was identified, so there is no evidence regarding the cost-effectiveness of TDR. The course of DDD complaints and/or symptoms following total disc replacement surgery We identified 16 prospective cohort studies to evaluate the course of DDD complaints and/or symptoms. The outcome results suggested a positive course after TDR, with a high proportion of patients satisfied with the result. However, these studies were of poor methodological quality and detailed information on how outcomes were measured was often lacking.
For example, it was often unclear which criteria were used for clinical success and how return to work was measured. Furthermore, another drawback is that a significant number of complications was reported as well. Moreover, these results have to be interpreted in light of the controversy and limited literature regarding the causal relationship between DDD and chronic low back pain. Boden et al. reported on 67 asymptomatic individuals assessed for DDD with MRI. DDD was seen in 34% of the individuals between 20 and 39 years of age, in 59% of individuals between 40 and 59 years of age, and in all but one (93%) between 60 and 80 years of age. Jensen et al. reported on 98 asymptomatic people after MRI and concluded that 64% of these people had an intervertebral disc abnormality. This challenges the rationale of surgery for DDD in the absence of convincing pathological pathways of DDD. The effectiveness of total disc replacement surgery compared to other treatments The Flexicore trial should be interpreted with great caution because of the high risk of bias. Of the three randomized controlled trials, the 2-year follow up of the Charité trial was considered to have a low risk of bias. However, the fusion technique with BAK cages and the iliac crest bone autograft used in this trial are techniques that are no longer commonly used because of poor outcomes. A better comparator would be the circumferential fusion technique which was used in the ProDisc and Flexicore trials. The use of autograft in all three studies may also be criticized, as many surgeons now use recombinant BMP-2 and/or percutaneous pedicle screw fixation when performing lumbar fusion. Use of an inadequate control intervention brings into question the clinical relevance of the results of the three trials. An additional concern is the fact that the literature is still controversial about the superiority of fusion compared to conservative treatment. For this reason, it would be interesting to compare the effectiveness of TDR to conservative treatment. At present, no studies comparing total disc replacement surgery to conservative treatment have been published. The three randomized controlled trials selected patients carefully, scrutinizing various contraindications for TDR. Because of this careful selection, the published trials do not provide evidence for the widespread use of TDR in all patients with DDD. The relevance of the clinical outcomes in the Charité and the ProDisc trials can also be challenged. First, modest success rates were observed in both the TDR and the fusion groups. In the Charité trial, only 57.1% of patients with TDR met all 4 criteria for success, when compared with 46.5% in the fusion group (P < 0.0001). In the ProDisc trial, only 53.4% of patients with TDR met all 10 FDA criteria for success, when compared with 40.8% in the fusion group (P = 0.0438). Second, in the ProDisc trial, 69.1% of TDR subjects improved by more than 25% on the Oswestry, when compared with 54.9% in the fusion group. In the Charité trial, 63.9% of TDR subjects improved by more than 25% on the Oswestry, compared to 50.5% in the fusion group. The use of the 25% benchmark for improvement should be interpreted against the background of a recently published consensus statement that advocates a 30% improvement in Oswestry as a benchmark for clinically relevant improvement. This recommendation focussed primarily on conservative interventions in a primary care setting.
It was suggested that it might be more appropriate to use larger change scores as benchmarks for expensive and risky procedures. Third, one of the purposes of device implantation is to reduce low back pain, whereas the definition of success did not consider pain relief or opioid use. Finally, the Oswestry and VAS cannot discriminate between residual pain from the iliac crest after fusion surgery and pain from the lumbar spine. Therefore, Oswestry and VAS scores may be artificially higher in the fusion group compared with TDR. The ODI was used in all included RCTs, but different versions of the ODI were used. Sasso used the ODI v2.0, Blumenthal used the ODI v1.0 and Zigler used the ODI (chiropractic revised version). Because different versions of the ODI were used, a direct comparison between studies is hampered. Zigler, however, holds the opinion that the differences between the various ODI versions are subtle and inconsequential. Davidson and Fairbanks hold the opinion that the amendments of this 'chiropractic revised version' are major and that this version therefore cannot be compared with the official versions of the ODI. The safety of total disc replacement surgery Complications have been poorly described in the prospective cohort studies and the randomized controlled trials. It is notable that the complication rates and reoperation rates are lower in the published articles than in the FDA reports. This illustrates the complexity of reporting on adverse effects. Compared with the journals in which the papers were published, the FDA apparently requires exhaustive and detailed reporting of 'adverse events', most of which have no relationship to the success or failure of the prosthesis. Complications associated with lumbar fusion include incomplete relief of pain, loss of motion, loss of sagittal balance, pseudoarthrosis, adjacent segment degeneration, and bone graft donor-site complications. However, a separate set of concerns exists for TDR. Wear debris may lead to osteolysis and systemic effects, and vertebral body damage, posterior migration or extrusion may lead to device failure and serious vascular complications. Prostheses that fail to adequately replicate the physiologic kinematics of the lumbar spine may predispose the patient to facet joint degeneration. Without true motion preservation, the devices will merely act as interbody spacers with no potential to prevent adjacent level degeneration. Finally, reported complications of TDR show that these can be severe and even life-threatening, e.g. major vascular injury, major nerve root damage and device failure; however, the rates of these complications are low. Furthermore, in the two studies with a low risk of bias, the reoperation rates in the TDR group are slightly higher than in the fusion group. However, this has to be balanced against the fact that reoperation procedures for TDR are more complex. The use of intervertebral disc prostheses as an alternative to spinal fusion has been advocated to preserve segmental motion and to prevent adjacent degeneration. However, there is no consensus on this subject in the literature. Some studies suggest that adjacent level degeneration is prevented after TDR; however, other studies show adjacent disc degeneration after TDR. This could be the result of the DDD itself spreading to multiple levels of the spine, and/or a consequence of stresses on adjacent levels generated by unphysiological motion and functioning of the disc prosthesis. Moreover, there is little knowledge regarding complications in the long term.
Putzier et al. published a retrospective study with 17 years of follow up, in which reoperation was necessary in 11% of patients. It is important to know more about long-term complications because most operated patients are of relatively young age, between 30 and 50 years. A disc prosthesis used for TDR should survive for at least 40 years. It is very questionable whether the lifetime of the designs now available will be that long, as little is known about the long-term behaviour of biomaterials in the spine. We do know that revision surgery can be dangerous because of adherence to the great vessels and the nerve plexus. Studies that review long-term complications and the longevity of the prostheses are highly recommended. Conclusion There is low quality evidence that there are no clinically relevant differences on the primary outcome measures between the Charité group and the BAK cage at 2-year follow up, and there is very low quality evidence that there are no clinically relevant differences on the primary outcome measures at 5-year follow up. For the ProDisc device, there is very low quality evidence for contradictory results on the primary outcome measures when compared with anterior lumbar circumferential fusion. Furthermore, reported complication rates varied from 1.0 to 91.0% in cohort studies and from 7.3 to 29.1% in randomized controlled trials. Still lacking are high quality prospective, controlled, long-term follow-up studies, with relevant control groups and including a full economic evaluation taking into account all relevant costs when compared with the clinical benefit, to establish the effectiveness and the longevity of the devices. The existing evidence, specifically regarding long-term effectiveness and/or safety, is considered insufficient to justify the widespread use of TDR over fusion for single-level degenerative disc disease. It is recommended that disc replacement surgery at this time only be performed within prospective scientific studies until further documentation of its effectiveness is provided.
Lyle Township, Mower County, Minnesota History J.D. Woodbury built a log cabin along the Cedar River in section 33 (about two and a half miles west of the modern-day town of Lyle) in 1853 which he then sold to Benjamin Coe in 1855 before moving to Olmsted County. Section 4 started to be settled in 1854 by Eben Merry as well as by James Foster and his son, Return. John Tift also came in 1854 and started the small settlement of Troy. The township was officially organized in 1858 and it was later named for Robert Lyle, an early settler from Ohio. He came to the area in 1856 and went on to become judge of probate and an elected representative of the region in the Minnesota House of Representatives. He stayed in the area until 1868 when he moved to Missouri. Geography According to the United States Census Bureau, the township has a total area of 35.5 square miles (91.9 km²), all of it land. Demographics As of the census of 2000, there were 402 people, 146 households, and 113 families residing in the township. The population density was 11.3 people per square mile (4.4/km²). There were 157 housing units at an average density of 4.4/sq mi (1.7/km²). The racial makeup of the township was 99.00% White, 0.25% Native American, 0.25% Asian, and 0.50% from two or more races. There were 146 households out of which 34.2% had children under the age of 18 living with them, 71.2% were married couples living together, 2.7% had a female householder with no husband present, and 22.6% were non-families. 17.1% of all households were made up of individuals and 5.5% had someone living alone who was 65 years of age or older. The average household size was 2.75 and the average family size was 3.12. In the township the population was spread out with 23.9% under the age of 18, 11.4% from 18 to 24, 27.4% from 25 to 44, 27.1% from 45 to 64, and 10.2% who were 65 years of age or older. The median age was 38 years. For every 100 females, there were 119.7 males. For every 100 females age 18 and over, there were 120.1 males. The median income for a household in the township was $46,667, and the median income for a family was $46,500. Males had a median income of $30,000 versus $24,444 for females. The per capita income for the township was $19,116. About 6.6% of families and 8.4% of the population were below the poverty line, including 6.6% of those under age 18 and none of those age 65 or over. Troy The old town of Troy was on the Cedar River in sections 4 and 8, about eight miles (13 km) south of Austin. It was settled on March 24, 1857 by John Tift and it once had a dam, sawmill, grist mill and a hotel. It suffered the same fate as Cedar City in the spring of 1858. Cedar City John Chandler came to Lyle Township from Milton, Ontario in 1856 and took up a claim in section 4. He later gave up his rights to his claim in favor of Caleb Stock and John Phelps. These two men, along with T.N. Stone, built a dam of stones and timber and then built a sawmill and grist mill behind it. The rains of the spring of 1858 were unusually heavy and the new dam gave way to the unexpected water levels. The dam and mills were never rebuilt, but some of the old residents of the short-lived settlement lie in rest at nearby Cedar City Cemetery.
Consider This: honoring the memory of Texas Tech Officer Floyd East Jr. LUBBOCK, TX (KCBD) - One year ago, Texas Tech University Police Officer Floyd East Jr. was killed by a student he had taken into custody. The expressions of grief and concern for Officer East and his family reminded us all of what a special place Texas Tech is. But there was a darker reminder that night. It is the reminder that on any given day, a peace officer may not live to finish his shift. The women and men who serve as police, deputies, troopers and rangers know this all too well. And so do their families. Officer Floyd East, like too many who have gone before him, sacrificed his life in the service of what he loved to do, and for the safety and well-being of others. Consider this … I ask you to join me in honoring the service and sacrifice of Officer Floyd East. And we honor all of those who serve in law enforcement every day.