Rapid Ultrastructural Changes of PSD and Extrasynaptic Axon-spine Interface Membrane during LTP Induced in Single Dendritic Spine. Structural plasticity of dendritic spines is considered to be the basis of synaptic plasticity, learning and memory. Here, we performed ultrastructural analysis of spines undergoing LTP using a novel high-throughput correlative light-electron microscopy approach. We found that the PSD displays rapid (< 3 min) reorganization of its nanostructure, including perforation and segmentation. This increased structural complexity is maintained over intermediate and late phases of LTP (20 and 120 min). In a few spines, segmented PSDs are connected to different presynaptic terminals, producing a multi-innervated spine in the intermediate and late phases. In addition, the area of extrasynaptic axon-spine interface (eASI) displayed a pronounced, rapid and sustained increase. Finally, presynaptic vesicle number increased slowly and became significantly higher at late phases of LTP. These rapid ultrastructural changes in PSD and surrounding membrane, together with the slow increase in presynaptic vesicle number, likely support the rapid and sustained increase in synaptic transmission during LTP.
Effects of histamine-1 receptor antagonism on leukocyte-independent plasma extravasation during endotoxemia. PURPOSE The purpose of this study was to investigate the role of histamine in mediating leukocyte-independent microvascular permeability and mast cell activation during endotoxemia. Microvascular permeability and mast cell activity were determined after inhibition of L-selectin-mediated leukocyte adherence by fucoidin and after inhibition of histamine effects by the histamine H1-receptor antagonist diphenhydramine. MATERIALS AND METHODS In male Wistar rats, leukocyte rolling, leukocyte adherence, microvascular permeability, and mast cell activity were determined in mesenteric postcapillary venules using intravital microscopy. After pretreatment with the histamine H1-receptor antagonist diphenhydramine, animals in the ETX/H1-ANT group received a continuous infusion of endotoxin. Animals in the ETX group underwent the same procedure, but received saline 0.9% instead of diphenhydramine. In both groups, leukocyte adherence was prevented by administration of fucoidin. Animals in the control group received volume-equivalent saline 0.9%. RESULTS In the endotoxin-challenged groups, fucoidin prevented leukocyte rolling and reduced leukocyte adherence to values comparable to those of the control group. In the ETX group and the ETX/H1-ANT group both microvascular permeability and mast cell activity increased significantly, starting at 60 minutes. Differences in mast cell activity between the ETX group and the ETX/H1-ANT group were significant at 60 minutes and at 120 minutes. Differences in microvascular permeability between the ETX/H1-ANT group and the ETX group were not significant. CONCLUSIONS The leukocyte-independent microvascular damage during early endotoxemia cannot be inhibited efficiently by the H1-receptor antagonist diphenhydramine, indicating that histamine seems to play only a minor role in this pathophysiology. Furthermore, mast cells do not seem to be involved in the development of leukocyte-independent plasma extravasation during endotoxemia.
Russian elites believe they can do what they want, whether in Kiev or London, because whatever the Westerners say, they are all for sale in the end. LONDON—Back in 2006, an energy company called Rosneft floated itself on the London Stock Exchange. Even for a Russian company, its prospectus, as I noted at the time, contained some unusual warnings. “Crime and corruption could create a difficult business climate in Russia,” the document noted; some directors’ interests “may cause Rosneft to engage in business practices that do not maximize shareholder value.” This was only fitting: Rosneft was created by a blatant act of thievery. A couple of years earlier, the Russian government had forced another oil company, Yukos, into bankruptcy by demanding $30 billion in back taxes and eventually sending its chairman, Mikhail Khodorkovsky, to a labor camp (from which he was recently released). Yukos’ assets were then sold to a mystery company that gave its address as that of a vodka bar in Tver. The mystery company in turn sold the property to Rosneft for a pittance, and no wonder. Rosneft’s major shareholder was the Russian government. Its CEO was President Vladimir Putin’s deputy chief of staff. As I also wrote at the time, the Rosneft sale established a principle: Illegally acquired Russian assets can receive the imprimatur of the international financial establishment, as long as they are sufficiently valuable. This general principle has since been applied to many Russian assets in the United States and Europe, especially Britain. American readers may not realize the extent to which millionaires and billionaires from the former Soviet Union dominate the London art and property markets. Some of that money represents oil profits. But some of it comes from theft. That tacit decision to accept all Russian money at face value has come home to roost in the past week. Some of the general European reluctance to apply economic sanctions to Russia is of course directly related to the Russian investments, interests, and clients of European companies and banks. But in fact, the laundering of Russian money into acceptability, in both Europe and the United States, has had far more important consequences in Russia itself. Most Russians don’t draw a line, as we do, between “economic issues” on the one hand and “human rights” on the other. In Russia, corruption and human rights are actually one and the same issue. The Russian elite controls the media and represses dissent precisely because it wants to protect its wealth. At the same time, the elite knows that its wealth derives directly from its relationship to the state, and thus it cannot afford to give up power in a democratic election. Western politicians who speak grandly about democracy while ignoring violations of their own anti-corruption laws back home are thus perceived in Russia as hypocrites. They also contribute to the Russian elite’s feeling of impunity: Putin and his colleagues can do what they want, whether in Ukraine, Georgia, or London, because everyone knows that whatever the Westerners say, they are all for sale in the end. This doublespeak also grates with the beleaguered Russian opposition, which no longer sees the West as particularly friendly or even appealing: No one wants to hear about human rights from someone whose businessmen are funding an oppressive state. But this state of affairs is not inevitable. 
Without sending a gunship or pressing a reset button, we could change our relationship overnight with the Russian government—and with ordinary Russians—simply by changing our attitude toward Russian money. It shouldn’t require a Russian invasion of Crimea to persuade Western governments to band together and deny visas to someone whose wealth comes from corrupt practices. It shouldn’t require a threatened Russian attack on eastern Ukraine for us to shut down the loopholes and tax havens we’ve created in the British Virgin Islands or the Swiss Alps. After all, this is money that corrupts our societies, too. The Western financial elite that has become dependent on foreign oligarchs’ cash is the same elite that donates to political parties and owns television stations and newspapers at home. The ex-politicians who sit on the boards of shady companies still have friends in power. But now there has been an attack. The Russian president has broken a series of international treaties, some of which were originally designed to protect Russia’s rights to its naval base in Crimea. In his public statements and actions, that same president has openly revealed his disdain for the West. His state-owned television channels are publishing stories that are verifiably untrue. This is our wake-up call: Western institutions have enabled the existence of a corrupt Russian regime that is destabilizing Europe. It’s time to make them stop.
A petition launched by Wikipedia founder Jimmy Wales to halt the extradition to the US of Sheffield Hallam University student Richard O'Dwyer has garnered 160,000 signatures in less than five days. O'Dwyer, 24, faces up to 10 years in US prison for alleged copyright offences relating to TVShack.net, a website that provided links to places where users could watch TV shows and films online. Wales's petition, which calls on the home secretary, Theresa May, to revoke her permission to extradite O'Dwyer, has picked up more than 75,000 signatures in the last 24 hours alone after being circulated among US supporters of Change.org. The petition is now the fastest-growing Change.org petition in the UK. In the Guardian article that launched the campaign, Wales said the extradition represented a battle between the film industry and the general public. "Given the thin case against him, it is an outrage that he is being extradited to the US to face felony charges for something that he is not being prosecuted for here," Wales said. "No US citizen has ever been brought to the UK for alleged criminal activity that took place on US soil." O'Dwyer's extradition has been opposed by figures from all three political parties, including the culture, media and sport select committee members Louise Mensch and Tom Watson; the Liberal Democrat president, Tim Farron; and the chair of the home affairs committee, Keith Vaz. Graham Linehan, the writer of sitcoms The IT Crowd, Black Books and Father Ted, also signed the petition. He said: "It just seems to me that people like Richard are being punished for being able to navigate the modern world. The internet has changed everything. They're doing what comes naturally in these new uncharted waters and suddenly they're getting their collars felt by people who still have Hotmail addresses. "The internet means that commerce and communication and culture and morality is changing, and changing so fast that we struggle to keep up." Richard's mother, Julia O'Dwyer, who has campaigned for her son online and at protests for much of the last year, has welcomed the efforts to date and called on the Home Office to take note. She said: "I'm blown away by the response to Jimmy's petition. It's been a tough year campaigning for my son, but this outpouring of support from around the world has really made politicians sit up and take note of Richard's case. Now it's time for Theresa May to do the right thing by Richard." In a statement issued earlier this week, the Home Office stood by its decision to give permission to extradite O'Dwyer. "Richard O'Dwyer is wanted in the US for offences related to copyright infringement," it said. "The UK courts found there were no statutory bars to his surrender under the Extradition Act 2003 and, on 9 March, the home secretary, having carefully considered all relevant matters, signed an order for his extradition to the US. "Mr O'Dwyer has appealed against the decision of the district judge and an appeal hearing will be held in due course."
As we prepare to open two new IDEA public charter schools in El Paso, I am thinking a lot about high expectations. On some level, this is expected. After all, our motto is College For All, and at our 79 schools across the Rio Grande Valley, San Antonio, Austin, Baton Rouge, Louisiana, and now El Paso, we seek to instill high expectations in each student the moment they walk through the door. In kindergarten, we assign them to houses named after prestigious colleges and universities. By middle school, they are taking pre-Advanced Placement (AP) classes and receiving college counseling. By the end of high school, they have visited more than 30 colleges and universities and taken 13 AP courses, readying them for the rigors of higher education and, in many cases, earning college credits before they graduate. Over the past decade, IDEA has shown that high expectations, combined with careful preparation and hard work, can have tremendous results. As a tuition-free public charter school, IDEA draws its student body from the same mix of students as the traditional ISDs, and it has largely done so in areas that look much like El Paso. Our outcomes, however, are very different from those of most schools serving lower-income Texas communities: for 12 years, every one of our graduating seniors has been accepted to a college or university. At a time when more and more employers are looking for highly skilled, adaptable employees and civic involvement demands engaged, critical thinking, raising the level of academic attainment and closing the achievement gap has never been more important. We are proud to play a role in this movement. But when I think about high expectations these days, it is only partially about our students. Because to have high expectations of them, we must have high expectations of each other. We must be willing to ask the same things of ourselves, our staff, and our community as we do of our students—hard work, preparation, and persistence. At IDEA, we model what we expect from our students with our own actions and commitments. For example, to ensure that IDEA El Paso’s schools are best positioned to replicate the success of our other campuses, twelve El Paso teachers recently moved themselves—and, in some cases, their families—to the Rio Grande Valley to participate in IDEA’s Teacher Fellowship program. Over the past year, they learned our program, our model, and our ideals from the inside out—as teachers at established IDEA schools. They will be returning to El Paso this summer to help launch our two new schools, IDEA Edgemere and IDEA Rio Vista, which open in August. High expectations from the community are needed as well. I recently had the pleasure of participating in the Greater El Paso Chamber of Commerce’s State of Education event, where education and business leaders gathered to discuss ways we can create a college-ready culture in El Paso. The Council on Regional Economic Expansion and Educational Development (CREEED) led the conversation by discussing our region’s current educational attainment rates and where we need to be, while other local educators and I discussed the innovative approaches we’ve taken in our schools to increase student success. Community organizations like CREEED are an essential component of our education landscape. 
With their eye on increasing college readiness and their connections to the most effective approaches to education, we need their involvement to ensure that our region is providing the best options for our students. It was clear from this event that everyone understands that our region’s future rests with its young people, and that their success in life and career will be shaped by their educational attainment. It was a good start. But in order to close the achievement gap for all students and send more to college, we must broaden the conversation. We must extend our high expectations beyond the classroom and embrace them in the living room and around our neighborhoods. As we work together to prepare El Paso’s children for college and career, we are helping chart a new course for our community. Ernie Cantu is executive director of IDEA Public Schools in El Paso.
Timing of high-dose therapy in the era of new drugs. The treatment approach in patients with multiple myeloma (MM) has changed fundamentally with the introduction of novel agents such as thalidomide, bortezomib and lenalidomide. In patients eligible for autologous stem cell transplant, combinations of novel agents with chemotherapy have become established induction regimens. These new induction regimens have significantly increased the rate of complete remission before and after autologous stem cell transplant, with a positive impact on the length of progression-free survival, and offer the possibility of further improvement through consolidation therapy or maintenance therapy with thalidomide and lenalidomide. These results offer new perspectives in the treatment of MM, with a reasonable hope of cure.
Growth factor regulation of neural chemorepellent Sema3A expression in satellite cell cultures. Successful regeneration and remodeling of the intramuscular motoneuron network and neuromuscular connections are critical for restoring skeletal muscle function and physiological properties. The regulatory signals of such coordination remain unclear, although axon-guidance molecules may be involved. Recently, satellite cells, resident myogenic stem cells positioned beneath the basal lamina and at high density at the myoneural junction regions of mature fibers, were shown to upregulate a secreted neural chemorepellent, semaphorin 3A (Sema3A), in response to in vivo muscle-crush injury. The initial report on that expression centered on the observation that hepatocyte growth factor (HGF), an essential cue in muscle fiber growth and regeneration, remarkably upregulates Sema3A expression in early differentiated satellite cells in vitro. Here, we address the regulatory effects of basic fibroblast growth factor (FGF2) and transforming growth factor-betas (TGF-βs) on Sema3A expression in satellite cell cultures. When treated with FGF2, Sema3A message and protein were upregulated, as revealed by reverse transcription-polymerase chain reaction and immunochemical studies. Sema3A upregulation by FGF2 was dose dependent, with a maximum (8- to 1-fold relative to the control) at 2.5 ng/ml (150 pM), and occurred exclusively at the early differentiation stage. The response was highly comparable in dose response and timing to the effects of HGF treatment, without any additive or synergistic effect from treatment with a combination of both potent upregulators. In contrast, TGF-β2 and -β3 potently decreased basal Sema3A expression; the maximum effect was at very low concentrations (40 and 8 pM, respectively) and completely cancelled the activities of FGF2 and HGF to upregulate Sema3A. These results therefore encourage the prospect that a time-coordinated increase in HGF, FGF2, and TGF-β ligands and their receptors promotes a programmed strategy for Sema3A expression that guarantees successful intramuscular motor reinnervation by delaying sprouting and reattachment of motoneuron terminals onto damaged muscle fibers early in regeneration, pending restoration of muscle fiber contractile integrity.
Texas coach Mack Brown is expected to resign by the end of the week, according to a source. "I know Mack, he's a friend, this is his decision, but he wants to tell his players and staff and not read it on the Internet," the source told ESPN. "That's why he reacted strongly to the [Orangebloods.com] report. "I'd be real surprised if it hasn't happened by Friday night with the [Texas] football banquet. I think it will be taken care of. It wouldn't drag on much longer." Orangebloods.com first reported Tuesday afternoon that Brown would step down after 16 years as the Longhorns' coach. Later Tuesday, Brown texted the website Horns247: "I haven't seen [the] article. I'm in Florida recruiting. If I had decided to step down, I sure wouldn't be killing myself down here. I have not decided to step down." On Wednesday morning a source said that "there's nothing new today. I am expecting that Friday will be the day (of Brown's announcement)." The source said Tuesday, though, that discussions have been ongoing with Brown, Texas president Bill Powers and Brown's agent, Joe Jamail. "The talks were very friendly, and the conclusion was Mack would step down in the next couple of days," the source told ESPN. However, the source said Jamail is participating in a trial in Beaumont, Texas, which has slowed the process, and that there are a "lot of logistics" to work out. "Such as when he leaves, what his role will be," the source said. "A myriad of things that have to be worked out." The source reiterated Brown would not be coaching at Texas in 2014. "By the end of the week, that will be the outcome," the source told ESPN. "That will happen. It's a shame after 16 years he's not able to do it on his own with dignity and grace." Another source told ESPN's Joe Schad that Brown has had active discussions with Texas officials about his intention to resign and that there is a good chance it will become official this week. In a statement, athletic director Steve Patterson said: "We continue to discuss the future of Texas football. Mack Brown has not resigned. And, no decisions have been made." While nothing officially has been announced regarding Brown's future with the Longhorns, Texas already has a short list of candidates to replace him and it includes San Francisco 49ers coach Jim Harbaugh, sources told ESPN Senior NFL Insider Chris Mortensen.
Analysis of students' understanding in a scientific writing technique course. The purpose of this research is to assess students' understanding of how to write scientific papers. The research was conducted with 42 Chemistry Education students of the Mathematics and Natural Sciences Faculty, Islamic University of Indonesia, in the 2018/2019 academic year. Data were collected using a multiple-choice test whose instrument comprises ten indicators for assessing students' understanding of scientific paper writing technique. The results showed that students' understanding of writing scientific papers is good overall: eight of the ten indicators received scores of 70% or more, placing them in the good and very good categories, while the other two indicators scored below 70%, in the less good and sufficiently good categories.
Single-use or disposable systems are rapidly increasing in the biopharmaceutical industry due to the flexibility and cost-effectiveness of such systems. Disposable components in the systems are presterilized and qualified to regulatory requirements. Disposable systems are easy to adapt to different production purposes, and it is easy and inexpensive to change a product line while good process reliability is maintained, if not improved. There are several kinds of mixing systems in which disposable containers or bags may be used. One type of such mixing system is a bioreactor in which cells or microorganisms can grow. Mixing systems are also used to prepare, for example, buffers and media. The mixing systems may comprise a vessel which houses a disposable bag or container. The vessel may have the form of a cylinder, for example a substantially circular cylinder. The bag is placed inside the vessel in an accurate manner so that, for example, different pipelines, mixers and sensors can be connected to the bag properly and accurately. US2011/0310696 shows a mixing system of this kind. To be able to place a disposable bag inside a vessel in an accurate manner, easy access to the inside of the vessel is important. The vessels may vary in size and can be adapted for bags of from about 20 liters up to about 2000 liters. Especially large mixing vessels housing a bag of the disposable type require a very large floor area, which may be a limiting factor. Therefore, there is a need for an arrangement that provides easy access to the inside of the vessel in a space-saving manner.
Comparing two types of botulinum A toxin detrusor injections in patients with severe neurogenic detrusor overactivity: a case-control study. To compare the efficacy of two types of botulinum toxin type A (BTX-A), Dysport™ (Ipsen Ltd, Slough, UK) and Botox™ (Allergan Inc., Irvine, CA, USA), and to examine the possible dose-effect relation for Dysport in these patients, as multifocal detrusor injections with BTX-A are effective for severe neurogenic detrusor overactivity in adults.
Public Attention Formation in the "Diet Kantong Plastik" Social Movement. The social movement to reduce plastic use was initiated by the Indonesian Plastic Bag Diet Movement (GIDKP). The development of digital communication technology has amplified conversations and efforts to raise awareness of environmental issues. This research uses a qualitative approach with a case study design. Data were collected through interviews, observation, and documentation, and analyzed inductively through data reduction. The study found that the rise of social media has accelerated the flow of information. GIDKP has attracted the public's attention with its informative and consistent content and messaging, and its use of prominent actors and timing of information broadcasts are likewise consistent. However, the gap between the issue and ordinary individuals still needs to be closed. This research implies that diversifying actors, increasing the frequency of information on digital channels, and combining offline and online activities can attract greater public attention. Keywords: Public Attention, Digital Based, Social Movement, Plastic Bag Diet
AFTER more than five years of negotiations, representatives from 12 countries in Asia and the Americas finally struck a deal today on the Trans-Pacific Partnership, an ambitious and contentious free-trade pact. It is the biggest and deepest multilateral trade deal in years, encompassing countries that account for 40% of the world’s economy. But it might prove even more important than that if it succeeds in its ambition to “define the rules of the road” for trade in Asia, as Michael Froman, America’s lead negotiator, put it. Mr Froman’s office estimates that TPP will see more than 18,000 tariffs on American products reduced to zero. But tariffs, which have already been greatly reduced among TPP’s members, are not the most touted bit of the treaty. More important are the minimum standards for the protection of intellectual property, workers and the environment. All parties will be compelled to follow the International Labour Organisation’s basic principles on workers’ rights, for example. By the same token, countries that do not live up to the deal’s environmental rules can be pursued through the same dispute-settlement mechanism that will be used to adjudicate commercial grievances. There are even rules barring countries from favouring state-owned enterprises—a big step for the likes of Malaysia and Vietnam. Two leaders will be particularly pleased to see a deal done. For Barack Obama, TPP represents the first (and possibly only) lasting evidence of his administration’s “pivot” towards Asia. It shows America’s continued commitment to the region, and its unwillingness to cede primacy to China. China’s success in recruiting American allies as founding members of its Asian Infrastructure Investment Bank earlier this year seems to have prompted America to redouble its efforts to square TPP away. Shinzo Abe, Japan’s prime minister, sees in TPP a chance to help the “third arrow” of his plan for economic revitalisation hit its mark. Big interest groups such as Japan’s farmers will no longer be quite so cosseted. Meanwhile, Mr Abe hopes that the promise of greater market access for Japanese exporters, at a time when the yen is relatively weak, will generate faster economic growth. In particular, TPP should boost trade between America and Japan—something to celebrate, since the pair are the world’s biggest and third-biggest economies. The stakes are lower for a group of other rich members—Australia, Canada, New Zealand—each of which nonetheless fought to extract concessions from America. Australia succeeded in trimming the period of protection from generic imitators that America demanded for biologic drugs from 12 years to eight; Canada preserved its quota system for various agricultural products, allowing only limited duty-free imports; New Zealand won greater access for its dairy exports. The full implications of the deal are not yet known, however, since TPP has been negotiated under a thick blanket of secrecy. This was intended to make it easier for the signatories to offer concessions without being pilloried at home. But it has stoked the anxieties of industry groups on both sides of the Pacific. It will be weeks before the agreement’s 30 chapters are translated and published in full. Moreover, lawmakers in the 12 participating countries must now approve the agreement. This should be straightforward in places like Japan, where the ruling party has a commanding majority. 
But Canada faces a knife-edge election on 19th October. One of the three main parties is campaigning against TPP, arguing that it will kill farm jobs. The biggest row will be in America, where Congress has 90 days to review the deal before putting it to an up-or-down vote, with no amendments. Although Republicans, traditionally the party of free trade, have a majority in both houses of Congress, they are divided on TPP’s merits. Donald Trump, a candidate for the Republican presidential nomination next year, has described it as “an attack on America’s business”. Hillary Clinton, the leading Democratic presidential contender, has also refused to endorse the deal, albeit not quite so flamboyantly. Such opposition is ill-advised. The slowing of the Chinese economy and a tepid global recovery from the financial crisis have led to a long-term slowdown in world trade. Indeed by some measures, trade is actually declining. This is worrying because trade remains the most reliable way for poor countries to become richer. TPP would undoubtedly help spur it, especially for the poorer members of the club. Moreover, TPP’s members claim that they are open to other countries joining the deal. That holds out the prospect of TPP not only freeing trade, but also of instituting a more predictable, rules-based business environment, even in places currently excluded from the deal. Its biggest failing—that it does not include China—could evaporate, if TPP’s members have the courage to push on.
1. Field of the Invention The present invention relates to the field of long line fishing gear and more particularly to an improved closure to attach a ganglion to a long line. More generally, the present invention relates to closure devices and more particularly to an improved method of removably attaching an object to a line. With many commercial fishing arrangements a main fishing line or long line is set, cast or dragged from a boat. The long line has a plurality of hooks attached to it at predetermined intervals. There are several methods of attaching hooks to the line. The main method is to provide a plurality of short lines and to attach one or more hooks to each short line, this setup being referred to as a ganglion. The short lines are then removably attached to the long line. A commercial fisherman feeds long lines into the water and retrieves them from the water many times a day. As the long line is fed out, each ganglion is attached to it and, as the long line is retrieved, each ganglion must be disengaged from the long line. This is done so that the long line can be retrieved mechanically by winding the line onto a reel on the boat without the line becoming tangled. It also tends to prevent the workers from becoming caught on the hooks and allows for the storage of very long lines in small spaces. Ganglions are attached to long lines by either tying the ganglions directly to the long line (conventional gear) or by tying the ganglions to a ganglion "snap clip" and then fixing the ganglion snap clip onto the long line. Untying ganglions on a conventional line is very difficult and time consuming for the fisherman and therefore is only done when the boat is in port. At sea, however, broken or twisted ganglions may have to be retied. Therefore ganglion snaps have come into use. 2. Description of the Related Art A typical ganglion snap has been shown in FIG. 2 of the patent to Hague et al, U.S. Pat. No. 4,862,633. A similar snap is shown in FIG. 3 of the patent to Bates, U.S. Pat. No. 4,524,535. A severe shortcoming of the prior art snaps is that these snaps, including that shown in Bates, require a squeezing or compression of the members to attach or release the snap to or from the line. This repetitive manipulation results in fatigue, soreness, and muscle strain and has been considered a cause of carpal tunnel syndrome in many fishermen. Thus, there is a need for a simple, rapid, and non-strenuous method of attaching and disengaging the ganglions.
Effects of Statins on Progression of Subclinical Brain Infarct. Background: Subclinical brain infarct (SBI) is associated with subsequent stroke and cognitive decline. A longitudinal epidemiological study suggests that statins may prevent development of SBI. We investigated the effects of statins upon development of brain infarct by performing a post-hoc analysis of the Regression of Cerebral Artery Stenosis (ROCAS) study. Methods: The ROCAS study is a randomized, double-blind, placebo-controlled study evaluating the effects of simvastatin 20 mg daily upon progression of asymptomatic middle cerebral artery stenosis among stroke-free individuals over 2 years. A total of 227 subjects were randomized to either placebo (n = 114) or simvastatin 20 mg daily (n = 113). The number of brain infarcts as detected by MRI was recorded at baseline and at the end of the study. The primary outcome measure was the number of new brain infarcts at the end of the study. Results: Among the 227 randomized subjects, 33 (14.5%) had SBI at baseline. At the end of the study, significantly fewer subjects in the active group (n = 1) had new brain infarcts compared with the placebo group (n = 8; p = 0.018). The new brain infarcts of subjects in the active group were subclinical. Among the placebo group, the new brain infarcts of 3 subjects were symptomatic while those of the remaining 5 subjects were subclinical. Among putative variables, multivariate regression analysis showed that only the baseline number of SBIs (OR = 6.27, 95% CI 2.4–16.5) and simvastatin treatment (OR = 0.09, 95% CI 0.01–0.82) independently predicted the development of new brain infarcts. Conclusions: Consistent with findings of the epidemiological study, our study suggests that statins may prevent the development of a new brain infarct.
Agent-Based Model Exploration of Latency Arbitrage in Fragmented Financial Markets. Computerisation of the financial markets has precipitated an arms-race for ever-faster trading. In combination, regulatory reform to encourage competition has resulted in market fragmentation, such that a single financial instrument can now be traded across multiple venues. This has led to the proliferation of high-frequency trading (HFT), and the ability to engage in latency arbitrage (taking advantage of accessing and acting upon price information before it is received by others). The impact of HFT and the consequences of latency arbitrage are contentious issues. In 2013, Wah and Wellman used an agent-based model to study latency arbitrage in a fragmented market. They showed: (a) market efficiency is negatively affected by the actions of a latency arbitrageur; and (b) introducing a discrete-time call auction (DCA) eliminates latency arbitrage opportunities and improves efficiency. Here, we explore and extend Wah and Wellman's model, and demonstrate that results are sensitive to the bid-shading parameter used for zero-intelligence (ZIC) trading agents. To overcome this, we introduce the more realistic, minimally intelligent trading algorithm, ZIP. Using ZIP, we reach contrary conclusions: (a) fragmented markets benefit from latency arbitrage; and (b) DCAs do not improve efficiency. We present these results as evidence that the debate on latency arbitrage in financial markets is far from definitively settled, and suggest that ABM simulation, a form of decentralised collective computational intelligence, is a productive method for understanding and engineering financial systems.
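The role of the bid-shading parameter is easiest to see in a toy quote generator. The Python sketch below is a minimal illustration under this edit's own assumptions: the function name, the single shade parameter, and the uniform draw of the shading offset are illustrative, and are not claimed to reproduce Wah and Wellman's or the above paper's exact ZIC specification.

import random

def zic_quote(valuation: float, is_buyer: bool, shade: float) -> float:
    """Draw a zero-intelligence (ZIC) limit price shaded away from the
    trader's private valuation: buyers bid below it, sellers ask above it.
    shade caps the fraction of the valuation conceded; the actual offset is
    drawn uniformly at random, so the agent carries no market intelligence."""
    offset = random.uniform(0.0, shade) * valuation
    return valuation - offset if is_buyer else valuation + offset

# A buyer valuing the asset at 100 with shade = 0.2 bids anywhere in [80, 100].
# Varying shade changes how much surplus ZIC agents leave on the table, which
# is the sensitivity to which the contrasting replication results above are
# attributed.
print(zic_quote(100.0, is_buyer=True, shade=0.2))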
PWE-157 Reducing anastomotic leak rate in Ivor Lewis oesophagectomy: testing a change from mechanical to semi-mechanical anastomosis. Introduction The reduction in mortality after oesophagectomy seen over the last decade has not been matched by a reduction in anastomotic leaks, with a leak rate of 5–10% accepted as a standard of care. Leaks increase postoperative mortality, morbidity and length of stay and reduce cancer survival. We present the results of a modified semi-mechanical intrathoracic anastomotic technique (SM) originally described by Orringer et al 1 for the cervical anastomosis. Method Centralisation of oesophagogastric cancer surgery took place in our region in January 2010, bringing together five surgeons from three surgical units. For consistency, all five surgeons used a standard circular stapled anastomotic technique (CS) from 2010, but the leak rate was deemed too high. As a result, three of the five surgeons (A, B and C) changed to a modified SM technique (surgeon A changed in November 2011, surgeon B in December 2011 and surgeon C in May 2013). This anastomosis joins the obliquely cut end of the oesophagus to the side of the proximal gastric conduit with a single 25 mm firing of a 45 mm linear stapler posteriorly (ATS45, blue cartridge, Ethicon), the remainder of the anastomosis being completed with single-layer inverting interrupted 3/0 PDS. The other two surgeons (D and E) continued to use the CS technique. Anastomotic leak rates, mortality and length of stay were evaluated to test this change in technique. Results Between January 2010 and February 2015 inclusive, 376 patients underwent curative-intent, 2-stage Ivor Lewis oesophagectomy for cancer. One patient died in hospital (0.27% mortality). The anastomotic leak rate for surgeons A, B and C was 16.8% (15/92) using CS and 0.8% (1/127) using SM; for surgeons D and E the rate was 10.2% (16/157). The median post-operative length of stay in the three groups was 11 (7–56), 10 (6–58) and 12 (7–51) days respectively (Abstract PWE-157 Figure 1). Conclusion There was a profound reduction in anastomotic leak rate for all three surgeons who changed from circular to semi-mechanical anastomosis. Disclosure of interest None declared. Reference 1. Orringer MB, et al. Eliminating the cervical esophagogastric anastomotic leak rate with a side-to-side stapled anastomosis. J Thorac Cardiovasc Surg. 2000 Feb;119:277–88.
SARS-CoV-2's closest relative, RaTG13, was generated from a bat transcriptome, not a fecal swab: implications for the origin of COVID-19. RaTG13 is the closest related coronavirus genome phylogenetically to SARS-CoV-2, consequently understanding its provenance is of key importance to understanding the origin of the COVID-19 pandemic. The RaTG13 NGS dataset is attributed to a fecal swab from the intermediate horseshoe bat Rhinolophus affinis. However, sequence analysis reveals that this is unlikely. Metagenomic analysis using Metaxa2 shows that only 10.3 % of small subunit (SSU) rRNA sequences in the dataset are bacterial, inconsistent with fecal samples, which are typically dominated by bacterial sequences. In addition, the bacterial taxa present in the sample are inconsistent with fecal material. Assembly of mitochondrial SSU rRNA sequences in the dataset produces a contig 98.7 % identical to R.affinis mitochondrial SSU rRNA, indicating that the sample was generated from this or a closely related species. 87.5 % of the NGS reads map to the Rhinolophus ferrumequinum genome, the closest bat genome to R.affinis available. In the annotated genome assembly, 62.2 % of mapped reads map to protein coding genes. These results clearly demonstrate that the dataset represents a Rhinolophus sp. transcriptome, and not a fecal swab sample. Overall, the data show that the RaTG13 dataset was generated by the Wuhan Institute of Virology (WIV) from a transcriptome derived from Rhinolophus sp. tissue or cell line, indicating that RaTG13 was in live culture. This raises the question of whether the WIV was culturing additional unreported coronaviruses closely related to SARS-CoV-2 prior to the pandemic. The implications for the origin of the COVID-19 pandemic are discussed.

Introduction
Understanding the origin of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease 2019 (COVID-19) is vital for preventing future pandemics. There are two main hypotheses regarding the origin of the COVID-19 pandemic. The zoonosis hypothesis proposes that the progenitor of SARS-CoV-2 jumped from a bat or intermediate host to a human. This scenario requires that the infected bat or intermediate host came into close contact with a human in a nonresearch setting which allowed the transmission to occur. The contrasting lab leak hypothesis proposes that SARS-CoV-2 was transmitted into the human population from a research related activity such as a laboratory experiment. The RaTG13 coronavirus genome, sequenced by the Wuhan Institute of Virology (WIV), is phylogenetically the closest known relative to SARS-CoV-2, and its apparent provenance from the intermediate horseshoe bat Rhinolophus affinis has been used to support the proposed zoonotic origin of SARS-CoV-2. However, the original Nature publication describing the RaTG13 genome sequence was sparse regarding sampling location and date of sequencing of RaTG13. A fragment of the RNA dependent RNA polymerase (RdRp) was the first part of RaTG13 to be sequenced, initially labelled as 'RaBtCov/4991', and subsequently renamed 'RaTG13' in the Nature paper (the link between the two was identified in ). Further details were elaborated in an Addendum, which gave the date of sequencing of RaTG13 as 2018, and the sampling location as a mine in Mojiang, Yunnan Province, China, which had been associated with the death in 2012 of three miners, who had been clearing bat guano, from a virus-like respiratory infection. 
While RaTG13 shows 96.2 % sequence identity with the SARS-CoV-2 genome, a new Rhinolophus malayanus sarbecovirus genome from Laos, BANAL-52, shows 96.9 % nucleotide sequence identity with SARS-CoV-2 (data not shown). However, a maximum likelihood phylogenomic tree shows that RaTG13 is the closest relative to SARS-CoV-2, with strong support.

The Stranded mRNA Library Preparation kit (Illumina) was used to produce the sequencing library for sequencing on the HiSeq 3000 (Illumina) platform. An NGS dataset generated from an anal swab obtained from R.affinis (although the species is likely incorrect, as discussed below in Results) by Li et al. at the WIV was used as a comparison. This dataset was used to generate the BtRhCoV-HKU2r (Bat Rhinolophus HKU2 coronavirus-related) genome (National Center for Biotechnology Information, NCBI, accession number MN611522). The raw sequence data were obtained from the ENA (accession number SRR11085736) and were labelled as being generated from an R.affinis anal swab and sequenced on a HiSeq 3000 platform. Likewise, the dataset was described as being generated from an R.affinis anal swab in the publication describing its genome sequence, using the QIAamp Viral RNA Mini Kit and TruSeq Library Preparation kit (Illumina). A transcriptome generated from a R.sinicus splenocyte cell line by the WIV was used as an additional comparison, and was obtained from the ENA (accession number SRR5819066). The protocol used to generate the dataset was not described on its ENA webpage; however, it was described as being sequenced using a HiSeq 2000 platform. An NGS dataset generated by the EcoHealth Alliance from an oral swab from the bat Miniopterus nimbae from Zaire, and which contained Ebola virus, was obtained from the SRA (accession number SRR14127641). The dataset was described on its SRA webpage as being generated using the VirCapSeq target protocol, and sequenced using a HiSeq 4000 platform. In this work, the four datasets will be described as the RaTG13, BtRhCoV-HKU2r anal swab, splenocyte transcriptome and Ebola oral swab datasets, respectively.

Microbial analysis
Metaxa2 was used to identify forward reads that match small subunit (SSU) rRNA from mitochondria, bacteria and eukaryotes present in the four datasets. Phylogenetic affiliation was assigned to the lowest taxonomic rank possible from the read alignments by Metaxa2.

Mitochondrial genome analysis
Forward reads from the RaTG13, BtRhCoV-HKU2r anal swab and splenocyte transcriptome datasets were mapped to a variety of mitochondrial genomes corresponding to mammalian species known to have been studied at the WIV, using fastv.

Mitochondrial rRNA analysis
Some reads from the RaTG13, BtRhCoV-HKU2r anal swab and splenocyte transcriptome datasets were identified as corresponding to mitochondrial SSU rRNA using Metaxa2. These were assembled using Megahit. The resulting contigs were used to query the NCBI nr database using Blast, in order to determine their closest matches. The contig generated from the RaTG13 dataset was used to make a phylogenetic tree of mitochondrial SSU rRNA genes from different Rhinolophus species, obtained from the NCBI (accession numbers are listed in Supplementary Table 1). Sequence alignment, model testing and phylogenetic tree construction were conducted using Mega11. First, a nucleotide alignment was constructed using Muscle. DNA model testing was conducted and the general time reversible (GTR) model was determined to be the best fit to the data using the Akaike Information Criterion. 
Then, a maximum likelihood analysis was conducted using an estimated gamma parameter and 1000 bootstrap replicates.

Viral titer analysis
The number of forward reads mapping to the RaTG13 genome from the RaTG13 NGS dataset was determined using fastv. Eight novel coronavirus genome sequences, in addition to the BtRhCoV-HKU2r anal swab dataset, were generated from NGS datasets derived from bat anal swabs from southern China by Li et al. Fastv was used to determine the number of reads mapping to these nine coronavirus genomes from their respective NGS datasets.

Nuclear genome mapping analysis
Raw sequences from the RaTG13 and BtRhCoV-HKU2r anal swab datasets were mapped to a variety of mammalian nuclear genomes. The Rhinolophus ferrumequinum genome (NCBI accession number GCA_004115265.3) was the closest related bat genome to R.affinis available, and was used for mapping. In each case, the most recent assembly was used for mapping. First, the reads were trimmed and filtered using fastp, using polyX trimming and filtering out reads with > 5 % of bases below a quality threshold of Q20. Then the reads were mapped using the splicing-aware mapper BBMap (https://sourceforge.net/projects/bbmap/), using the default parameters and the usemodulo option.

Transcriptome analysis
In order to assess the proportion of reads that mapped to gene sequences, reads from the RaTG13 dataset were mapped to a previous version of the Rhinolophus ferrumequinum bat nuclear genome (NCBI accession number GCA_014108255.1), as a gff annotation file was not available for the most recent version of the genome assembly (GCA_004115265.3). The corresponding annotation file GCA_014108255.1_mRhiFer1.p_genomic.gff was incompatible with the sam file containing the mapped reads, due to differences in chromosome naming between the annotation file and the corresponding genome assembly file. This was corrected by modifying the sam file using the following commands: sed -i 's/ Rhinolophus ferrumequinum isolate mRhiFer1 scaffold_m29_p_*, whole genome shotgun sequence//g' mappingfile.sam followed by sed -i 's/ Rhinolophus ferrumequinum isolate mRhiFer1 mitochondrion, complete sequence, whole genome shotgun sequence//g' mappingfile.sam Quantification of the number of reads mapped to annotated genes was conducted using the bedtools multicov function. To do this, the sam file was first converted to a bam file and then sorted and indexed using SAMtools.

Results
Only a low proportion of reads correspond to SSU rRNA (Table 1). This implies rRNA depletion during preparation, probably using the Ribo-Zero procedure, which is part of the TruSeq library preparation protocol. This procedure involves enzymatic degradation of rRNA from both eukaryotes and bacteria. It is unclear if the procedure preferentially degrades rRNA from eukaryotes or bacteria, thus altering the ratio of bacterial : eukaryotic SSU rRNA sequences in the dataset.

Ratio of eukaryotic and bacterial SSU rRNA reads
Consideration of the ratio of eukaryotic : bacterial SSU rRNA sequences reveals marked differences between the datasets. The ratio is 8.3 : 1 for the RaTG13 dataset, 88.7 : 1 for the splenocyte transcriptome dataset and 3.7 : 1 for the Ebola oral swab dataset (Table 1), indicating that eukaryotic SSU rRNA dominates these samples. However, in contrast, the BtRhCoV-HKU2r anal swab dataset has a ratio of 1 : 5.4, indicating that bacterial SSU rRNA dominates this dataset, as expected with fecal material. 
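Before turning to interpretation, note that the eukaryotic : bacterial ratio reported in Table 1 reduces to simple read counting once Metaxa2 has assigned an origin to each read. The Python sketch below is a minimal illustration, not the pipeline actually used here: the tab-separated input format and the origin labels are this edit's assumptions, and real Metaxa2 taxonomy output would need to be adapted to them.

from collections import Counter

def ssu_ratio(taxonomy_file: str) -> float:
    """Tally per-read SSU rRNA origin calls and return the
    eukaryote : bacteria ratio. Assumes a tab-separated file with the
    origin label ('eukaryote', 'bacteria', 'mitochondria', ...) in the
    second column; adapt the parsing to the actual Metaxa2 output."""
    counts = Counter()
    with open(taxonomy_file) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 2:
                counts[fields[1]] += 1
    return counts["eukaryote"] / max(counts["bacteria"], 1)

# On this logic, a ratio well above 1 (8.3 for the RaTG13 dataset) points to
# host-dominated material, while a ratio well below 1 (1 : 5.4 for the anal
# swab dataset) points to bacteria-dominated fecal material.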
The ratio of eukaryotic : bacterial SSU rRNAs in the RaTG13 dataset is inconsistent with that of the BtRhCoV-HKU2r anal swab dataset, and consequently appears inconsistent with fecal material.

Microbial SSU rRNA analysis
Microbial taxonomic analysis provides a fingerprint that can be used to track the source of a sample by identifying taxa characteristic of the microhabitat from which it was derived. The results of the taxonomic analysis are displayed in Table 2. This has scientific relevance, as bat coronaviruses present in oral mucosa are more likely to be transmissible via aerosols than those present at higher abundance in fecal swabs. Thus, determining whether RaTG13 was generated from an oral swab would provide a better understanding of the emergence of SARS-CoV-2 (which is transmitted via respiratory droplets and aerosols rather than the fecal route). A taxonomic analysis of the SSU rRNA sequences from the Ebola oral swab dataset is instructive. Firstly, there is a low proportion of reads corresponding to Lactococcus spp. While some sequences in the BtRhCoV-HKU2r anal swab dataset might be expected to originate from the bat's insectivorous diet, only a few arthropod nuclear rRNA sequences were observed in the BtRhCoV-HKU2r anal swab, RaTG13 and splenocyte transcriptome datasets (0.01 %, 0.02 % and 0.01 % of eukaryotic SSU rRNA sequences, respectively). However, the Ebola oral swab dataset has substantially more (0.4 % of eukaryotic SSU rRNA sequences). This is consistent with the insectivorous diet of M.nimbae, the bat species from which the oral swab was taken. Rhinolophus spp. are also insectivorous, and so the lower relative proportion of arthropod SSU rRNA sequences in the RaTG13 sample is an additional inconsistency with an oral swab. The observations listed above highlight the differences between the Ebola oral swab microbiota, which are consistent with an oral microhabitat, and the microbiota present in the RaTG13 sample. These data indicate that the RaTG13 sample was not derived from an oral swab.

Mitochondrial analysis
Reads from the RaTG13, BtRhCoV-HKU2r anal swab and splenocyte transcriptome datasets were mapped onto a range of mammalian mitochondrial genomes (Table 3). Reads from the RaTG13 dataset mapped most efficiently to the R.affinis mitochondrial genome, with 75335 reads mapping with 97.2 % coverage. 18017 reads mapped with 40.4 % coverage to the R.sinicus mitochondrial genome. This implies that the sample originated from R.affinis or a Rhinolophus species more closely related to R.affinis than R.sinicus. The low proportion of total reads mapping to the R.affinis mitochondrial genome (0.3 %) suggests the RaTG13 sample was subjected to DNase treatment during preparation, which is an optional step in the QIAamp Viral RNA Mini Kit protocol. This is consistent with the observation that the majority of reads that map to the Rhinolophus ferrumequinum genome map to annotated protein coding genes (discussed below), implying little host nuclear DNA was present in the sample. Reads from the BtRhCoV-HKU2r anal swab dataset mapped most efficiently to the R.sinicus mitochondrial genome, with 29.8 % coverage and 10019 reads mapping, in contrast to the R.affinis mitochondrial genome, which mapped with 14.9 % coverage and 6278 reads mapping. This indicates that the sample was derived from R.sinicus or a more closely related Rhinolophus species than R.affinis, and is consistent with the phylogenetic analysis below. 
This contradicts the description of the sample as being derived from R.affinis. Lastly, reads from the splenocyte transcriptome dataset mapped most efficiently to the R.sinicus mitochondrial genome, with 94.5 % coverage and 170591 reads mapping, in contrast to the R.affinis mitochondrial genome, which mapped with 32.2 % coverage and 88220 reads mapping. This indicates that the sample was derived from R.sinicus or a more closely related Rhinolophus species than R.affinis.

Mitochondrial rRNA analysis
While mapping to the mitochondrial genome gives a convincing indication of the general phylogenetic affinities of the NGS datasets, mitochondrial SSU rRNA confers more precision. A 1139 bp contig generated by Megahit from SSU rRNA sequences extracted from the RaTG13 dataset using Metaxa2 was found to match R.affinis mitochondrial SSU rRNA (NCBI accession number MT845219) with 98.7 % sequence identity, with 8 mismatches (Figure 1). A maximum likelihood phylogenetic tree indicates that the RaTG13 contig was most closely related to R.affinis mitochondrial SSU rRNA, compared to other Rhinolophus species for which full-length mitochondrial SSU rRNA sequences were available (Figure 2). Mitochondrial SSU rRNA sequences generated by Metaxa2 from the BtRhCoV-HKU2r anal swab dataset were likewise assembled using Megahit. A 960 bp contig aligned to Rhinolophus sinicus sinicus mitochondrial SSU rRNA (NCBI accession number KP257597.1), with only one mismatch. This is surprising given that the anal swab sample is described as having been obtained from R.affinis. The eight mismatches of the 1139 bp contig to R.affinis mitochondrial SSU rRNA (derived from the subspecies himalayanus, sampled from Anhui province) imply that the dataset was derived from a genetically distinct population/subspecies of R.affinis, or a closely related species. This is consistent with the observation that the R.affinis taxon has nine subspecies with marked morphological and echolocation differences, and might actually represent a species complex. In addition, Rhinolophus stheno, a species closely related to R.affinis, was identified as being present in the mine in Mojiang in 2015. Unfortunately, no mitochondrial SSU rRNA sequence is currently available from this species.

Evidence of sequence contamination
Megahit-assembled contigs corresponding to plant nuclear SSU rRNA sequences were recovered from both the RaTG13 and BtRhCoV-HKU2r anal swab datasets. A 250 bp contig was recovered from the RaTG13 dataset that showed 100 % identity to Gossypium hirsutum (cotton) nuclear SSU rRNA. A 366 bp contig was recovered from the anal swab dataset that showed 100 % identity to Zea mays (maize) nuclear SSU rRNA, while a 390 bp contig showed 98.5 % identity to Arabis alpina (alpine rock cress) nuclear SSU rRNA, with 6 mismatches (data not shown).

Viral titer analysis
The results of the viral titer analysis are shown in Table 4. These data show that the viral titer in the RaTG13 sample was relatively low (7.2 x 10^-5 of total reads map to the coronavirus genome), compared to the nine anal swab samples generated by Li et al. (which include the BtRhCoV-HKU2r anal swab sample), in which the fractions of total reads mapping to the respective coronavirus genomes ranged from 3.0 x 10^-5 to 4.9 x 10^-2. Unfortunately, there are no coronavirus datasets generated from cell lines by the WIV available for comparison. 
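The viral titer proxy used above is simply the fraction of reads mapping to the coronavirus genome, as reported by fastv. As an independent cross-check under this edit's own assumptions (reads already aligned to the genome in question, an indexed BAM on hand; the file name is a placeholder), the same fraction can be read off a BAM index with pysam:

import pysam  # assumes an indexed BAM of reads aligned to the viral genome

def mapped_fraction(bam_path: str) -> float:
    """Fraction of reads that aligned to the reference, taken from the BAM
    index statistics; a rough stand-in for the fastv fractions quoted above."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        mapped, unmapped = bam.mapped, bam.unmapped
    return mapped / (mapped + unmapped)

# e.g. roughly 7.2 x 10^-5 for RaTG13 reads versus 3.0 x 10^-5 to 4.9 x 10^-2
# for the nine anal swab datasets.
print(mapped_fraction("reads_vs_ratg13.bam"))  # placeholder file name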
Finally, it was found that the raw reads that map to the BtHiCoV-CHB25 genome (SRA accession number SRR11085733) were generated from Rhinolophus larvatus, and not Hipposideros pomona as reported in the supplementary file msphere.00807-19-st002.xlsx.

Genome mapping analysis
In order to identify the origin of the bulk of the reads in the RaTG13 dataset, they were mapped to a variety of mammalian genomes, corresponding to species for which cell lines were known to be in use at the WIV, as well as Rhinolophus ferrumequinum, which is the bat genome most closely related to R.affinis available (Table 5). The results show that the reads map most efficiently to the R.ferrumequinum genome, with 87.5 % of reads mapping. An even higher percentage of reads would be expected to map to the exact Rhinolophus species used to generate the RaTG13 sample (R.affinis or a closely related species, as identified in the phylogenetic analysis above). The high percentage of reads mapping to R.ferrumequinum is inconsistent with a fecal swab, which would be expected to have a majority of reads mapping to bacterial sequences. This is because fecal material is typically dominated by bacteria, with only a small amount of host nucleic acid present. Consistent with this expectation, only 2.6 % of the reads from the BtRhCoV-HKU2r anal swab sample mapped to the R.ferrumequinum genome (Table 5). In addition, the results appear inconsistent with an oral swab. This is because only 27.9 % of reads from the Ebola oral swab sample map to the Miniopterus natalensis genome (NCBI accession number GCF_001595765.1), which was the most closely related bat genome to M.nimbae available. However, only the forward reads from the NGS dataset were used for the mapping, as the reverse reads were not available. In addition, the RNA purification method used is unclear from the NCBI sample webpage. These two factors mean that the Ebola oral swab mapping results may not be directly comparable to the RaTG13-R.ferrumequinum mapping results.

Transcriptome analysis
Further analysis of the RaTG13 reads mapped to the R.ferrumequinum genome shows that 62.1 % of mapped reads map to protein coding genes. 92.1 % of protein coding genes have at least one read mapping to them. These data confirm that the RaTG13 sample represents a transcriptome. In addition, the result indicates that the sample did not contain large amounts of DNA, as this would lead to mapping to parts of the genome that do not code for protein coding genes, which is the large majority of the bat genome. This supports the mitochondrial genome mapping result that the sample was subjected to DNase treatment, which is an optional step in the QIAamp Viral RNA Mini Kit.

Discussion
The data presented here indicate that the RaTG13 genome was not generated from a bat fecal swab, but rather from a Rhinolophus sp. cell line or tissue. Given that the original RaBtCov/4991 RdRp sequence fragment was described as having been generated from a R.affinis fecal swab, the chain of events leading from that original sample to the sample used to generate the genome sequence is unclear. As far as the author is aware, no live animals were reported as being captured in the collecting expeditions to Mojiang between August 2012 and July 2013, and there is no reported precedent for virus isolation from bat tissue at the WIV. If the sample was derived from a dead bat, it is hard to understand how the sample became depleted, as stated by Zheng-li Shi, given that tissue would have yielded substantial RNA. 
The presence of a substantial proportion of reads corresponding to Escherichia spp. and Lactococcus spp. in the NGS reads (Table 2) would also be hard to understand. Two examples of mislabelling of coronavirus samples by researchers at the WIV are identified here. The BtRhCoV-HKU2r anal swab sample appears to have been derived from R.sinicus sinicus and not R.affinis as described. In addition, the BtHiCoV-CHB25 anal swab sample was derived from R.larvatus and not H.pomona as reported. These observations indicate that sample collection and processing were error prone, lending some credence to a lab leak scenario for the origin of COVID-19. Given that these samples may contain potential pandemic pathogens (PPPs), this is of great concern. In particular, it is of note that in a Master's thesis by Yu Ping, supervised by Zheng-li Shi and Cui Jie, work on the Ra4991_Yunnan (RaTG13) sample and other bat samples is described as being conducted in a 'BSL-2 cabinet', but it is unclear if the cabinet was situated in a BSL-2 lab or a regular lab. Bat coronaviruses are PPPs and so should be handled in a BSL-3 lab. A further curiosity is that neither Yu Ping nor Cui Jie is listed as an author on the Nature paper describing RaTG13. The data imply that RaTG13 may have been in live culture at the WIV since before June 2017, which is when the Illumina sequencing of the RaTG13 (RaBtCoV/4991) sample appears to have begun (information generated by Francisco de Asis de Ribera). This implies that isolation of RaTG13 was successfully conducted on the original sample from the Mojiang mine between its collection in July 2013 and June 2017. It is unclear if RaTG13 is currently in live culture at the WIV. Of high relevance to the work described here is the statement on page 119 of the recently released EcoHealth-WIV grant 1R01AI110964-01 that the WIV had successfully isolated coronaviruses using bat cell lines: 'We have developed primary cell lines and transformed cell lines from 9 bat species using kidney, spleen, heart, brain and intestine. We have used these for virus isolation, infection assays and receptor molecule gene cloning.' (italics the author's)

Conclusion

In the Nature paper describing RaTG13, and its Addendum, there is no specific statement that the RaTG13 raw reads were generated from a fecal swab. However, the dataset is labelled as such at the GSA, SRA and ENA, the original RaBtCoV/4991 sample from the Mojiang mine is described as having been a fecal swab, and the Master's thesis describing the genome sequencing of RaTG13 describes it as having been generated from an anal swab/fecal pellet. Consequently, the Nature paper describing RaTG13 and its database entries should be amended to state the true provenance of RaTG13. Pertinent to this is the observation that serial passaging during stock maintenance typically causes SARS-CoV-2 to diverge genetically, given that it rapidly adapts to cell culture conditions. This means that the reported RaTG13 genome sequence would be expected to have picked up mutations during its laboratory sojourn, from its date of isolation. This is consistent with the observation that, in relation to SARS-CoV-2, the genome has an excess of synonymous mutations compared to the closely related Rhinolophus malayanus RmYN02 coronavirus (which was sampled in 2019).
This could be interpreted as the result of evolutionary change occurring after its 2013 collection date, which could only happen if the virus was in culture (as noted in a preprint at https://www.biorxiv.org/content/10.1101/2020.04.20.052019v1). The observations outlined here have important implications regarding the origin of the COVID-19 pandemic. Zheng-li Shi, lead author of the Nature paper, has stated that RaTG13 was not in live culture at the WIV, as follows: "I would like to emphasize that we have only the genome sequence and didn't isolate this virus" and "we did not do virus isolation and other studies on it". The data presented here suggest otherwise, and point to the possibility of additional coronaviruses closely related to SARS-CoV-2 in culture at the WIV that have not yet been disclosed. This conclusion increases the plausibility of the lab leak scenario for the origin of COVID-19.

Acknowledgements

The investigation into the origin of the COVID-19 pandemic has represented a paradigm shift in how forensic and epidemiological investigations are conducted, with decentralized online groups of investigators making significant contributions. This work is the product of discussions with numerous investigators based on Twitter, some anonymous, several associated with the Decentralized Radical Autonomous Search Team Investigating COVID-19 (DRASTIC) (https://drasticresearch.org/), and others independent. The author has made widespread use of material generated by online investigators, including information generated by @BillyBostickson, @TheSeeker268, @franciscodeasis, @Daoyu15, @pathogenetics, @mrandersoninneo and @Ayjchan. The Master's thesis of Yu Ping was identified and translated by @TheSeeker268 and @franciscodeasis. The EcoHealth-WIV NIAID grant 1R01AI110964-01 was made available through an FOIA request by The Intercept (theintercept.com).
Towards a Bad Bitches Pedagogy

In this paper, I present a personal narrative approach, grounded in Connelly and Clandinin's ontological and epistemological stance that humans are story-telling organisms,[1] to discuss my construction of a uniquely working-class Black feminist educator identity. This narrative inquiry is an adapted counter-methodological researcher approach that was born out of an interlinkage of my explorations into the histories of Black women educators, hip hop feminisms, and Higginbotham's respectability politics,[2] as it is understood in popular cultural terrain, and as the concept contrasts with and complements notions of (dis)respectability. I situate the paper within a critical hip hop feminist framework and access raunch aesthetics' use of the sartorial and performative bad-assedness to understand how I have come to craft a transgressive teacher identity. By embracing a vernacular transgressive archetype of the bad bitch pedagogue, I analyze and complicate my own intersectional identity as a working-class Black woman who navigated an adversarial bourgeois traditionalist educational system as a teacher, unwed custodial parent, cultural worker and advocate for Black youth.
A 60-ish broker is walking down Wall Street one evening when he spots a panhandler, a somewhat younger man in battered Vietnam-era fatigues. As he passes the guy, the broker puts a couple of dollars in the man's cup and asks, "You were in the 'Nam?" The panhandler looks up, says, "Yes," then pauses, looks closely at the broker, and says, "Major? It's me, Spec-4 Wilson!" "My God, Wilson! What the hell are you doing here?" "Well, sir, after I got out of the Army nothing seemed to go right for me. I drank too much, couldn't hold a job, got into trouble. But I'm OK now. Folks help out." He pauses, and then says, "Major, you look great!" "I got out as a lieutenant colonel. Went into the market and I've been doing really well." Wilson says, "Well that's great sir. You have a great day. And thanks." The Major starts to walk off. Then he stops for a second, thinking. He turns around, and says, "Wilson, you were the best orderly I ever had. I'm a rich man now. Let me help you out. Look, come work for me. You can be my valet. I've got a big house in mid-town, with plenty of room, and my wife won't mind." "Sir, I really don't want to be a problem." "No problem, and you'll just be doing the same sort of thing you did when we were back in the 'Nam. You know, keeping my stuff in order, making sure I'm up in the morning -- just like in the old days." So the Major hails a cab and the two pile in. En route uptown, they stop at a men's shop and the Major buys Wilson some good civilian clothes. Then he takes him for lunch at McSorley's, where they have a few drinks and talk about old times. They get to the Major's house, a large brownstone in the East 60s, fairly late. The Major gives Wilson a little tour, shows him where various things are stored, gives him a room, and says, "I'll see you in the morning." At 4:00 a.m., Wilson wakes up. At precisely 4:15 he opens the door to the master bedroom, walks in, smacks the Major's wife on the behind and says, "OK baby, here's five bucks, time to get back to the village."
Father’s Day is a short film directed by Betty Ouyang that takes viewers into the struggle of being an Asian artist in the city of Los Angeles. In the film, Betty Ouyang and Angelita Bushey portray two Chinese-American sisters who struggle to sit through a lunch with their father (Larry Wang) to celebrate Father’s Day. Betty is a screenwriter and Angelita is an actress, and both of them are fighting hard to keep on top of the challenges life throws at them. As the family dives into an uncomfortable conversation regarding the sisters’ strained careers, issues of eviction, addiction, and much more spring up, unveiling the ugly side of Los Angeles’s glamour. It is a common stereotype that Asians are skilled in math and science and often become doctors or lawyers. Father’s Day ties in themes that break this misconception. In the film, it is said that the sisters were sent to Ivy League schools, implying that they were perhaps raised under the typical Asian standard of doing well academically. However, upon graduating, instead of becoming medically or mathematically skilled as probably expected, the sisters become artists, a route often frowned upon by Asian parents. Betty and Angelita’s ambitions in Father’s Day serve as a reminder that there are many other things that Asians can strive to be. Something quite spectacular about Father’s Day is the character development shown within the span of 15 minutes. When we first meet Betty and Angelita’s father, we see him as a Chinese parent who has just come to drop by to visit his daughters, and the initial impression of Betty and Angelita themselves is of two bickering sisters. However, by the end of the film, we know much more about the three characters. It is evident that Betty and Angelita’s father is genuinely concerned about the future of his two daughters, and is trying to keep the family together as a single parent. It can also be seen that he is desperately trying to make amends with Betty and Angelita so that they are aware of the help that he can give them. We also figure out that although the two have contrasting ideas on how to go about this, Betty and Angelita only want their father to worry less about them. Betty is a more straightforward character, while Angelita believes that only some things should be said to keep their father from further unease. In the end, however, both sisters support each other and care about their family. Father’s Day also sheds light on the darker side of the bright, Hollywood-based lives expected of artists living in Los Angeles. Los Angeles is known to artists as the Land of Opportunity, but Betty and Angelita show that this is not the case for everyone. On the other hand, they also prove that hard work and dedication will pay off, despite the time it may take to become successful. Father’s Day is an excellent portrayal of problems that can come up while striving to achieve one’s dreams, especially as an Asian-American. This was originally posted on The Alhambra Source.
One of the biggest names in comedy is coming to town. On Tuesday, Louis C.K. added new dates to his tour, including four nights in Austin. The beloved stand-up comedian will perform at ACL Live at the Moody Theater January 4-7, 2017. He swung by Austin during the Oddball Comedy & Curiosity Festival in 2014, but this is his first solo gig in Austin since 2012. This is the latest leg of a tour that started in May. The comedian's only other Texas dates are November 9-10, 2016, at the Music Hall at Fair Park in Dallas. He played Houston in July. Tickets to the Austin shows are on sale now for $50 each through ACL Live. The small price reflects his wishes to keep shows affordable. "As always, I'm keeping the price of my tickets down to an average of $50, including all charges. I know that's not nothing. But it's less than more than that," he said in an open letter to fans.
Interface dermatitis along Blaschko's lines

Linear dermatoses are fascinating entities that likely reflect embryologically derived cutaneous mosaicism, even when they occur after childhood. Adult blaschkitis is a rare, relapsing inflammatory dermatitis that most often presents in middle age. It presents clinically as a pruritic eruption of linear papules, vesicles and plaques, and is most commonly found to have features of spongiotic dermatitis on pathology. However, the clinical and histopathologic presentation of lichen striatus in adults may be similar to that of adult blaschkitis. A case in which blaschkitis was suspected clinically is presented, in which the biopsy showed noncharacteristic microscopic features resembling erythema multiforme, a finding rarely reported in the literature to date. We present this case and a brief review of the most common acquired linear eruptions following Blaschko's lines, with the goal of expanding the histopathologic findings that may be encountered in adult blaschkitis. Moreover, the clinical and histopathologic overlap between the entities of blaschkitis and lichen striatus is explored, acknowledging that these entities may exist on a clinicopathologic spectrum. In the diagnosis of linear eruptions, clinicopathologic correlation is important for arriving at an accurate final diagnosis.
User Centred Quality Health Information Provision: Benefits and Challenges

Recent research indicates people are increasingly looking to the Internet for health information. Equally, however, there is increasing frustration with the sheer volume, lack of relevance and at times dubious quality of information retrieved. The Breast Cancer Knowledge Online project sought to build a user-sensitive portal to assist women with breast cancer and their families to overcome these problems, and to facilitate the retrieval of information that would better meet the individual and changing needs of users. The research outcomes discussed in this paper describe the approach taken to building the metadata-driven portal, the outcome of usability testing of the portal, and the limitations of such an ambitious project.
An acceptance, role transition, and couple dynamics-based program for caregivers: A qualitative study of the experience of spouses of persons with young-onset dementia

Objective: In this study, we assessed a support program based on acceptance, role transition, and couple dynamics for spouses of people with young-onset dementia. The qualitative feedback from the caregivers' experience is analyzed. The goal was to explore how this home-based support program is perceived and to appraise the impact of the different approaches that were offered. Design: A thematic analysis was conducted on the answers to the end-of-session questionnaires and the follow-up semistructured interviews. Results: Five themes emerged from the analyses. They highlighted caregivers' ability to overcome their emotional struggle as well as to manage their loved ones' behaviors. The results also showed the possibility for caregivers to access new ways to support their loved ones and to maintain the quality of their relationship. Conclusion: These findings represent preliminary evidence of this program's efficacy for caregivers.
Examination of the bone marrow aspirate is an important tool in the diagnosis of haematological diseases. The first attempts at bone marrow sampling took place at the beginning of the twentieth century. Thereafter, numerous methods were proposed and different materials were described. The commonly accepted sites for sampling are the sternum and the iliac crest. We describe here a sampling procedure for each site. Bone marrow aspiration is a safe investigation, but it is not recommended for patients with impaired haemostasis. The physician must be aware of the side effects and complications that can occur. The consequence of a complication varies according to the type of iatrogenic injury. Prevention and rapid diagnosis are crucial in the management of bone marrow aspiration accidents. To avoid malpractice, the procedure should be taught by senior physicians, with theoretical as well as practical learning. The purpose of this training is a high quality of care, ensuring patients the best comfort in subsequent bone marrow examinations, a point that is particularly important in paediatrics.
The use of near-infrared spectroscopy (NIRS) in surgical clipping of giant cerebral aneurysm: P 079

in hot flashes. Discussion: Although CRPS was once thought to be entirely sympathetically mediated, there is a growing body of evidence implicating central mechanisms in its pathophysiology. The stellate ganglion's sphere of innervation has been shown to extend beyond the sympathetic nervous system and to include key central structures, including the insular cortex. While PRF's mechanism is only partially understood, its electrical fields may interrupt key connections between the peripheral and central nervous systems. The reductions in smoking and hot flashes were not surprising, as the insular cortex has been shown to play a key role in smoking addiction and to be active during hot flashes. Conclusion: PRF, a minimally invasive technique, appears to offer substantial benefit to patients with chronic CRPS.
Reserve antidepressants for cases of severe depression, Dutch doctors are told

Antidepressant drugs such as selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants should be prescribed as the first treatment only in cases of severe depression, says new guidance from the Dutch College of General Practitioners. It recommends that drug treatment should not be the first step for patients exhibiting only depressive symptoms, a new category distinct from depression, a move that professional associations believe could substantially reduce the Netherlands' one million users of antidepressants. The revised evidence-based professional guidance recommends that antidepressant treatment should be prescribed from the outset only if the patient's depression is accompanied by severe suffering or social dysfunction.
Blagsnarst, a Love Story

Plot

On the way to work, Stan (Seth MacFarlane) gets annoyed by Roger's surprise entrance and clinginess, which affects the entire family. Accompanying Francine on a trip to the mall, Roger picks up a pheromone trail and follows it until he finds a crashed spaceship with a female alien (Kim Kardashian) who was also attracted by his scent. They take her home to meet Stan, who becomes afraid that the CIA will discover her. Even as he worries, the CIA is conducting an investigation of her crash site. Roger and the alien become sexually attracted to each other, which distresses the entire family. But after their fling, Roger is ready to dump her, only to find she is ready to settle down with him. Roger tries to claim he already has a girlfriend, but the ruse fails. Despite her insistence that Roger make things work, he decides to take her into the countryside. Pulling over at a gas station, Roger calls the CIA to turn her over to them at a bed and breakfast. Stan gets word of the CIA on their trail, and Roger admits that he called them as they can be heard approaching. Stan points out that she can lead them back to the family, so Roger takes her away and proposes they go their separate ways. Handing her cash, he leaves her behind as Bullock spots her from a helicopter after her dog disguise falls off, and attempts to intercept her as Roger cheers her on. But when the CIA corners her, he decides to intervene by building a high-powered rifle out of stones, sticks and his bubble gum. He manages to shoot down a tree, which crashes onto the copter. As they leave together, she continues with her future plans for them until Roger bails out of the car and it crashes in flames. The female alien crawls out as the fire burns off her fur, revealing a curvy woman in a fur bikini who walks away from the wreckage. The episode ends with the reveal that the entire story (and the series on FOX) is from a book Stan is reading about how Kim Kardashian was born. Meanwhile, in prison, Marylin Thacker is executed for the murder of her husband. Her son returns to the family home in sadness as he prepares for an election as attorney general. As his son is hit by a car outside, he discovers a secret hiding place under the floorboards containing The Golden Turd. Ignoring everything else in his life, he calls Wyatt Borden (voiced by Paul Reubens), a wealthy chemical dumper who pledges to help him and even make him president one day.

Production

In September 2013, it was revealed that Kim Kardashian would appear in an upcoming episode as a "love interest" for Roger.

Reception

"Blagsnarst, a Love Story" first aired on Fox in the United States on September 21, 2014. This was the final episode of American Dad! to air on Fox as part of the Animation Domination block; the show was quietly cancelled by Fox but was picked up by TBS for season 12. Critical reception for "Blagsnarst, a Love Story" was mostly positive. Alasdair Wilkins of The A.V. Club gave the episode an A-, stating: "The episode features Kim Kardashian, a move that should feel like shameless stunt-casting, except she’s on hand to play a furry pink alien; this is in keeping with American Dad’s admirable habit of reeling in impressive guest voices and casting them in unrecognizable roles. The episode features Roger and Stan at their most casually sociopathic, yet both find just enough hidden emotional depth for the episode to not feel completely mean-spirited."
Total electrification of large-scale nanophotonic arrays by frictional charges.

Localized surface plasmon resonance (LSPR) of metallic nanostructures is a unique phenomenon that controls light in sub-wavelength volumes and enhances light-matter interactions. Traditionally, the excitation and measurement of LSPR require bulky external light sources, and efforts to scale down to nano-plasmonic devices have predominantly relied on the system's miniaturization and associated accessories. Addressing this, here we show the generation and detection of LSPR wavelength (λLSPR) shifts in large-area nanostructured Au surfaces using frictional charges generated by triboelectric surfaces. We observe a complex interplay of the localized surface plasmons with frictional charges via concurrent spectroscopic and triboelectric measurements undertaken for the detection of bioconjugation in the streptavidin-biotin complex. When subjected to multivariate principal component analysis, a strong correlation between the triboelectric peak-to-peak voltage output response and the λLSPR shift is observed. Furthermore, we reveal a landscape of the interfacial events involved in the electrical generation/detection of the LSPR by using theoretical models and surface characterization. The demonstrated concept of electrification of plasmon resonance thus provides the underlying basis for the subsequent development of self-powered nano-plasmonic sensors and opens new horizons for advanced nanophotonic applications.
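The correlation reported above between the triboelectric peak-to-peak voltage and the λLSPR shift is the kind of relationship standard multivariate tools can probe. A minimal Python sketch under assumed conditions; the paired measurements below are hypothetical placeholders, not the paper's data or preprocessing:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical paired readouts: triboelectric peak-to-peak voltage (V)
    # and LSPR wavelength shift (nm) across bioconjugation steps.
    vpp = np.array([0.82, 0.95, 1.10, 1.26, 1.41, 1.55])
    lspr_shift = np.array([1.9, 2.4, 3.1, 3.6, 4.2, 4.8])

    # Pairwise Pearson correlation between the two readouts.
    r = np.corrcoef(vpp, lspr_shift)[0, 1]
    print(f"Pearson r = {r:.3f}")

    # PCA on the standardized two-channel data: a dominant first component
    # means the two readouts vary together, i.e., a strong correlation.
    X = StandardScaler().fit_transform(np.column_stack([vpp, lspr_shift]))
    pca = PCA(n_components=2).fit(X)
    print("Explained variance ratios:", pca.explained_variance_ratio_)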
Mortality from myocardial infarction.

To the Editor. It is not surprising to me that neither Weinblatt et al (1982;247:1576) nor Goldberg et al[1] were able to demonstrate that care after myocardial infarction (MI) was a significant reason for the drop in mortality from this disease during the late 60s and early 70s. It is more difficult to mend a broken egg (heart) than it is to prevent it from breaking. In the course of time, preventive measures will most likely be found to be the reason for this drop in mortality. The editorial comment by Dr Havlik (1982;247:1605) that accompanied the article by Weinblatt and co-workers, that this will be a "bitter pill" for those who have hoped that improved care after MI would be the main reason for the decline in cardiovascular mortality, is probably true. It is also sad that this may come as a surprise to many physicians. Currently, it
What Role The U.S. Military Is Playing At The Mexican Border

NPR's Mary Louise Kelly talks to retired Lt. Gen. Steve Blum about the military logistics of law enforcement and engaging migrants at the border. About 8,000 U.S. military personnel are deployed at the border.

KELLY: All right. The U.S. military was not involved in yesterday's clashes. As we just heard, it was Customs and Border Protection that was firing tear gas. But there are about 8,000 U.S. military personnel now deployed at the border providing protection for Border Patrol, helping with logistics and support. For more on the role those troops are playing, let me bring in retired Lieutenant General Steve Blum, who led the National Guard from 2003 to 2008. General Blum, welcome back to ALL THINGS CONSIDERED.

STEVE BLUM: It's good to be back.

KELLY: What is your reaction to yesterday's events, to the use of tear gas across the U.S.-Mexico border?

BLUM: Well, the use of tear gas anytime is unpleasant. It's intended to be. And it's an irritant, can be dangerous. But what I have understood is that Border Patrol agents felt threatened by rocks and other objects that were being hurled at them. And of course, a rock can be a lethal weapon. If it hit you in the head, you're just as dead as if you were shot by a bullet. So if they had legal authority to use it under the rules of use of force, then I guess it was an option open to them.

KELLY: One of the things that I think is maybe giving people pause is the photographs from the border yesterday were showing women and children who were downwind and who were affected by the tear gas. Those are optics. But does that change at all in your mind the way the situation should have unfolded?

KELLY: It sounds as though you are uncomfortable with some aspects of what unfolded yesterday.

BLUM: Everybody should be uncomfortable when you witness women and children in distress. It's unsettling - I'm sure - to the Border Patrol agents that employed the gas themselves. But they probably enjoyed less things thrown at them like rocks and other hard objects that were causing them harm.

KELLY: To the question about rules of engagement and whether lethal force might be permissible if U.S. forces fear for their own security, and then they feel the need to act in self-defense, President Trump has said, yes, that he can envision that scenario. Defense Secretary Mattis has said no, that he says military police should use batons, should use shields, should not be carrying firearms. Do you see the defense secretary playing a role here of trying to rein in the White House when it comes to questions about lethal force?

BLUM: (Laughter) I don't know. My personal opinion has been for 43 and a half years when I was in uniform, I always insisted that my soldiers were armed and able to protect themselves at all times. I feel very strongly that the armed forces of the United States are called the armed forces for a reason. And when you put soldiers where their safety is at risk, I want them to have the ability to defend themselves and protect others if necessary. That's pretty clear-cut.

KELLY: In this current situation, does that risk escalation?

BLUM: Not really. You could make the argument that if they realize the soldiers are able to protect themselves, nobody is going to take liberties with them. It actually may make the situation safer.
BLUM: I mean, pretend it's someone you care about that's in the armed forces, and they're down on the border. Everybody coming across there is not someone who's, you know, seeking asylum or even a better life. Some of those people are coyotes. They traffic in people. They're drug cartel members. And they do some horrific things, far worse than the effects of tear gas. And you have to be able to protect yourself.

KELLY: Do you see a distinction between military police and other active-duty members of the military in terms of authorization to use lethal force?

BLUM: Yes, absolutely I do, just as civilian law enforcement can do things that regular citizens can't do. But clearly, any soldier that's up front along the border, it would not be unreasonable for them to be armed.

KELLY: General Blum, thank you.

BLUM: Good talking to you again.

KELLY: That is retired Lieutenant General Steve Blum. He used to run the National Guard. He also was deputy commander at Northern Command, which is in charge of U.S. military forces at the southwest border.
Acute effects of hydroxyethylrutosides on capillary filtration in normal volunteers, patients with venous hypertension and in patients with diabetic microangiopathy (a dose comparison study).

The acute effects of hydroxyethylrutosides on capillary filtration were studied in 12 normal subjects, 25 patients with venous hypertension and 22 diabetics with microangiopathy. The two groups of patients randomly received a single oral dose (500 or 1000 mg) of hydroxyethylrutosides. A single dose of 500 mg was used for normal volunteers. Over the following 6 hours capillary filtration was studied with strain-gauge plethysmography. The decrease in capillary filtration was evident within the first hour and was at its peak between the second and fourth hour. After 6 hours it was still significantly below baseline values in patients. The 1000 mg dose was significantly more effective in both groups of patients. This study confirms the efficacy of hydroxyethylrutosides in decreasing capillary filtration. It suggests that the effect of one dose lasts at least 6 hours and also that the higher dose is more effective.
Association Between Plasma ADAMTS-9 Levels and Severity of Coronary Artery Disease

Genome-wide association studies have shown that a disintegrin and metalloproteinase with thrombospondin motifs 9 (ADAMTS-9) is associated with the development of atherosclerosis. We assessed the level of ADAMTS-9 in patients with coronary artery disease (CAD) and its severity and prognosis. We selected 666 participants who underwent coronary angiography in our hospital and met the inclusion and exclusion criteria; participants included non-CAD patients and patients with stable angina pectoris (SAP), unstable angina, non-ST-segment elevation myocardial infarction, or ST-segment elevation myocardial infarction. The serum level of ADAMTS-9 was higher in patients with CAD than in non-CAD patients (37.53 ± 8.55 ng/mL vs 12.04 ± 7.02 ng/mL, P < .001) and was an independent predictor for CAD (odds ratio = 1.871, 95% CI: 1.533-2.283, P < .001). Subgroup analysis showed that compared with the SAP group, the acute coronary syndrome groups had higher serum levels of ADAMTS-9. In addition, the level of ADAMTS-9 was related to the SYNTAX score (r = 0.523, P < .001). Patients with acute myocardial infarction (AMI) with elevated levels of ADAMTS-9 had a higher risk of major adverse cardiovascular events (MACE) within 12 months than those with lower levels (log-rank = 4.490, P = .034). Plasma ADAMTS-9 levels may be useful for the diagnosis of CAD and as predictors of MACE in AMI patients.
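The study's headline statistics (a between-group comparison, a per-unit odds ratio, and a correlation with the SYNTAX score) correspond to standard routines. A minimal Python sketch under assumed conditions; the simulated values below are hypothetical stand-ins, not the study's data:

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import ttest_ind, pearsonr

    rng = np.random.default_rng(0)

    # Hypothetical serum ADAMTS-9 levels (ng/mL) for CAD and non-CAD groups,
    # loosely shaped like the reported means and SDs.
    cad = rng.normal(37.5, 8.6, 200)
    non_cad = rng.normal(12.0, 7.0, 100)
    print("t-test p =", ttest_ind(cad, non_cad).pvalue)

    # Logistic regression: odds ratio for CAD per 1 ng/mL increase in ADAMTS-9.
    x = np.concatenate([cad, non_cad])
    y = np.concatenate([np.ones(len(cad)), np.zeros(len(non_cad))])
    fit = sm.Logit(y, sm.add_constant(x)).fit(method="bfgs", disp=False)
    print("OR per ng/mL =", np.exp(fit.params[1]))

    # Correlation with an (equally hypothetical) SYNTAX score.
    syntax = 0.5 * x + rng.normal(0, 5, size=len(x))
    print("Pearson r =", pearsonr(x, syntax)[0])

The published odds ratio comes from the authors' own multivariable model, so a univariable sketch like this only shows the shape of the calculation, not a reproduction of their numbers.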
Sebastián Domínguez

Early life

Domínguez was born in Buenos Aires and moved to Rosario at an early age with the relocation of his parents. There, he played in the youth system of Newell's Old Boys. His father coached Lionel Messi in the club's youth divisions.

Club career

Domínguez made his professional debut for Newell's Old Boys in 1998, at age 18. At an early age he was purchased by a third party and loaned to Talleres de Córdoba along with his teammate Maxi Rodríguez, but he was never authorized to play for the club and returned to Newell's shortly after. He played as a defensive midfielder until 2004, when he was moved to centre back by coach Américo Gallego, and he captained the team that won the 2004 Apertura tournament, thus breaking the club's 12-year title drought. The central defender played 18 games (out of 19) during the tournament, all of them as a starter. After winning the league championship with Newell's, Domínguez was purchased by the third-party owner Media Sports Investment for $2.5 million (US), which loaned him along with fellow Argentine players Carlos Tevez and Javier Mascherano to Brazilian side SC Corinthians. With his new club, he won the 2005 Campeonato Brasileiro Série A. During the January 2007 transfer window, Domínguez returned to Argentina to play for Estudiantes de La Plata, under Diego Simeone's coaching. He played with the team the whole year, after which he was signed by Mexican side América in January 2008. Although he helped the team to win the 2008 InterLiga and scored in the derby against Chivas de Guadalajara, he was released from his contract at year-end. On 27 January 2009, Vélez Sársfield signed the defender on a free transfer. He immediately established himself as a starter at centre back, playing alongside Nicolás Otamendi. Domínguez featured in all 19 games of his first tournament with Vélez, the 2009 Clausura, helping the team to claim the national championship. He was also a regular alongside Fernando Ortiz at centre back in Vélez's 2011 Clausura winning campaign (playing 16 games), as well as in the team's 2011 Copa Libertadores semi-finalist campaign (playing all 12 games and scoring one goal). Domínguez also helped Vélez to win the 2012 Inicial, starting 18 games and scoring two goals. In that year he was also selected by the fans of Vélez in an online poll as the club's best player of the year. In 2013, the defender obtained two further titles with Vélez: the 2012–13 Superfinal (defeating his former team Newell's Old Boys) and the 2013 Supercopa Argentina (defeating Arsenal de Sarandí), in both of which he was a starter for his team. During that year Domínguez also played his 200th official game for the club in a 1–1 draw with All Boys.

International career

On 31 August 2009 Domínguez was called up by coach Diego Maradona for the Argentine national team, along with fellow Vélez Sársfield teammates Nicolás Otamendi and Emiliano Papa. With Argentina, the defender started the World Cup qualifiers against Brazil and Paraguay (both defeats for Argentina). Domínguez was later called up by coach Alejandro Sabella for the 2011 and 2012 editions of the friendly competition Superclásico de las Américas, in which he captained the national team. He was also called up by Sabella for 2014 World Cup qualifying matches, including the last two against Peru and Uruguay. However, he was not part of the squad for the World Cup.

Personal life

Domínguez studied architecture, but did not get far in his studies. He plays guitar and harmonica and enjoys Argentine and British rock.
In 2014, Domínguez graduated as a football coach in Argentina.
Quantifying the Potential for Snow-Ice Formation in the Arctic Ocean

We examine the regional variations and long-term changes of the potential for snow-ice formation for level Arctic sea ice from 1980 to 2016. We use daily sea ice motion data and implement a 1-D snow/ice thermodynamic model that follows the ice trajectories while forcing the simulations with Modern-Era Retrospective analysis for Research and Applications, Version 2 and ERA-Interim reanalyses. We find there is potential for snow-ice formation in level ice over most of the Arctic Ocean; this has been true since the 1980s. In addition, the regional variations are very strong. The largest potential is typically found in the Atlantic sector of the Arctic Ocean, particularly in the Greenland Sea, where precipitation is highest. We surmise that, in addition to the annual amount of solid precipitation, the potential for snow-ice formation is controlled by two main factors: the initial second-year/multiyear ice thickness in the autumn and the timing of first-year ice formation.

Introduction

Snow on sea ice is a critical factor for sea ice evolution (Sturm & Massom, 2010). The high reflectance and thermal insulation properties of snow modulate the growth and decay of sea ice (Maykut, 1978; Sturm & Massom, 2010). Snow can also contribute to the sea ice thickness via superimposed ice (Kawamura, 1997) and snow-ice (e.g., Leppäranta, 1983) formation. Superimposed ice forms when meltwater from snow or rainfall percolates downward through the snowpack and refreezes at the ice/snow interface. Snow ice forms when seawater floods and refreezes at the snow/ice interface due to a heavy snow load that submerges the ice surface below sea level. Snow ice is therefore a mixture of snow and seawater, and it can provide a habitat for snow infiltration communities, with implications for carbon export and transfer. A younger and thinner Arctic sea ice system becomes more sensitive to snow accumulation. A relatively thick snow pack can limit the thermodynamic growth of sea ice and at the same time promote sea ice growth via snow-ice formation (Merkouriadi, Cheng, et al., 2017). This mechanism was observed in the Atlantic sector of the Arctic Ocean during the Norwegian young sea ICE (N-ICE2015) expedition, where negative freeboards were widespread. Snow ice is a common phenomenon in the seasonal sea ice zone and in the Southern Ocean. However, it has not been considered prevalent in the Arctic (Sturm & Massom, 2010), where thick perennial sea ice once dominated. It is unclear whether the snow ice observed during N-ICE2015 is common for this part of the Arctic or became a more widespread phenomenon due to the recent thinning of Arctic sea ice (Lindsay & Schweiger, 2015) and/or the increased frequency of storms that bring precipitation to the region. Here, we attempt to shed more light on the potential for snow-ice formation on Arctic sea ice. Our purpose is to examine the regional and interannual variations of the potential for snow-ice formation, from 1980 to 2016. We look separately into the response of first-year (FYI) and second-year (or older) (SYI/MYI) ice. We use the term "potential" because we specifically examine the conditions for negative freeboard and for level ice only. Although a negative freeboard is a precondition for snow-ice formation, in itself it is not sufficient to trigger flooding of the bottom of the snow pack.
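The negative-freeboard precondition follows directly from isostasy (Archimedes' principle): level ice of thickness h_i carrying snow of depth h_s floats with freeboard h_f = h_i - (rho_i*h_i + rho_s*h_s)/rho_w, so the surface is submerged once the snow load rho_s*h_s exceeds (rho_w - rho_i)*h_i. A minimal Python sketch with typical assumed densities (illustrative values, not the HIGHTSI model's parameters):

    # Densities in kg/m3: seawater, sea ice, snow (assumed typical values).
    RHO_W, RHO_I, RHO_S = 1025.0, 917.0, 330.0

    def freeboard(h_ice, h_snow):
        """Height of the ice surface above sea level (m); negative means
        flooding, and hence snow-ice formation, becomes possible."""
        return h_ice - (RHO_I * h_ice + RHO_S * h_snow) / RHO_W

    def critical_snow_depth(h_ice):
        """Snow depth (m) at which the freeboard of level ice reaches zero."""
        return (RHO_W - RHO_I) * h_ice / RHO_S

    print(freeboard(0.5, 0.20))       # about -0.01 m: negative freeboard
    print(critical_snow_depth(0.5))   # about 0.16 m of snow submerges 0.5 m ice

With these densities, thin first-year ice is submerged by a modest snowfall, which is consistent with the paper's later finding that FYI dominates the simulated snow-ice volume.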
We used information from ice motion and implemented a 1-D thermodynamic sea ice and snow model along these trajectories to examine the potential for snow-ice formation. The model was forced with two atmospheric reanalyses that have significant differences in precipitation amounts: one with relatively low (ERA-Interim, ERA-I) and one with relatively high (Modern-Era Retrospective analysis for Research and Applications, Version 2, MERRA-2) precipitation amounts over the Arctic Ocean. We implemented HIGHTSI in a Lagrangian framework to examine Arctic snow-ice distributions. Ice motion vectors are derived from satellite products and are provided by the National Snow and Ice Data Center. Based on the motion vectors, we performed Lagrangian tracking of ice parcels over the Arctic Ocean and its marginal seas from 1980 to 2016. This resulted in a daily sea ice motion product of 25-km spatial resolution. Throughout this period, ice parcels disappear and new parcels are generated. At any given time, the Arctic simulation domain can hold a total of 60,000 individual ice parcels. At each time step, the MicroMet meteorological preprocessor (Liston & Elder, 2006) was used to extract the atmospheric forcing based on the position of each ice parcel. Ice concentration data from Cavalieri et al. were used to initialize an ice parcel. We considered ice parcels initialized when ice concentration exceeded a 15% threshold. We used atmospheric data from the reanalyses to force HIGHTSI, including 10-m wind speed, 2-m air temperature and relative humidity, and total precipitation, while MicroMet provided the solid precipitation and the downwelling shortwave and longwave radiation. We used the ERA-I and MERRA-2 atmospheric reanalyses in order to examine the snow-ice sensitivity to the magnitude of precipitation over sea ice. These reanalyses have shown relatively good agreement for air temperature and the timing of precipitation events (Merkouriadi, Cheng, et al., 2017), although there is a warm bias in both products during the lowest temperatures in winter. Most importantly, they exhibit significant differences in the magnitude of precipitation, with ERA-I producing relatively low and MERRA-2 producing relatively high precipitation amounts (Merkouriadi, Cheng, et al., 2017). HIGHTSI simulations began each year on 1 August and ran through one full year at a time, using a 3-hr time step. Based on the ice motion and concentration information, ice parcels existing on 1 August were considered SYI/MYI. On 1 August we assumed that there is no snow on SYI/MYI. We performed model experiments with four different initial thicknesses for the existing SYI/MYI parcels on 1 August (h0 = 0.5, 1, 1.5, and 2 m). Thus, we conducted eight experiments in total, four with ERA-I and four with MERRA-2 forcing. An initial ice thickness of 2 m was likely more common in the 1980s and 1990s, whereas a thickness of 1.5 m or less is becoming more typical in recent years (Kwok & Untersteiner, 2011). We acknowledge that a uniform initial SYI/MYI thickness over the entire ice-covered Arctic Ocean is not realistic. However, our purpose is to examine the interdecadal sensitivity of snow-ice formation to the regional patterns and trends of weather conditions and sea ice motion. For the same reason, we chose a constant, low ocean heat flux (Fw = 1 W m⁻²).
In a similar study we carried out north of Svalbard, in a region where the ocean heat flux is of greatest importance due to the proximity to the North Atlantic, we concluded that the choice of ocean heat flux did not significantly affect the results (Merkouriadi, Cheng, et al., 2017). These simplifications allow us to examine the sensitivity of snow-ice formation to a limited number of factors, keeping in mind our level ice assumption. The outputs of the HIGHTSI model experiments for each ice parcel at each time step are snow-ice layer thickness, thermal ice thickness (i.e., total ice thickness minus snow-ice thickness), and snow depth. Here we only show results related to snow-ice thickness. Superimposed ice formation is also calculated in HIGHTSI, but it is not part of the snow-ice volume presented here; for computational efficiency, superimposed ice was not included in the analysis. Thermal ice thickness and snow depth results were analyzed to facilitate our interpretation and discussion. After we conducted the simulations, the model output was gridded to the 25 × 25 km Equal-Area Scalable Earth grid (EASE grid), provided by the National Snow and Ice Data Center. At each time step, the parcels' location was used to calculate the overlap between the parcel and the EASE grid cell. The overlap is calculated as a fractional area of the EASE grid cell. The fractional area was then multiplied by the sea ice concentration of the parcel, and the result was used to weigh the parcel's contribution to each EASE grid cell. This procedure of area- and concentration-weighted averages within the EASE grid cells conserves the examined parameters (a minimal sketch of this weighting is given below). In order to look separately into FYI and SYI/MYI, parcels existing on 1 August were considered to be SYI/MYI. New parcels that appeared after 1 August each year were considered to be FYI. Hereafter, for simplicity, we show results based on the MERRA-2 experiments because they generally give a better fit to observations (Merkouriadi, Cheng, et al., 2017). The results from all model experiments are summarized in the supporting information (Figures S1 and S2). First, we look at regional variations and long-term trends of snow ice in all ice types. Afterwards, we compare FYI and SYI/MYI.

Regional Variations

First, we examined the regional variations of the potential for snow-ice formation in the Arctic Ocean. In each year, we found the day of maximum snow-ice volume, and we extracted the snow-ice data from that day. We averaged the annual maximum snow-ice thicknesses over different decades (1980-1990, 1990-2000, and 2000-2016) because different decades are likely representative of different SYI/MYI initial thickness. The day of maximum snow-ice volume ranges from 21 March to 18 May and is on average 30 April for the period 1980-2016. This day has shifted earlier by 1 week in recent years compared to the 1980s. The decadal averages are shown in Figure 1. We find that there is potential for snow-ice formation over most of the Arctic Ocean. However, regional variations are strong (Figure 1). Snow ice is much more prominent in the Atlantic sector of the Arctic Ocean. The Greenland Sea has the greatest potential for snow-ice formation, with snow-ice layer thickness mostly above 0.1 m, reaching up to 0.5 m locally. This is observed in all the experiments. The Atlantic sector receives the highest precipitation in the Arctic (Merkouriadi, Gallet, et al., 2017) and is characterized by a large number of storm events in autumn and winter (Woods & Caballero, 2016).
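The area- and concentration-weighted regridding described above is, in essence, a conservative weighted average per grid cell. A minimal Python sketch, assuming the parcel-cell overlap fractions have already been computed (the overlap geometry itself is not reproduced here, and all numbers are hypothetical placeholders):

    from collections import defaultdict

    # Each record: (ease_cell_id, fractional_area_of_cell, ice_concentration,
    # parcel_value), where parcel_value is, e.g., snow-ice thickness in m.
    overlaps = [
        ("cell_1", 0.60, 0.95, 0.12),
        ("cell_1", 0.40, 0.80, 0.05),
        ("cell_2", 1.00, 0.90, 0.20),
    ]

    numerator = defaultdict(float)
    denominator = defaultdict(float)
    for cell, frac, conc, value in overlaps:
        w = frac * conc              # area- and concentration-based weight
        numerator[cell] += w * value
        denominator[cell] += w

    gridded = {cell: numerator[cell] / denominator[cell]
               for cell in numerator if denominator[cell] > 0}
    print(gridded)   # weighted mean per EASE grid cell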
These results from our model experiments are supported by the frequent observations of snow ice north of Svalbard during the N-ICE2015 campaign in spring 2015. The model experiments also indicate that the observations in 2015 were not exceptional, as there is high potential for snow-ice formation in this region during the entire period. However, we also note that on average, for most of the central Arctic and the Pacific sectors, snow-ice thickness is less than 0.05 m (<0.05-m probability = 61% for MERRA-2 and 85% for ERA-I). The maximum snow-ice thickness is obviously controlled by the initial SYI/MYI thickness in our model simulations. For smaller initial thickness, the area of higher potential for snow-ice formation propagates north, towards the central Arctic. Interestingly, in earlier decades, and for larger initial thickness, snow-ice potential in the Barents and Kara Seas was higher compared to recent years (2000-2016; Figure 1). We observe the same for the Chukchi Sea, where snow-ice potential is higher in 1980-1990 compared to later years, even for h0 = 2 m (Figure 1). This is likely related to the significant decrease of snow depth in the Barents and Chukchi Seas in recent years, compared to the climatology by Warren et al., as a result of more seasonal ice and thus a shorter accumulation period.

Long-Term Trends

We looked into the interannual variation and long-term trends for the period 1980-2016. To do this, we first calculated the total snow-ice volume on the day of maximum snow-ice volume in each year. Then we examined the trends of the maximum potential snow-ice volume, assuming all ice is level. The annual maximum snow-ice volume in the Arctic Ocean ranges from 615 ± 95 to 856 ± 153 km³ for MERRA-2 and from 367 ± 34 to 454 ± 51 km³ for ERA-I experiments. The annual maximum snow-ice volume from ERA-I is about half (55%) of the snow-ice volume from the MERRA-2 experiments (not shown). In MERRA-2, there is a statistically significant decrease in snow-ice volume, ranging from −6 to −12 km³/year, across experiments of different initial SYI/MYI thickness (Figure 2c). In ERA-I, there is no significant trend. Note that the trends are calculated from a constant initial SYI/MYI thickness throughout the examined period. Thus, the trends we show do not include any effect of the thinning of the SYI/MYI with time. The decreasing trends are possibly due to snow depth decline over most regions of the Arctic (except the Greenland Sea). Snow depth decline is likely related to the smaller ice extent and thickness in recent years (Comiso, 2002, 2012), which results in less snow accumulation on sea ice and a consequent decrease of the potential for snow-ice formation. However, in reality, thinner SYI/MYI in recent years would lower the rate of decrease, rendering the trend not significant. Even though the average trend of snow-ice volume in MERRA-2 is negative, there are strong regional variations (Figure 2a). The strongest negative trends are found in the Barents and Kara Seas, possibly the result of later FYI formation contributing to less snow accumulation in late fall. Sporadic, strong positive trends are found in the Greenland Sea and are most likely due to the intensification of storms that bring more precipitation to this part of the Arctic. We calculated the snow-ice fraction relative to sea ice thickness on the day of maximum snow-ice volume over the Arctic Ocean. The snow-ice fraction was on average 9-12% for MERRA-2 and 6-7% for the ERA-I experiments.
The snow-ice fraction demonstrates strong regional variations that follow very similar patterns to the snow-ice layer thickness (Figure 1). In the Greenland Sea, the snow-ice fraction is largest. For example, in MERRA-2 and for h0 = 1 m, the snow-ice fraction in the Greenland Sea ranges from 10% to 50%, and even up to 80% sporadically along the coast of Greenland. These high fractions along the coast of Greenland are found in all MERRA-2 and ERA-I experiments, regardless of the initial SYI/MYI thickness. There is a decline in snow-ice fraction of 0.06% per year in the MERRA-2 experiments (Figure 2d). The opposite is observed in the ERA-I experiments, where the snow-ice fraction increases by 0.03% per year (not shown). ERA-I produces less precipitation than MERRA-2. This results in less snow on sea ice, thicker sea ice, and therefore less snow-ice thickness in ERA-I compared to MERRA-2. Our experiments showed that the long-term trends of snow and sea ice volume are negative in both ERA-I and MERRA-2, but the rates are different. The increasing trend of snow-ice fraction in ERA-I indicates that the rate of snow loss (−9 km³/year) is relatively small compared to the rate of sea ice loss (−57 km³/year). For comparison, in MERRA-2, the rate of snow volume loss is −17 km³/year, and the rate of sea ice volume loss is −39 km³/year. Most of the snow volume decrease is explained by a decrease in sea ice extent: there is simply less ice for the snow to accumulate on top of.

FYI Versus SYI/MYI

FYI has a much larger contribution to the total snow-ice volume than SYI/MYI (Figure 3a). On average, FYI contributes 79% of the total snow-ice volume in MERRA-2 and 92% in ERA-I experiments. The SYI/MYI contribution obviously shows large variations depending on the initial thickness (Figure 3b), ranging from 13% to 37% in MERRA-2 and from 4% to 23% in ERA-I experiments. The smallest contribution corresponds to the thickest SYI/MYI initial thickness (h0 = 2 m), and the largest contribution corresponds to the thinnest SYI/MYI initial thickness (h0 = 0.5 m; Figure 3b). However, when we look at the snow-ice thickness, we notice that it can be larger in SYI/MYI compared to FYI, but only for the thinnest initial thickness (h0 = 0.5 m). For all other initial thickness scenarios, the mean snow-ice thickness in SYI/MYI equals or is less than that in FYI. Therefore, even though FYI contributes on average much more snow-ice volume, it does not consist of thicker snow ice. This is due to the fact that the extent of FYI is larger than that of SYI/MYI. On average, across all experiments, the FYI fraction of the total ice volume is 77% on the day of maximum sea ice volume.

Conclusions

We examined the potential for snow-ice formation on level ice in the Arctic Ocean by implementing a 1-D ice/snow model on sea ice trajectories over the period 1980 to 2016. There is potential for snow-ice formation (i.e., negative freeboard) over most of the Arctic Ocean, even in earlier decades when the extent of thick perennial sea ice was larger than in recent years. However, regional variations are evident. The largest potential snow-ice thicknesses are found in the Atlantic sector of the Arctic, especially in the Greenland Sea, where snow ice is mostly above 0.1 m, locally reaching 0.5 m, across all experiments. Snow-ice thickness for most of the central Arctic and the Pacific sectors is less than 0.05 m (<0.05 m probability = 61% for MERRA-2 and 85% for ERA-I).
The snow-ice fraction of the sea ice thickness on the day of maximum snow-ice volume averages 10% for MERRA-2 and 6% for ERA-I experiments. The snow-ice fraction values demonstrate regional variations, with patterns similar to the snow-ice thickness variations. The annual maximum snow-ice volume is obviously affected by the initial SYI/MYI thickness in our model experiments, ranging from 615 ± 95 to 856 ± 153 km³ for MERRA-2 and from 367 ± 34 to 454 ± 51 km³ for ERA-I experiments. For constant initial SYI/MYI thickness, we notice a significant decrease of the annual maximum snow-ice volume, ranging from −6 to −12 km³/year, across all MERRA-2 experiments. However, this is likely partly compensated by the thinning of SYI/MYI during the examined period, which can increase the potential for snow-ice formation. On average, FYI contributes 79% of the total snow-ice volume in MERRA-2 and 92% in ERA-I experiments. This is mainly because FYI occupies a greater area of the Arctic Ocean, an area that has also increased in recent years (Comiso, 2012). The Arctic Ocean is going through a transition from a predominantly multiyear sea ice system to a thinner, younger, first-year seasonal ice system. In a previous case study, Merkouriadi, Cheng, et al. showed that snow-ice formation is controlled by the thickness of the SYI/MYI in the beginning of the growth season and the timing of the FYI formation relative to precipitation events in autumn and early winter. By expanding this experiment to the whole Arctic, we show here that FYI is likely the main contributor of snow ice. Thus, we expect to find snow ice contributing to the sea ice mass balance and acting as a sink for the snow in the future Arctic, especially in the Atlantic sector, which has the largest precipitation. Snow contribution to sea ice via superimposed ice formation is accounted for in the HIGHTSI model, but it is not included in our snow-ice results and was not analyzed in this study. Superimposed ice is more likely to occur in late spring or early summer (later than the date of maximum snow ice we have analyzed here), when snow is melting. However, in a warming Arctic with a potential increase in rain-on-snow events, superimposed ice may become increasingly important; therefore, it should be examined more closely. This relatively idealized study examined conditions for level ice and assumed that snow ice forms given the condition of negative freeboard. Therefore, the absolute values of snow ice may be overestimated. Ice dynamics, including ridging and the opening of leads, with consequent snow loss, were not taken into account. We recommend that snow-ice formation be realistically accounted for in sea ice modeling studies in the Arctic Ocean, especially in the Atlantic sector, where the potential for snow-ice formation is highest. The choice of reanalysis has a significant impact on the results, mainly due to different precipitation amounts. This suggests that one has to be cautious when interpreting studies of snow on sea ice where only a single reanalysis has been used.
Scarlett Johansson on Playing "Iron Man 2's" Black Widow: "She's This Crazy Badass"

Scarlett Johansson will rock a skintight superhero suit in this weekend's "Iron Man 2" – and she has the attitude to go with it. "This is a no-b***s*** character," the star told the Summer 2010 issue of V Magazine of her role as the Black Widow. "It's not that she's non-feeling, she just gets the job done. She's part of something bigger, and she knows it. She's this crazy badass, and she has no time for f***ing around." The tough-as-nails role was one the actress, and wife of fellow big screen star Ryan Reynolds, told the mag fits with her new-found maturity. "I'm 25, and for some time I've played these characters who are kind of figuring it out, transforming from young girls to young women," she said. "I don't feel like a girl anymore. And I feel like my life and career are on a different path than they had been. There's a lot of road behind me." She also had praise for "Iron Man 2" director Jon Favreau, approving of his casting on the project – which included fresh franchise faces Mickey Rourke, Sam Rockwell and Don Cheadle as well as herself. "I think Jon just takes people that he really respects and squeezes them into superhero costumes," she laughed. "Iron Man 2" is due in theaters today.
Fortune 500: The Gold Standard of American Business Success?

The Fortune 500 shouldn't be the benchmark of business success. The Value Investor 500 is a better standard. The Fortune 500 is advertised this year as "The Gold Standard of Business Success." The list consists of the top 500 publicly-traded companies by revenue. While revenue is important, it should never be the gauge of success for a business or management team. You would never point to a morbidly obese person as the gold standard of healthy living, but this essentially is what the Fortune 500 does. The most important questions in business are how much value is created for others and how well resources are being used. Making revenue the be-all and end-all promotes bad behavior from CEOs, management teams, and boards. There is a better way. We tend to glorify the largest companies and their management teams; Fortune magazine calls the companies on the Fortune 500 "winners." However, their size says nothing about their health. Enron (No. 5 in 2002), WorldCom (No. 25 in 2000), Fannie Mae (No. 25 in 2003), and General Motors (No. 3 in 2007) all reached the top 25 of the Fortune 500 before failing spectacularly. Management teams inherently have an incentive to grow their company's size if executives' compensation is highly correlated with the size of the company, regardless of whether they are creating or destroying value for shareholders. The Fortune 500 also encourages empire building: the pursuit of growth to increase an organization's size, power, and influence with no regard to whether it is beneficial for shareholders (the owners of the company). Companies move up the Fortune 500 by growing the business and making acquisitions. You move down the list if your revenue falls compared to others'. When CEOs say things like "Our goal is to be a Fortune 500 company," or "We're not just some cheap Chinese company making a cheap phone, we're going to be a Fortune 500 company," or "Eventually, we're going to be a Fortune 100 company," beware: those could be the signs of an empire builder. A great example is provided by Procter & Gamble (NYSE:PG) and General Electric (NYSE:GE) in 2006. At the time these companies were, respectively, the No. 24 and No. 7 companies on the Fortune 500. Fortune interviewed the companies' CEOs, and the interviewer noted that "the same big idea motivates virtually everything they do -- another mantra easy to say but hard to execute: organic growth." The U.S. economy was growing at about 3% annually at the time. A.G. Lafley, the CEO of Procter & Gamble, had declared a goal of 4%-6% annual organic growth, just after his company had completed a $57 billion acquisition of Gillette. Jeffrey Immelt, CEO of General Electric, declared a goal of 8% organic growth. So while these men were leading the 24th and seventh-largest companies in the U.S., they aimed to grow those businesses two to three times faster than the U.S. economy. To meet P&G's growth targets, Lafley has to find about $7 billion of new revenue this year, equivalent to a company the size of Barry Diller's IAC/Interactive. At GE, Immelt has to find about $15 billion of new revenue, equal to the size of Nike. And if they succeed, of course, they'll have to turn around and find even more next year. How did it turn out? Procter & Gamble shareholders would have done just as well putting their money in an index fund, while GE investors would have done far better in an index fund.
The Fortune 500 also gives companies a disincentive to get smaller, since doing so drops them down the list. Fortune highlights those that are no longer on the Fortune 500 as the "dearly departed," even though companies that have become smaller and more focused are often truly in a better place than when they were overweight with multiple different types of businesses. A great example is Fortune Brands, which is not to be confused with Fortune magazine. Fortune Brands was a diversified holding company that operated businesses focused on home furnishings (Moen faucets, Simonton windows), golf products (Titleist), and liquor (Jim Beam). In 2011, Fortune Brands separated its three businesses into separate companies. Its golf business was bought out, while the remaining two businesses became Beam and Fortune Brands Home & Security (NYSE:FBHS). What was the result? While the businesses became smaller and no longer appeared on the Fortune 500, results at both Beam and Fortune Brands Home & Security improved. Beam was acquired in January 2014 by Suntory, and Fortune Brands Home & Security continues to do well. Thankfully, some management teams get this. In recent months, Hewlett-Packard, eBay, and Symantec have all announced plans to split up their various businesses to make them stronger. Historically, the gold standard meant how much gold a dollar could be exchanged for, making gold the benchmark for the value of a dollar. "The gold standard of American business success" thus implies that these companies are the benchmark for corporate success. However, the benchmark should be outstanding performance rather than size. Owner earnings answer the question "How much value is the business creating for its owners?" They measure how much cash flow the business generates for its owners after accounting for all capital expenditures required to maintain its long-term competitive position. In sports, your team has to win championships, or it really can't be called a great team. In business, the measure is financial: return on invested capital. I think that, to be considered great, a company must have sustained returns on invested capital substantially in excess of other companies in its industry. By taking the top 500 companies in the U.S. by free cash flow, and then ranking them by their five-year average return on invested capital, I came up with a ranked list of 500 U.S. companies, which I call the Value Investor 500. The Value Investor 500 is certainly far from perfect, but it is far closer to a gold standard of American business success than the Fortune 500. Among the top 25 companies on the Value Investor 500, you'll find some companies you would expect (Apple, Microsoft, MasterCard), as well as some names that might surprise you. Warren Buffett could easily be called the gold standard of American business success. Buffett went from humble beginnings to being one of the richest people in the world. His company, Berkshire Hathaway, has been slowly climbing the Fortune 500 for nearly 60 years. Berkshire first made the Fortune 500 in 1956 at No. 431, after the merger of two textile companies, Berkshire Fine Spinning Associates and Hathaway Manufacturing, which combined in an effort to survive the declining textile market in the northeastern U.S. Buffett bought Berkshire in the 1960s after it hit hard times and fell off the list. Buffett grew the business through investing, and in 1989 the company reemerged on the Fortune 500 at No. 205. Berkshire is No.
4 on the list for 2014. Why do I bring this up? The old management of Berkshire Hathaway thought the solution to the company's business problems was to merge, grow larger, and gain economies of scale in a declining business. After entering the Fortune 500 at No. 431, Berkshire Hathaway's ranking declined annually for the next three years before the company fell off the list in 1960. Buffett, on the other hand, built up Berkshire Hathaway by following the same principles as the Value Investor 500: he invested in businesses with long-term competitive advantages that produce sustainable earnings for their owners and allow for high returns on invested capital. If that method is good for one of the best investors in the world, consider it for yourself.
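For readers who want to reproduce the ranking, here is a minimal sketch of the two-step screen described above (top 500 by free cash flow, then ordered by five-year average ROIC). The data source, column names, and sample figures are illustrative assumptions, not the author's actual data pipeline.

```python
# A minimal sketch of the Value Investor 500 construction described above.
import pandas as pd

def value_investor_500(df: pd.DataFrame) -> pd.DataFrame:
    """Take the top 500 U.S. companies by free cash flow, then rank them
    by five-year average return on invested capital (ROIC)."""
    top_by_fcf = df.nlargest(500, "free_cash_flow")
    return (top_by_fcf
            .sort_values("roic_5yr_avg", ascending=False)
            .reset_index(drop=True))

if __name__ == "__main__":
    # Hypothetical input: one row per company with the two required metrics.
    companies = pd.DataFrame({
        "ticker": ["AAPL", "MSFT", "MA", "XYZ"],
        "free_cash_flow": [45e9, 27e9, 3e9, 1e8],   # dollars
        "roic_5yr_avg": [0.28, 0.22, 0.40, 0.05],   # fraction per year
    })
    print(value_investor_500(companies)[["ticker", "roic_5yr_avg"]])
```

The design choice mirrors the article's argument: free cash flow sets the size cutoff (value actually produced), while ROIC, not revenue, determines the ordering.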
Sentinel-2/MSI absolute calibration: first results Sentinel-2 is an optical imaging mission devoted to the operational monitoring of land and coastal areas. It is developed in partnership between the European Commission and the European Space Agency. The Sentinel-2 mission is based on a constellation of satellites deployed in a polar sun-synchronous orbit. It will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit rate (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). CNES is involved in the instrument commissioning in collaboration with ESA. This paper reviews all the techniques that will be used to ensure an absolute calibration of the 13 spectral bands better than 5% (target 3%), and presents the first results where available. First, the nominal calibration technique, based on an on-board sun diffuser, is detailed. Then, we show how vicarious calibration methods based on acquisitions over natural targets (oceans, deserts, and Antarctica during winter) will be used to check and improve the accuracy of the absolute calibration coefficients. Finally, the verification scheme, exploiting in-situ photometer measurements over the La Crau plain, is described. A synthesis, including spectral coherence, inter-method agreement and temporal evolution, concludes the paper.
Cryptocurrencies like bitcoin can make an attractive investment for many people on Wall Street. But famed financial analyst Gary Shilling, president of A. Gary Shilling & Company, says that this emerging currency is too opaque and complicated for him to invest in. The following is a transcript of the video. "Now listen, I want you to explain to me what this really is." "Well you know, OK, it's a controlled deal, and these minters, there's only so many of them." And I said: "How about the guys behind this?" "Well, you know, nobody ... well, we think we know who he is." "Great discovery, wonderful investment, but I won't tell you the details." And a lot of suckers in London invested in this. And the last line was: the guy collected all the money, closed up, left for the continent that evening, never to be heard of again. I'm just very suspicious of things that are not transparent. If I can't understand it, I don't want to invest in it.
The Republican plan to repeal and replace Obamacare looks troubled. With a key report predicting it will cut healthcare coverage for 24 million Americans, the plan drafted by House Speaker Paul Ryan and backed by President Donald Trump may not muster the votes to pass Congress. It’s even possible Republicans could set aside plans to repeal Obamacare to focus on other priorities, such as tax reform. But the GOP plan offers an important preview of changes in federal benefit programs that may be inevitable, and affect millions of Americans eventually. A key provision of the Ryan legislation, known as the American Health Care Act, or AHCA, involves cutbacks to Medicaid, which provides healthcare to the poor. The plan would cut $880 billion in Medicaid spending by 2026, forcing 14 million people out of the program, according to the nonpartisan Congressional Budget Office. The AHCA would also cut federal subsidies established under Obamacare, which help lower-income people pay for health insurance. That and a few other provisions would reduce the rolls of the insured by another 10 million. All told, the CBO predicts the number of uninsured Americans would rise to 52 million by 2026, which is slightly higher than the number before Congress passed Obamacare in 2010. Those sharp cutbacks in coverage could be the Ryan bill’s undoing, since several Republican senators say they won’t vote for legislation that reduces healthcare coverage in their states. That could bring the Senate tally well below the 50 votes needed for passage of the bill. But the time may not be far off when a more urgent budget crunch necessitates deep cutbacks somewhere—and today’s debate over healthcare reveals what the easiest targets are. Where to cut spending? The national debt is now $20 trillion, which is 105% of the nation’s GDP. Nobody’s sure when, but at some point Washington will hit its borrowing limit and have to start cutting spending. Four big programs—Medicare, Medicaid, Social Security and defense—account for about 60% of federal spending, so when it’s time for cutbacks, that’s where the big money is. Of those four programs, Medicaid is probably most vulnerable, since the lower-income people it covers have less political power than seniors covered by Medicare or Social Security, or the muscular defense industry. So the Ryan plan is probably an apt preview of a more strenuous effort to cut Medicaid in the future. The first to feel the hit would be people trying to enroll in Medicaid for the first time, since part of the Ryan plan would limit new enrollees, while current enrollees age out of the program and join Medicare instead. There would also be caps on the amount of federal Medicaid payments to states, instead of the open-ended system in place now. That could force states to reduce what they cover under Medicaid and find other ways to shave costs for those who do have coverage. A few states are already going further, which they’re permitted to do, since Medicaid is a hybrid state-federal program. Maine is considering a five-year lifetime limit on Medicaid enrollment for “able-bodied” people in its program, along with work and education requirements. A proposal in Kentucky would require Medicaid enrollees to work or demonstrate that they’re looking for work. At least 15 states require drug tests for various forms of public assistance, and Wisconsin wants to extend drug tests to people who receive food stamps, which would require federal approval. 
Seema Verma, Trump’s appointee to run the agency that oversees Medicare and Medicaid, favors such reforms and could ease rules on states to encourage more of them. Reining in federal entitlements Such tough-love provisions are generally considered the conservative approach, while liberals push for more generous aid to the needy. But the conservative view may become more mainstream if budget pressures worsen, for one simple reason: middle-income taxpayers will bear the brunt of a budget crunch, and many will come to support cutbacks to benefits that once seemed affordable. Lower-income families pay modest taxes, and sometimes no taxes at all, while drawing benefits funded by those who pay more in taxes. Generous entitlements become less appealing when taxes have to go up to keep them in place.
We don’t yet know the details of the Afghanistan escalation, but these headlines are the best guess we have now, based on White House leaks. We are in the deepest economic crisis in over 70 years, with the highest unemployment rates in living memory, and deep cuts in education and social services. We should not be forced to tighten our belts to pay for their imperial adventures. Do you think things are headed in the wrong direction? We in the San Diego Coalition for Peace and Justice think so, and we know that many of you agree with us. Join us on the day after Obama announces his war escalation plan. We will gather in front of Congressional Representative Susan Davis’s office to demand that she oppose any funding for the wars and occupations. End the Occupation of Afghanistan Now! Healthcare, Jobs and Education – No to War and Occupation! Tell your friends, coworkers, neighbors and fellow students! Bring your signs and banners! Spread the word! Bring candles and flashlights! Be there! I would support this if we were conquering other nations and not just wasting money. We should take over an oil-bearing nation or two and be done with it. Hell, let’s take over everything. That is what a large military is for: making other nations pay tribute, by choice or by force. Give me a few weeks at the big ole red button and you will see the strife end. It’s like our military is the World Police and our police are citizen harassers. So how are all those demonstrations working for you?
Bharat Sanchar Nigam Limited (BSNL), the state-run telecom service provider, has launched a new offer to take on the likes of Bharti Airtel, Vodafone India, Idea Cellular and Reliance Jio. Under the newly launched BSNL Chaukka offer, the telco is offering 4 GB of data per day for 90 days to customers recharging with the new STV of Rs 444. The operator said that the new promotional offer follows the Triple Ace Rs 333 offer, which comes with 3 GB of daily data for 90 days. “Under the new STV-444 ‘BSNL CHAUKKA’ of Rs 444, customers will get truly unlimited data with 4 GB of data per day and a validity of 90 days,” the telco said in a statement issued on Thursday. The new offer comes at a time when almost all telecom operators are offering bundled plans with 1-2 GB of daily data and unlimited voice service to take on Reliance Jio, which offers daily data with bundled voice and digital services through its Dhan Dhana Dhan plans for 90 days. It would be interesting to see whether any telco tries to match BSNL’s offer, which currently carries the highest per-day data limit in India. The offer effectively prices data at roughly Rs 1.2 per GB (Rs 444 for up to 360 GB over 90 days). R.K. Mittal, Director (CM), BSNL Board, said, “We are committed to providing affordable and efficient services to all segments of our mobile customers. We offer the best prices to our customers considering the present trend of the Indian telecom industry.”
Evidence-based practice in an age of relativism: toward a model for practice. Evidence-based practice (EBP) is considered a hallmark of excellence in clinical practice. However, many social workers are uncertain about how to implement this approach to practice. EBP involves integrating clinical expertise and values with the best available evidence from systematic research while simultaneously considering the client's values and expectations--all within the parameters of the agency mandate and any legislative or environmental considerations. This article explores the feasibility of EBP and attempts to steer a course between those who advocate an EBP model that may appear unachievable to many clinicians and those who dismiss it outright on philosophical grounds. Five areas that affect the feasibility of EBP are explored: misconceptions about EBP, confusion about philosophical issues, questions about the quality of evidence needed to support EBP, substantive knowledge domains required for practice, and issues related to knowledge transfer and translation. An important theme of this analysis is the central role of clinical judgment in all aspects of EBP.
The ongoing economic crisis in Venezuela is driving people to leave the country by the hundreds of thousands—often crossing borders on foot—seeking better lives in Brazil, Colombia, Ecuador, Peru, and beyond. They are fleeing a nation that now experiences frequent power outages and water shortages, and suffers from a severe lack of food and basic medical supplies. Hyperinflation has become such a burden that new currency was recently issued, at a conversion rate of 100,000 bolivars (old currency) to 1 sovereign bolivar (new). The IMF estimated that Venezuela’s rate of inflation might reach 1,000,000 percent this year. Just this week, several new economic measures will take effect, including a more-than-3,000 percent hike in the minimum wage. The rising numbers of refugees are causing problems in bordering countries as well, with countries like Ecuador and Peru tightening restrictions on immigration. Gathered here: a look inside Venezuela over the past few months, and at some of those who chose to leave their ailing country behind.
Who and Where: People and Location Co-Clustering In this paper, we consider the clustering problem on images where each image contains patches in the people and location domains. We exploit the correlation between the people and location domains, and propose a semi-supervised co-clustering algorithm to cluster images. Our algorithm updates the correlation links at runtime and produces clusterings in both domains simultaneously. We conduct experiments on a manually collected dataset and a Flickr dataset. The results show that such correlation improves clustering performance. INTRODUCTION Given a large corpus of images, we want to cluster them such that semantically related images are grouped in one cluster. The semantics of an image refer to the information that the image carries. For example, the face in an image is usually used to identify who; the background refers to the location where the person was. All components together convey what story has happened. In our case, we focus on two entities: who and where. While there has been considerable work on automatic face recognition in images, and even a modest effort on location recognition, the coupling of the two is basically unexplored. An image which contains both people and location implies the co-occurrence of instances in the two domains. For example, multiple photos taken at the same private location increase the confidence that similar faces in those photos are from the same person. Within a short time window, the same person appearing in several photos indicates the affinity of the locations. Our framework, shown in Fig. 1, consists of two domains: people and locations. We take into account the inter-relation between the two domains to enhance clustering in each domain. Three types of relations are considered: people-people, location-location, and people-location. A set of image patches is extracted and described in each domain. The similarity between patches within each domain is defined based on visual appearance. Co-occurrence constraints are satisfied if patches from the two domains appear in the same image. This relationship reflects a consistency of clustering results that is not captured by visual appearance in a single domain. We formulate the clustering task as an optimization problem which aims to minimize the within-cluster distances and maximize the consistency across domains. We show this problem can be converted to semi-supervised kernel k-means clustering, similar to prior work; however, we generate clustering results for the two domains at the same time. During the iterative clustering process, constraints across domains and within domains are kept updated. The main idea is that the clustering result in one domain can aid the clustering in the other domain. We validate our approach with photos gathered from personal albums and a set of public photos crawled from Flickr. Our contributions are threefold: 1) we propose a co-clustering algorithm for image clustering, focusing on people and locations; the algorithm couples both domains and explores the underlying cross-domain relations; 2) our algorithm simultaneously produces the clustering results for people and locations, and outperforms clustering each domain separately as well as a baseline co-clustering algorithm; 3) our algorithm is formulated as an optimization problem, which can be solved through semi-supervised kernel k-means. It is robust and converges fast in practice.
RELATED WORK Faces are an important kind of visual object in images and are crucial for identifying people. In recent years, there have been many efforts in face detection, recognition and clustering. The basic idea is either to represent a face as one or multiple feature vectors, or to parameterize the face based on a template or deformable model. In addition to treating faces as individual objects, some researchers have sought help from context information, such as background, people co-occurrence, etc. Davis et al. developed a context-aware face recognition system that exploits GPS tags, timestamps, and other meta-data. Lin et al. proposed a unified framework to jointly recognize the people, location and event in a photo collection based on a probabilistic model. Most location clustering algorithms rely on the bag-of-words model. Large-scale location clustering has recently been demonstrated using GPS information to break the large-scale task down into a set of smaller tasks. Hays et al. proposed an algorithm for estimating a distribution over geographic locations from a query image using a purely data-driven scene-matching approach; they leveraged a dataset of over 6 million GPS-tagged images from Flickr. When temporal data is available in the corpora, it also helps to localize sequences of images. However, clustering in the people and location domains is usually treated as two separate tasks, and location patches in photos with faces are not well exploited. Since GPS information is not always accessible, we propose a co-clustering algorithm which uses patches of the photo itself to discover the correlation between these two domains. OUR APPROACH In this section, we present the co-clustering framework to simultaneously cluster images in the people and location domains. We have two major steps. The first step is pre-processing: we extract face and location patches from the corpus of images and compute the visual features. The next step is co-clustering: the people-people, location-location and people-location relations are generated and updated. We describe the details of each step below. Pre-processing We here describe how to extract features from the people and location domains, and how to discover the relations between both domains. People Domain: We use the Viola-Jones face detector to extract face patches from an image. To obtain high accuracy, a nested detector is applied to reduce the false-positive rate. Every face has a corresponding face patch, and all face patches are normalized to the same size. We adopt an existing algorithm to detect seven facial landmarks in each extracted face patch. For each input face patch, four landmarks (outer eye corners and mouth corners) are registered to pre-defined positions using the perspective transform; all seven facial landmarks are then aligned by the computed transform. For each landmark, two SIFT descriptors at different scales are extracted to form the face descriptor. We build a face graph over all face patches in the image collection. In the graph, each vertex represents a face patch, and the weight of an edge is the similarity of the face descriptors of the two face patches. Location Domain: For each image, the Hessian-affine covariant detector is used to detect interest points, and a SIFT descriptor is extracted at every interest point. A method similar to the work of Heath et al. is used to discover the shared locations in the image collection.
Content-based image retrieval is applied to find the top related images and avoid quadratic pairwise comparisons. Lowe's ratio test is used to find initial correspondences, and RANSAC is used to estimate the affine transform between a pair of images and compute feature correspondences. For every location patch, two types of features are extracted: a bag of visual words and a color histogram. The bag-of-words descriptor summarizes the frequency with which prototypical local SIFT patches occur; it captures the appearance of component objects, and for images taken at an identical location it typically provides a good match. The color histogram characterizes certain scene regions well. These two types of features are concatenated to represent the location patch. A location graph is built similarly to the face graph: each vertex represents a location patch, and the weight of an edge is the similarity of the location descriptors of the two patches. Inter-relations across Domains To co-cluster across the people and location domains, several basic assumptions are made, as follows. Cannot Match Link. One person cannot appear twice in one image, so there is a cannot-match link between any pair of face patches in the same image (we do not consider exceptions such as photo collages or mirrors in the image). If two locations are far apart according to ground truth, e.g., GPS signals, and two face patches appear at these two locations within a short time period, there is a cannot-match link between this pair of patches. This assumption comes from the fact that people cannot teleport within a short time period; for example, one person cannot appear in San Francisco and in New York within an hour. Must Match Link. Two location patches are connected by a must-match link if an affine transform is found between them during the location graph construction; because the links verified by RANSAC have high accuracy, we trust that they connect patches of the same location. Two location patches are also connected by a must-match link if they appear in the same image: two different buildings may appear in the same image, so in our setting a location is defined as an area which may contain different backgrounds. Finally, two location patches are connected by a must-match link if they co-occur with the same person within a short time period; this assumption also comes from the fact that people cannot move too fast. Possible Match Link. Two face patches that appear at the same location but not in the same image probably belong to the same person, due to the strong co-occurrence between the location and the face. This holds if the place has special meaning to the person, for example, a home or office that he or she visits frequently. However, the assumption is not always true: at tourist attractions, everyone takes photos, so those locations do not contribute much to clustering in the people domain. A weight is therefore needed to distinguish public locations from private ones: private locations are more helpful for clustering in the people domain, while public locations introduce much noise. Problem Formulation We formulate our people and location co-clustering as an optimization problem. Given a set of feature vectors $X = \{x_i\}_{i=1}^{n}$, the goal of standard k-means in each domain is to find a k-way disjoint partitioning $(S_1, \ldots, S_k)$ such that the objective $\sum_{c=1}^{k} \sum_{x_i \in S_c} \|x_i - m_c\|^2$ is minimized, where $m_c$ is the center of cluster $S_c$.
The matrix $E$ is defined by the pairwise squared Euclidean distances among the data points, $E_{ij} = \|x_i - x_j\|^2$. We introduce an indicator vector $z_c$ for the cluster $S_c$, where $z_c^T z_c$ is the size of cluster $S_c$ and $z_c^T E z_c$ gives the sum of $E_{ij}$ over all $x_i$ and $x_j$ in $S_c$. The matrix $\tilde{Z}$ is defined such that its $c$-th column equals $z_c/(z_c^T z_c)^{1/2}$; $\tilde{Z}$ is an orthonormal matrix, $\tilde{Z}^T \tilde{Z} = I_k$. Let $N_F$ be the number of face patches and $N_L$ the number of location patches; $k_F$ is the number of face clusters and $k_L$ the number of location clusters. By considering the relations between the people and location domains, we write the objective as the sum of four terms: $f_F$, $f_L$, $f_{FL}$ and $f_{LF}$. $E_F$ and $E_L$ are the pairwise squared Euclidean distance matrices in the people and location domains. To integrate the must-match and cannot-match constraints, the distance of a must-match link is set to $0$ and the distance of a cannot-match link to $+\infty$. $f_F$ and $f_L$, with constraints $\tilde{Z}_F^T \tilde{Z}_F = I_{k_F}$ and $\tilde{Z}_L^T \tilde{Z}_L = I_{k_L}$, are the standard k-means optimization problems in the people and location domains respectively. The binary people-location co-occurrence matrix $C_{FL}$ is defined so that the $i$-th column of $C_{FL}$ indicates the location patches that co-occur with face patch $i$. For example, if the first column of $C_{FL}$ is $(0, 0, 1, 0, 1, 0, \ldots)^T$, the first face patch co-occurs with the third and fifth location patches in the same image. $C_{FL}\tilde{Z}_F$ is a clustering of location patches based on the face clustering result $\tilde{Z}_F$. Our goal is to maximize the consistency between the location clustering $\tilde{Z}_L$ and $C_{FL}\tilde{Z}_F$. Location patches are weighted differently to reflect the different semantic meanings of people-location interactions. The definitions of $f_{FL}$ and $f_{LF}$ are similar except for the weight matrices $T_i$ and $P$. $f_{FL}$ optimizes the consistency that locations co-occurring with the same people within a short time period should be one location. $T_i$ is an $N_L \times N_L$ binary diagonal matrix whose non-zero diagonal entries indicate location patches taken within a short time period; for example, $T_i = \mathrm{diag}(0, 1, 0, 1, 1, \ldots)$ means the second, fourth and fifth location patches have similar timestamps. There are $t$ time constraints, which are learned automatically from the meta-data of the images. $f_{LF}$ optimizes the consistency that private locations are useful to identify people. $P$ is an $N_L \times N_L$ diagonal weight matrix that assigns a score to each location patch: private locations receive larger weights and public locations smaller ones. Concretely, $P_{ii}$ is close to $0$ at public locations such as landmarks and close to $1$ at private locations; if $L_i$ is the location cluster that patch $i$ belongs to, $P_{ii}$ decreases with $N^{FL}_i$, the number of people appearing in location $L_i$. Alternative Optimization The optimization problem is not convex when the optimization variables involve both $\tilde{Z}_F$ and $\tilde{Z}_L$. Therefore, we use alternating optimization: we fix the variables in one domain, optimize over the other variables, and iterate. When fixing variables, e.g., $\tilde{Z}_F$, the problem becomes a semi-supervised kernel k-means problem, which can be solved easily. We iterate in the sequence $\tilde{Z}_L \to \tilde{Z}_F \to P \to \tilde{Z}_L \to \tilde{Z}_F \to P \to \cdots$ until convergence. The first $\tilde{Z}_F$ and $\tilde{Z}_L$ are computed using standard kernel k-means without cross-domain relations.
After the initial clustering results are known, the weight matrix $P$ can be computed, and in the following iteration the semi-supervised kernel k-means integrates the cross-domain relations. Semi-supervised Kernel K-means We now briefly describe the existing semi-supervised kernel k-means algorithm. The objective is the minimization of the standard k-means objective plus constraint terms, where $M$ is the set of must-match link constraints, $C$ is the set of cannot-match link constraints, $w_{ij}$ is the penalty cost for violating a constraint between $x_i$ and $x_j$, and $c_i$ refers to the cluster label of $x_i$. The first term in this objective function is the standard k-means objective, the second term is a reward function for satisfying must-match link constraints, and the third term is a penalty function for violating cannot-match link constraints. The penalties and rewards are normalized by cluster size: if two points with a cannot-match link constraint fall in the same cluster, we penalize more if the corresponding cluster is smaller; similarly, we reward more if two points with a must-match link constraint fall in a small cluster. Thus, we divide each $w_{ij}$ by the size of the cluster that the points are in. Let $A$ be the similarity matrix with $A_{ij} = x_i^T x_j$ and let $\Delta$ be the matrix such that $\Delta_{ij} = A_{ii} + A_{jj}$. Then $E = \Delta - 2A$. By replacing $E$ in the trace minimization, the problem is equivalent to the minimization of $\mathrm{tr}(\tilde{Z}^T (\Delta - 2A - 2W) \tilde{Z})$. Since $\mathrm{tr}(\tilde{Z}^T \Delta \tilde{Z}) = 2\,\mathrm{tr}(A)$ is a constant, it can be ignored in the optimization. This leads to the maximization of $\mathrm{tr}(\tilde{Z}^T (A + W) \tilde{Z})$. If we define a matrix $K = A + W$, our problem is expressed as the maximization of $\mathrm{tr}(\tilde{Z}^T K \tilde{Z})$ and is mathematically equivalent to unweighted kernel k-means. Alternative Optimization If $\tilde{Z}_F$ is fixed and $\tilde{Z}_L$ is optimized, the objectives $f_{LF}$ and $f_{FL}$ can be rewritten, and we obtain the maximization of $\mathrm{tr}(\tilde{Z}_L^T (2A_L + \sum_{i=1}^{t} W_{L_i} + Q_L) \tilde{Z}_L)$, where $A_L$ is the affinity matrix in the location domain. This optimization problem can be solved by setting the kernel matrix $K_L = 2A_L + \sum_{i=1}^{t} W_{L_i} + Q_L$. Similarly, if $\tilde{Z}_L$ is fixed and $\tilde{Z}_F$ is optimized, using the fact that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ we can rewrite $f_{LF}$ and $f_{FL}$ and solve the resulting problem by setting the kernel matrix $K_F = 2A_F + \sum_{i=1}^{t} W_{F_i} + Q_F$, where $A_F$ is the affinity matrix in the face domain. EVALUATIONS We conduct experiments on two datasets to validate our approach. The first dataset contains images collected from personal albums with labeled ground truth; the second uses a larger dataset crawled from the online photo service Flickr. We choose k-means with constraints as the baseline algorithm. We also compare the performance of clustering on each single domain by normalized cuts and by k-means without any constraints. We use the RandIndex to evaluate the performance of the clustering. Personal Albums This dataset contains 111 images collected from personal albums, with 11 people and 13 locations in total. In the location domain, the top 50 image candidates are selected for pairwise geometric verification. For each image, the bounding box of matched interest points is extracted as the location patch. A bag-of-words histogram (1000 visual words) and a 256-bin color histogram are extracted from each location patch, so the feature dimension in the location domain is 1,256. We use a weight ratio of 1:1 for BoW:color features. All feature vectors are L2-normalized.
In total, there are 146 face patches and 266 location patches. Each image in the dataset carries a timestamp in its Exif header; mean-shift is used to cluster images along the time sequence, and a matrix $T_i$ is defined for each cluster of images. As a baseline, we cluster the face and location patches using normalized cuts based on their appearance features. K-means with constraints is also compared, by adding the initial must-match and cannot-match links in each domain. Figure 2 shows results in the people domain and the location domain. From Figure 2, we observe a steady improvement in the clustering results when the number of clusters is larger than 2. K-means with constraints is quite sensitive to the number of clusters. The best RandIndex values of the methods across all K values are ordered: co-clustering, k-means with constraints, and normalized cuts; the values for the methods other than co-clustering do not vary much. The performance gain of co-clustering in the location domain is very significant, resulting mainly from the must-match links within the location domain. For the people domain, the difference in clustering performance is large, but the steady increase over K is still promising. Online Photo Sets Dataset preparation: We use 140 names of public figures to query Flickr and filter out images without geo-location information, collecting 53,800 images in total; we then filter out images without faces. The ground truth of the people domain is obtained directly from the names. The ground truth of the locations is obtained by clustering the longitude and latitude associated with the images: we use agglomerative clustering, treating each geo-location (longitude, latitude) as a point in two-dimensional space, and set the number of locations in this dataset to 100. Figure 3 shows RandIndex values in the people domain and the location domain comparing k-means, normalized cuts and co-clustering over different K values. The improvement is not as large as on the personal album dataset, mainly because of noise in the image set: the ground truth of the location domain is clustered from geo-location information, which is not necessarily equal to the location shown in the image, and the ground truth of the people domain can also contain noise, e.g., different people with the same name may appear within one cluster. Future work includes finding efficient algorithms to deal with this noise. CONCLUSION We presented a novel algorithm to co-cluster people and locations simultaneously. The relations across domains are used to enhance the clustering in each single domain. We validated our approach using two datasets, and the experiments show that our algorithm performs better than the baselines.
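To make the core computational step concrete, here is a minimal sketch of the semi-supervised kernel k-means routine that the paper's alternating optimization calls in each domain: constraints are folded into the kernel as $K = A + W$, and points are reassigned by feature-space distance. The features, constraint weights, and cluster count below are illustrative assumptions, not the paper's actual settings.

```python
# A minimal sketch of semi-supervised kernel k-means with K = A + W.
import numpy as np

def kernel_kmeans(K: np.ndarray, k: int, n_iter: int = 50, seed: int = 0) -> np.ndarray:
    """Cluster n points given an n x n kernel matrix K by iteratively
    reassigning each point i to the cluster c minimizing the feature-space
    distance K_ii - (2/|S_c|) sum_{j in S_c} K_ij + (1/|S_c|^2) sum_{j,l in S_c} K_jl."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            members = labels == c
            m = members.sum()
            if m == 0:
                dist[:, c] = np.inf   # skip empty clusters
                continue
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(axis=1) / m
                          + K[np.ix_(members, members)].sum() / m**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                     # converged
        labels = new_labels
    return labels

# Usage: build the affinity A from features, add the constraint matrix W.
X = np.random.default_rng(1).normal(size=(20, 5))
A = X @ X.T                          # similarity matrix A_ij = x_i^T x_j
W = np.zeros_like(A)
W[0, 1] = W[1, 0] = 10.0             # reward: must-match link between 0 and 1
W[0, 2] = W[2, 0] = -10.0            # penalty: cannot-match link between 0 and 2
print(kernel_kmeans(A + W, k=3))
```

In the full co-clustering loop, this routine would be applied alternately with kernels $K_L = 2A_L + \sum_i W_{L_i} + Q_L$ and $K_F = 2A_F + \sum_i W_{F_i} + Q_F$, recomputing the weight matrix $P$ between rounds.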
Large-Area Nanofabrication of Partially Embedded Nanostructures for Enhanced Plasmonic Hot-Carrier Extraction When plasmonic nanoparticles are coupled with semiconductors, highly energetic hot carriers can be extracted from the metal-semiconductor interface for various applications in light energy conversion. However, the current quantum yields for hot-electron extraction are generally low. An approach for increasing the extraction efficiency consists of maximizing the contact area between the surface of the metal nanostructure and the electron-accepting material. In this work, we developed an innovative, simple, and scalable fabrication technique that partially embeds colloidal plasmonic nanostructures within a semiconductor TiO2 layer without utilizing any complex top-down nanofabrication method. The successful embedding is confirmed by scanning electron microscopy and atomic force microscopy imaging. Using visible-pump, near-IR probe transient absorption spectroscopy, we also provide evidence that the increase in the surface contact area between the nanostructures and the electron-accepting material leads to a...
BACKGROUND To study the effect of human papillomavirus (HPV) 16 E6/E7 and TPA (12-O-tetradecanoylphorbol-13-acetate) on malignant transformation of human embryo oral tissue. METHODS A recombinant plasmid carrying HPV 16 E6/E7 was constructed and transfected into human embryo oral tissue. Oral tissue with or without the HPV 16 E6/E7 gene was inoculated subcutaneously into the right shoulder of scid mice. The study was conducted in four groups: the first group was oral tissue transfected with the HPV 16 E6/E7 plasmid plus TPA, inoculated into 8 scid mice; the second group was oral tissue transfected with the HPV 16 E6/E7 plasmid alone, inoculated into 6 scid mice; the third group was normal oral tissue plus TPA, inoculated into 6 scid mice; and the final group was normal oral tissue alone, inoculated into 5 scid mice. Three days after inoculation, TPA was injected at the left shoulder of the mice once a week. Twelve weeks after inoculation, tumors were found in 7 scid mice from the first group. The HPV 16 E6/E7 gene in tumor tissues was analyzed by PCR. RESULTS The rate of tumor formation was 7/8 in the first group; no tumors were found in the other groups. The pathological diagnosis of the tumors was fibrohistiocytoma. The HPV 16 E6/E7 gene was detected by PCR in the tumor tissues. CONCLUSION With the cooperating action of TPA, human oral tissue containing the HPV 16 E6/E7 gene can undergo malignant transformation in scid mice.
Scientific position paper of the Movement Disorder Society on the evaluation of surgery for Parkinson's disease. Task Force on Surgery for Parkinson's Disease of the American Academy of Neurology Therapeutic and Technology Assessment Committee. There are numerous options now available for the surgical treatment of patients with Parkinson's disease (PD). The field is advancing rapidly, and there are many variations in the techniques, even for a single procedure. This has led to some confusion as to what is inappropriate, what is experimental, and what is now acceptable for standard care. The Therapeutics and Technology Assessment Committee of the American Academy of Neurology assembled a task force to write a position paper on this topic, which has now been approved by the Academy and published in Neurology [1]. The document has also been approved by the Scientific Issues Committee (Chair, David Brooks, MD) and the International Executive Committee of the Movement Disorder Society. A brief summary of the results is given here; the original should be consulted for details. The document was prepared by formal procedures and is based entirely on the published literature through December 1998. Almost all papers included were class III, were peer-reviewed, used validated methods of assessment, and provided consistent clinical data. Although most studies considered were prospective, only two included a concurrent control group, a requirement necessary to meet class II evidence [2,3]. Publications in 1999 are noted here as updates, but the recommendations have not been altered.
Linear perturbations of Einstein-Gauss-Bonnet black holes We study linear perturbations about non-rotating black hole solutions in scalar-tensor theories, more specifically Horndeski theories. We consider two particular theories that admit known hairy black hole solutions. The first one, Einstein-scalar-Gauss-Bonnet theory, contains a Gauss-Bonnet term coupled to a scalar field, and its black hole solution is given as a perturbative expansion in a small parameter that measures the deviation from general relativity. The second one, known as 4-dimensional-Einstein-Gauss-Bonnet theory, can be seen as a compactification of higher-dimensional Lovelock theories and admits an exact black hole solution. We study both axial and polar perturbations about these solutions and write their equations of motion as a first-order (radial) system of differential equations, which enables us to study the asymptotic behaviours of the perturbations at infinity and at the horizon following an algorithm we developed recently. For the axial perturbations, we also obtain effective Schrödinger-like equations with explicit expressions for the potentials and the propagation speeds. We see that while the Einstein-scalar-Gauss-Bonnet solution has well-behaved perturbations, the solution of the 4-dimensional-Einstein-Gauss-Bonnet theory exhibits unusual asymptotic behaviour of its perturbations near its horizon and at infinity, which makes the definition of ingoing and outgoing modes impossible. This indicates that the dynamics of these perturbations strongly differs from the general relativity case and seems pathological. I. INTRODUCTION With the advent of gravitational wave (GW) astronomy, it is now possible to explore directly, via GW signals, the strong gravity regime that characterises the merger of two black holes.
So far, GW observational data are in agreement with the general relativity (GR) predictions, but it is important to test this further with the more precise and more abundant data that will become available in the near future. In parallel, as a way to guide the analysis of future data, it is useful to anticipate possible deviations from the GR predictions by exploring alternative theories of gravity. While the full description of a black hole merger in a model of modified gravity might be a daunting task, given the complexity that it already represents in GR, the ringdown phase of the merger appears simpler to consider in a broader range of theories, since it involves the study of perturbations of a single black hole. It is nevertheless already a challenging task, since black hole solutions in modified gravity theories are much more involved than GR solutions. In the present work, we restrict our study to the case of non-rotating black holes in scalar-tensor theories, corresponding to the case of most known solutions. The most general family of scalar-tensor theories with a single scalar degree of freedom are known as degenerate higher-order scalar-tensor (DHOST) theories, and perturbations of non-rotating black holes within this family, or sub-families such as Horndeski theories, have been investigated in several works. For black hole solutions in Horndeski theories with a purely radially dependent scalar field, the axial and polar perturbations were investigated in earlier works, in both cases by reducing the quadratic action for linear perturbations to extract the physical degrees of freedom. This analysis was later extended to include a linear time dependence of the background scalar profile, although the stability issue was subsequently revisited. Axial modes were further discussed in the context of general DHOST theories. The perturbations of stealth black holes in DHOST theories have also been investigated, and, beyond non-rotating black holes, perturbations of stealth Kerr black hole solutions have been analysed as well. The approach adopted in these works relies on the definition of master variables in order to rewrite the quadratic Lagrangian for perturbations in terms of the physical degrees of freedom. The procedure to identify the master variables can however be quite involved, as illustrated by the case of stealth black holes. It is moreover strongly background-dependent, and a general procedure might not exist. In recent work, we introduced another approach which focuses on the asymptotic behaviours of the perturbations, allowing us to identify the physical degrees of freedom in the asymptotic regions, namely at spatial infinity and near the horizon. Since quasinormal modes are defined by specific boundary conditions (outgoing at spatial infinity and ingoing at the horizon), this is in principle sufficient to understand their properties and compute them numerically. Their asymptotic behaviour can also be used as a starting point to solve numerically the first-order radial equations and thus obtain the QNM complex frequencies and the corresponding radial profiles of the modes. In the present paper, we study scalar-tensor theories involving a Gauss-Bonnet term, and we focus our attention on two types of models. First, we consider Einstein-scalar-Gauss-Bonnet (EsGB) theories, which contain a scalar field with a standard kinetic term, coupled to the Gauss-Bonnet combination.
We also investigate a specific scalar-tensor theory (4dEGB) that can be seen as a 4d limit of Gauss-Bonnet, in which it is possible to find an exact black hole solution. Both EsGB and 4dEGB models belong to DHOST theories, and more specifically to the Horndeski theories (which we prove using the expression of Lovelock invariants as total derivatives). They however involve cubic terms in second derivatives of the scalar field. We thus need to slightly extend the formalism introduced previously, which was limited to terms up to quadratic order, to include these additional terms. Perturbations of non-rotating black holes in EsGB theories have been investigated numerically before. In the present work, we revisit the analysis of these perturbations by applying our asymptotic formalism to the perturbative description of EsGB black holes presented recently. We also give the Schrödinger-like equation for the axial modes. Concerning the 4dEGB black hole, the present work is, to our knowledge, the first investigation of its perturbations. For axial perturbations, the system contains a single degree of freedom, and one can reformulate the equations as a Schrödinger-like equation. The outline of the paper is the following. In the next section, we introduce and extend the formalism that describes linear perturbations about static spherically symmetric solutions in scalar-tensor theories, allowing for second derivatives of the scalar field up to cubic order in the Lagrangian. Section III focusses on Einstein-scalar-Gauss-Bonnet theories. After discussing the background solution, which is known analytically only as a perturbative expansion, we consider first the axial modes and then the polar modes. We write their equations of motion and find their asymptotic behaviours near the horizon and at infinity, which is a necessary requirement to define and compute quasi-normal modes. We then turn, in section IV, to the 4dEGB black hole solution, which is treated similarly. We find that while the Einstein-scalar-Gauss-Bonnet solution has well-behaved perturbations, the solution of the 4-dimensional-Einstein-Gauss-Bonnet theory exhibits unusual asymptotic behaviour of its perturbations near its horizon and at infinity, which makes the definition of ingoing and outgoing modes impossible. We discuss these results and conclude with a summary and some perspectives. Technical details are given in several appendices. II. FIRST-ORDER SYSTEM FOR HORNDESKI THEORIES In this work, we study models that belong to Horndeski theories, which are included in the general family of DHOST theories. Their Lagrangian can be written in the form (2.1), where $R$ is the Ricci scalar of the metric $g_{\mu\nu}$, $E^{\mu\nu}$ is the Einstein tensor, and we use the shorthand notations $\phi_\mu \equiv \nabla_\mu \phi$ and $\phi_{\mu\nu} \equiv \nabla_\mu \nabla_\nu \phi$ for the first and second (covariant) derivatives of $\phi$ (we have also noted $\Box\phi \equiv \phi^\mu{}_\mu$). The functions $F$, $P$, $Q$ and $G$ generically depend on the scalars $\phi$ and $X \equiv \phi_\mu \phi^\mu$, and a subscript $X$ denotes a partial derivative with respect to $X$. In the following, we will consider only shift-symmetric theories, where these functions depend only on $X$. In a theory of the above type, we consider a non-rotating black hole solution, characterised by a static and spherically symmetric metric, which can be written as (2.2), $ds^2 = -A(r)\,dt^2 + \frac{dr^2}{B(r)} + C(r)\,r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)$, and a scalar field of the form $\phi(t,r) = q\,t + \psi(r)$. We have included here a linear time dependence of the scalar field, which is possible for shift-symmetric theories and was discussed in particular for 4dEGB black holes, but later we will assume $q = 0$.
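The display equation labelled (2.1) did not survive in this copy. As a hedged reconstruction, assuming the standard shift-symmetric Horndeski form with the quartic and quintic functions named above (normalisation conventions may differ from the authors'), the Lagrangian would read:

$$\mathcal{L} = F(X)\,R + F_X\!\left[(\Box\phi)^2 - \phi_{\mu\nu}\phi^{\mu\nu}\right] + P(X) + Q(X)\,\Box\phi + G(X)\,E^{\mu\nu}\phi_{\mu\nu} - \frac{G_X}{6}\!\left[(\Box\phi)^3 - 3\,\Box\phi\,\phi_{\mu\nu}\phi^{\mu\nu} + 2\,\phi_{\mu\nu}\phi^{\nu\rho}\phi_{\rho}{}^{\mu}\right],$$

the last bracket being the cubic combination of second derivatives of the scalar field that motivates the extension of the formalism mentioned in the text.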
In the rest of this section, we discuss the axial and polar perturbations in general, before specialising our discussion to the specific cases of EsGB and 4dEGB black holes in the subsequent sections. A. First-order system for axial perturbations Axial perturbations correspond to the perturbations of the metric that transform like $(-1)^{\ell+1}$ under parity transformation, when decomposed into spherical harmonics, where $\ell$ is the usual multipole integer. Writing the perturbed metric as $g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu}$, where $\bar{g}_{\mu\nu}$ denotes the background metric (2.2) and $h_{\mu\nu}$ the metric perturbation, and working in the traditional Regge-Wheeler gauge and in the frequency domain, the nonzero axial perturbations depend only on two families of functions $h_0^{\ell m}$ and $h_1^{\ell m}$, while the scalar field perturbation is zero by construction for axial modes. In the following, we drop the $(\ell m)$ labels to shorten the notation. As discussed in App. D, the system of 10 linearised metric equations for $h_0$ and $h_1$ can be cast into a two-dimensional system by considering only the $(r,\theta)$ and $(\theta,\varphi)$ components of the equations and using a two-dimensional vector $Y$ built from $h_0$ and $h_1$; this requires introducing an auxiliary function, whose denominator $\mathcal{F}$ is defined from the background quantities, and which vanishes when $q = 0$. The resulting first-order system is of the form $dY/dr = M\,Y$, where the matrix coefficients depend on the theory and on the background solution. We have also introduced a parameter built from the multipole $\ell$, which we will be using instead of $\ell$ itself. Notice that the matrix $M$ coincides with earlier results in the cases where $G = 0$ and $C = 1$. B. Schrödinger-like equation and potential As discussed in detail previously, one can introduce a new coordinate variable $r_*$, defined from a function $n(r)$ according to (2.13), and find a linear change of functions such that the initial system (2.9) is transformed into an equivalent system in the "canonical" form (2.15). At this stage $n(r)$ is arbitrary, and the function $V$ is given, in terms of the functions characterising the theory and of the background metric, by the expression (2.16); this formula generalises earlier results to an arbitrary areal function $C(r)$. When the auxiliary function vanishes, which is the case for $q = 0$, the system (2.15) immediately leads to a Schrödinger-like second-order equation (2.17) for the first component $\hat{Y}_1$ of the transformed vector: a wave equation, written in the frequency domain, with a potential $V$ and a propagation speed $c$ given by (2.18). As expected, the speed of propagation depends on $n$, i.e. on the choice of the radial coordinate. When the auxiliary function does not vanish, one can still get an equation of the form (2.17), but only after a change of time variable. C. First-order system for polar perturbations Polar perturbations involve the metric components $\delta g_{ab}$, where the indices $a, b$ belong to $\{t, r\}$, together with an angular part; the scalar field perturbation is parametrised by one more family of functions $\delta\phi^{\ell m}$. In the following we will consider only the modes $\ell \geq 2$ (the monopole $\ell = 0$ and the dipole $\ell = 1$ require different gauge-fixing conditions) and we drop the $(\ell m)$ labels to lighten notations. One can show that the $(t,r)$, $(r,r)$, $(t,\theta)$ and $(r,\theta)$ components of the perturbed metric equations, which are first order in radial derivatives, are sufficient to describe the dynamics of the perturbations. Therefore, the linear equations of motion can be written as a first-order differential system, satisfied by a four-dimensional vector $Y$, one of whose components is proportional to the scalar field perturbation. The precise proportionality factor depends on the background solution and will be given explicitly later in each of the two cases considered in this paper. The form of the square matrix $M$ can be read off from the equations of motion.
III. EINSTEIN-SCALAR-GAUSS-BONNET BLACK HOLE In this section, we specialise our study to the case of Einstein-scalar-Gauss-Bonnet (EsGB) theories, where one adds, to the usual Einstein-Hilbert term for the metric, a non-standard coupling of a scalar field to the Gauss-Bonnet term (3.1). Analytical non-rotating black hole solutions were first found for specific coupling functions, and rotating solutions in the same setups; a solution for an arbitrary coupling was obtained later, as was a solution with an additional cubic galileon coupling. All these solutions are given as expansions in a small parameter appearing in the coupling function; this small parameter parametrises the deviation from GR. A. Action The Einstein-scalar-Gauss-Bonnet action consists of the Einstein-Hilbert term, a standard kinetic term for the scalar field, and the coupling $f(\phi)\,\mathcal{G}$, where $f(\phi)$ is an arbitrary function of $\phi$ and $\mathcal{G}$ is the Gauss-Bonnet term in 4 dimensions. Although this action is not manifestly of the form (2.1), its equations of motion can be shown to be second order, which means that the theory can be reformulated as a Horndeski theory. This is explicitly shown in Appendix A, working directly at the level of the action. The corresponding Horndeski functions are expressed in terms of $f^{(n)}(\phi)$, the $n$-th derivative of $f(\phi)$ with respect to $\phi$. B. Background solution To find a static black hole solution, we start with the ansatz corresponding to the gauge choice $C = 1$ in (2.2), keeping $A$ and $B$ independent. An alternative choice would have been to assume $B = A$ and keep $C$ free (see for example earlier work). When the coupling function $f$ is a constant, the term proportional to $\mathcal{G}$ in the action becomes a total derivative and is thus irrelevant for the equations of motion, which are then the same as in GR with a massless scalar field. One thus immediately obtains as a solution the Schwarzschild metric with a constant and uniform scalar field $\phi = \phi_\infty$, where $\phi_\infty$ is an arbitrary constant. When $f(\phi)$ is not constant, the above configuration is no longer a solution, but it can nevertheless be considered as the zeroth order of the full solution written as a series expansion in a parameter $\epsilon$ assumed to be small, as initially proposed and recently developed in the literature. Hence, we expand the metric components and scalar field as series in powers of $\epsilon$ (up to some order $N$), with coefficient functions $a_i$, $b_i$ and $s_i$ that can be determined, order by order, by solving the associated differential equations obtained by substituting the expansions into the equations of motion. One can see that the metric equations of motion expanded up to order $N$ involve $a_N(r)$, $b_N(r)$ and $s_{N-1}(r)$, while the scalar equation of motion at order $N$ relates $a_{N-1}(r)$, $b_{N-1}(r)$ and $s_N(r)$. It is then possible to use this separation of orders to solve the equations of motion order by order. We need boundary conditions to integrate these equations, and we impose that all these functions go to zero at spatial infinity. At first order in $\epsilon$, integrating the equations introduces three integration constants. The boundary conditions at spatial infinity force the third of these constants to vanish. Furthermore, the sum of the first two, which can be interpreted as a shift of the black hole mass at first order in $\epsilon$, can be absorbed by redefining the mass parameter. Finally, the remaining terms proportional to the second constant can be absorbed by a change of the radial coordinate. As a consequence, at first order in $\epsilon$, one simply recovers the background solution given in Eq.
(3.5), up to a change of mass and a change of coordinate. As for the scalar field, its equation of motion yields, at first order, a solution involving two integration constants. One can obviously absorb the first constant into a redefinition of φ_∞, while the second one is chosen so that s_1(r) remains regular at the horizon. At second order, one can repeat the same method to solve for a_2, c_2 and s_2. One can ignore the five integration constants that appear, since they can be reabsorbed using the boundary conditions, the mass redefinition and the coordinate change, as previously. At the end, the metric and scalar functions are obtained explicitly, together with a new constant defined in (3.18). In principle, it is possible to continue this procedure and find all coefficients up to some arbitrary order N in a finite number of steps, but the complexity of the expressions quickly makes the computations very cumbersome. Here, we stop at second order, but one could proceed similarly for the next orders, for instance to reach the numerical precision in the computation of quasinormal modes attained in previous works. Once the higher-order corrections to the metric functions are taken into account, the black hole horizon is no longer at its zeroth-order location but is slightly shifted to a new value r_h. Since r_h is known only as a power series in the coupling, it is more convenient to work with the dimensionless radial variable z introduced in (3.20), in terms of which the horizon is exactly located at z = 1, at any order.

C. Axial modes: Schrödinger-like equation

Let us now turn to the study of perturbations about this black hole solution, starting with axial perturbations (note that perturbations for a specific choice of the coupling function were previously studied in the context of a stability analysis). As we have seen in section II A, the first-order system for the axial modes can be written in the form (2.9); here it depends only on the remaining background functions, since the function associated with the time dependence of the scalar field vanishes (because q = 0). In terms of the new radial coordinate z, these functions can be written explicitly up to second order in the coupling, as in (3.21). When the coupling goes to zero, one recovers the standard Schwarzschild expressions. By substituting these expressions into (2.18) and (2.16) and choosing n(z) = A(z), one can then obtain (up to second order) the propagation speed and the potential. These quantities are illustrated in the corresponding figure for several values of the coupling. Note that the potential is plotted as a function of the "tortoise" coordinate z*, defined similarly to r* in (2.13) with n = A. Substituting the expression of A(z), obtained from (3.7), (3.15) and (3.20), one deduces, in particular, the asymptotic behaviours of z* at spatial infinity and at the horizon. Notice that, all along the paper, we will be using the symbol ≃ for an equality up to sub-dominant terms in the z variable, when z ≫ 1 at infinity and z − 1 ≪ 1 at the horizon. More precisely, given two functions f(z) and g(z), we say that f(z) ≃ g(z) at z_0 (which can be here z_0 = ∞ or z_0 = 1) when the ratio f(z)/g(z) tends to 1 as z → z_0. Noting that c tends to 1 and V vanishes both at the horizon and at spatial infinity, the asymptotic behaviour of (2.17) is simply that of a free wave equation, where we have rescaled the frequency appropriately. As a consequence, at spatial infinity, using (3.27), the asymptotic solution is a superposition of ingoing and outgoing waves, while the solution near the horizon takes a similar form, where we have used (3.28) to replace z* by its expression in terms of z. Finally, the constants A_∞, B_∞, A_hor and B_hor can be fixed or partially fixed by appropriate boundary conditions.
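As a quick numerical illustration of the tortoise coordinate (using only the zeroth-order, Schwarzschild-like profile A(z) = 1 − 1/z, without the corrections in the coupling), one can integrate dz*/dz = 1/A(z) and observe the logarithmic divergence at the horizon:

```python
# Numerical sketch of z* for the zeroth-order profile A(z) = 1 - 1/z (horizon at z = 1).
import numpy as np
from scipy.integrate import quad

def A(z):
    return 1.0 - 1.0 / z

def zstar(z, zref=2.0):
    # z* is defined up to an additive constant; we anchor it at z = zref.
    val, _ = quad(lambda s: 1.0 / A(s), zref, z)
    return val

for z in (1.001, 1.01, 2.0, 10.0, 100.0):
    print(f"z = {z:8.3f}   z* = {zstar(z):10.3f}")
# Analytically z* = z + ln(z - 1) + const: it diverges to -infinity
# logarithmically at the horizon and grows like z (plus ln z) at infinity.
```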
D. Axial modes: first-order system and asymptotics

In this subsection, we show that the asymptotic solutions obtained previously in (3.32) and (3.33) from the Schrödinger-like equation can be recovered directly from the first-order system, which corresponds to (2.9) with the definitions (3.21). We will be making use of the algorithm presented in our earlier work.

1. First-order system and asymptotics: brief review and notations

The goal of the algorithm is to find a set of functions so that the original system is reexpressed in the simpler form (3.34), where the matrices D_i are diagonal and x is a new variable, defined such that the asymptotic limit considered corresponds to x → +∞. For spatial infinity, we simply use x = z, whereas we choose x = 1/(z − 1) for the near-horizon limit z → 1. More precisely, we start with the system whose matrix M is immediately obtained from (2.9) with (3.21); we then make the change of variable from z to x if necessary, and finally the algorithm provides us with the transfer matrix P̃ defining the appropriate change of functions (3.36), so that the new matrix M̃ takes the diagonal form (3.34). Hence, we obtain immediately the asymptotic behaviour of the solution by integrating the diagonal first-order differential system (3.34). We apply this procedure in turn to the spatial infinity and near-horizon limits, up to second order in the coupling.

2. Spatial infinity

At spatial infinity, the coordinate variable is z, and one expands the initial matrix M in powers of z. Applying the change of functions (3.36), with the transfer matrix provided by the algorithm, one obtains a new matrix which is diagonal up to order 1/z². Hence, the corresponding system can immediately be integrated, and we find the behaviour of Y and of the metric coefficients (2.6) at infinity, where c_± are the integration constants. As expected, one recovers the same combination of modes as in (3.32).

3. Near the horizon

To study the asymptotic behaviour near the horizon, it is convenient to use the coordinate x defined by x = 1/(z − 1). Then, we study the behaviour, when x goes to infinity, of the corresponding system. The expansion of the matrix M_x in powers of x⁻¹ yields (3.44). The algorithm provides us with the transfer matrix P, whose entries are expansions in powers of 1/x with coefficients depending on the rescaled frequency and on the coupling, and one obtains a new differential system with a diagonal matrix M̃_x (3.46). Integrating the system immediately yields the near-horizon behaviour of the metric components, expressed in terms of the variable z, where c_± are integration constants (different from those introduced in (3.41)). As expected, we recover the combination of modes found in (3.33) from the Schrödinger-like formulation.

E. Polar modes

For polar modes, there is no obvious Schrödinger-like formulation of the equations of motion, so the simplest approach is to work directly with the first-order system. The latter involves a matrix S which is singular in the GR limit where the coupling goes to zero. This problem can be avoided by a suitable rescaling of the functions, leading to a system (3.50) that remains well defined in the GR limit, where z is the dimensionless coordinate introduced in (3.20). We now consider in turn the two asymptotic limits.

1. Spatial infinity

We start by computing the expansion of M in powers of z; the leading matrices involve the constant defined in (3.18). The matrices M_i with i ≥ 1 are more involved than the three matrices above, and we do not give their expressions here.
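The first step of the algorithm can be illustrated on a toy 2 × 2 system (invented matrices, not the actual axial ones): diagonalising the leading-order term separates the solution into ingoing and outgoing branches at large x.

```python
# Toy sketch: diagonalising the leading-order matrix of dY/dx = M(x) Y
# separates the solution into exponential (in/outgoing) branches at large x.
import numpy as np

omega = 0.5                               # toy real frequency
M0 = np.array([[0.0, 1.0],
               [-omega**2, 0.0]])         # toy leading term; eigenvalues +/- i*omega

eigvals, P = np.linalg.eig(M0)            # columns of P are eigenvectors
M0_tilde = np.linalg.inv(P) @ M0 @ P      # transfer-matrix step of the algorithm
print(np.round(eigvals, 6))               # [0.+0.5j, 0.-0.5j]
print(np.round(M0_tilde, 12))             # diagonal: decoupled equations
# Integrating the decoupled system gives Ytilde_i ~ exp(eigvals[i] * x),
# i.e. the e^{+i omega x} and e^{-i omega x} behaviours quoted in the text.
```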
Nonetheless, some of them enter the algorithm briefly recalled earlier, in section III D 1, which enables us to diagonalise the differential system (3.50). The asymptotic diagonal form at infinity cannot immediately be obtained from equation (3.50), as the leading-order matrix M_{−2} is nilpotent. As discussed in our earlier work, for this special subcase of the algorithm one must first obtain a diagonalisable leading-order term by applying a preliminary change of functions, which gives a new matrix M, as in (3.37), whose leading-order term is now diagonalisable. The diagonalisation of the leading term can then be performed using a further transformation, which yields a matrix M whose leading term is diagonal. One thus finds four modes propagating at speed c = 1, two ingoing and two outgoing. We expect them to be associated with the scalar and polar gravitational degrees of freedom. In order to discriminate between the scalar and gravitational modes, it is useful to pursue the diagonalisation up to next-to-leading order. This can be done by following, step by step, the algorithm, which leads us to introduce successive transfer matrices. Hence, we obtain a new vector Y whose corresponding matrix M is diagonal up to second order in the coupling. As a consequence, we can now easily integrate the equation for Y up to subleading order when z ≫ 1, and we obtain a combination of modes with integration constants c_± and d_±, where, denoting by ω̂ the rescaled frequency, the gravitational modes satisfy g^∞_±(z) ≃ e^{±iω̂z*}, while the scalar modes behave as s^∞_±(z) ≃ e^{±iω̂z} z^{−1±iω̂}, up to corrections quadratic in the coupling (3.61). The two modes g^∞_± follow the same behaviour as the axial modes obtained in (3.40): those can be dubbed gravitational modes, while the other two modes s^∞_± correspond to scalar modes. We can then determine the behaviour of the metric perturbations K, H_1 and H_0, and of the scalar perturbation, by combining the successive transfer matrices P^{(i)}, with i = 1, …, 4, whose leading-order coefficients can be written explicitly. Hence, the metric and the scalar perturbations are non-trivial linear combinations of the so-called gravitational and scalar modes. This shows that the metric and the scalar variables are dynamically entangled.

2. Near the horizon

The asymptotic behaviour of polar perturbations near the horizon is technically more complex to analyse than the previous case, because we need more steps to "diagonalise" the matrix M and then to integrate the system for the perturbations asymptotically. However, the procedure is straightforward following the algorithm. For this reason, we do not give the details of the calculation but instead present the final result. After several changes of variables, one obtains a first-order differential system satisfied by a vector whose corresponding matrix M̃ has a leading-order term M_{−1} given, up to second order in the coupling, by (3.65). One recognises that the coefficients of M_{−1} correspond to the leading-order term in the asymptotic expansion of ±iω̂z* around z = 1, given in (3.28). Integrating, one indeed finds that the solution is a combination of four polar modes (given up to second order in the coupling), where c_± and d_± are integration constants. Several remarks are in order. First, exactly as in the analysis of the asymptotics at infinity, one cannot discriminate between the gravitational mode and the scalar mode at leading order, since they are equivalent at this order. Going to next-to-leading order would be needed in order to further characterise each mode. Second, computing the behaviour of each mode at the horizon in terms of the metric perturbation functions, in a similar way to what was done at spatial infinity, is possible but not enlightening, since the expressions are very involved.
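The nilpotent subcase can also be illustrated with a toy example (invented 2 × 2 matrices, not the actual polar system): a "shearing" change of functions mixes successive orders in 1/x and renders the leading term diagonalisable.

```python
# Toy sketch of the "shearing" step used when the leading matrix is nilpotent.
import sympy as sp

x, a = sp.symbols("x a", positive=True)
M0 = sp.Matrix([[0, 1], [0, 0]])          # nilpotent leading order (M0**2 = 0)
M1 = sp.Matrix([[0, 0], [a, 0]])
M = M0 + M1 / x

Q = sp.diag(sp.sqrt(x), 1)                # shearing transformation Y = Q * Ytilde
M_tilde = sp.simplify(Q.inv() * M * Q - Q.inv() * sp.diff(Q, x))
sp.pprint(M_tilde)
# New leading term ~ x**(-1/2) * [[0, 1], [a, 0]]: diagonalisable, with
# eigenvalues +/- sqrt(a/x), so the algorithm can now proceed as before.
```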
Finally, notice that the results above, (3.61) and (3.69), are consistent with the behaviours found in the literature, as one can see in their equations (6.62) and (6.63).

IV. 4D EINSTEIN-GAUSS-BONNET BLACK HOLE

In this section, we study another modified theory of gravity that involves the Gauss-Bonnet invariant G defined in (3.2). Its action is given by (4.1), where α is a constant coupling and E denotes the Einstein tensor. This action can be obtained as the 4D limit, in some specific sense, of the D-dimensional Einstein-Gauss-Bonnet action. As for Einstein-scalar-Gauss-Bonnet theories, this theory also belongs to degenerate scalar-tensor theories. It can be recast into a Horndeski theory with the following functions (see Appendix A): (4.2). We will also assume that α > 0, since otherwise |α| is constrained to be extremely small.

A. Background solution

Let us now consider static spherically symmetric solutions. By solving the equations of motion for the metric and the scalar field derived from the action (4.1), one can find a simple analytical solution. The metric function A is given by (4.4). This reduces to the Schwarzschild metric in the limit α → 0, the mass parameter μ corresponding to twice the black hole mass in this limit. If μ² < 4α, the solution is a naked singularity and is therefore of no interest. If μ² ≥ 4α, the solution for the metric describes a black hole, and its horizons can be found by solving the equation A(r) = 0 for r. This gives two roots, the largest one corresponding to the outermost horizon (4.5). The equation for the scalar field gives two different branches. Integrating this equation in the limit where r is large (i.e. r ≫ r_h), one obtains the behaviour (4.7). Hence, the '+' branch leads to a divergent behaviour of the scalar field at spatial infinity. In this branch, moreover, the scalar field does not vanish when the black hole mass goes to zero, and we will see later that the perturbations also feature a pathological behaviour. For these reasons, we will mostly restrict our analysis to the '−' branch. In the following, it will be convenient to use dimensionless quantities, in particular λ = α/r_h². According to these definitions and (4.5), one can replace μ by (1 + λ)r_h. Note that 0 ≤ λ ≤ 1, and one can notice that both bounds can be reached: the case λ = 0 is the GR limit, while the case λ = 1 is an extremal black hole, as both horizons merge into one located at r_h = √α. The parameter λ is therefore similar to the extremality parameter Q/M for a charged black hole, and it is interesting to use it instead of α when studying the present family of black hole solutions. Moreover, the outermost horizon is now at z = 1, and the metric function can be rewritten as in (4.10). Since the scalar field profile depends on √A, as shown in (4.6), it is also convenient to introduce the new function f(z) = √(A(z)). The dynamics of axial modes is described by a canonical system of the form (2.9). Substituting (4.2), (4.6), (4.10), (4.11) into the definitions (2.8) and (2.10)-(2.11), and rescaling all dimensionful quantities by the appropriate powers of r_h to make them dimensionless (or, equivalently, working in units where r_h = 1), one gets explicit expressions for F, ∆ and the other background functions entering the axial system, where we have used the explicit definition of f(z) and the expression of its derivative to obtain a simplified form. Here, we have kept the branch parameter unfixed: as we can see, it appears in the expression of F, which means that the choice of branch becomes relevant for the perturbations of the black hole solution.
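For concreteness, the form of this solution commonly quoted in the 4dEGB literature is reproduced below (conventions assumed here: μ plays the role of twice the black hole mass and α is the Gauss-Bonnet coupling; normalisations of α may differ between references):

```latex
% 4dEGB metric function, horizons and extremality parameter (common conventions)
A(r) = 1 + \frac{r^2}{2\alpha}\left(1 - \sqrt{1 + \frac{4\alpha\mu}{r^3}}\right),
\qquad
r_\pm = \frac{\mu}{2} \pm \sqrt{\frac{\mu^2}{4} - \alpha}\,,
\qquad
\lambda \equiv \frac{\alpha}{r_h^2} \in [0, 1]\,.
```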
In the sequel, it will be convenient to express the quantities (4.12)-(4.14) in terms of three auxiliary functions of z. A short calculation then leads to the expressions (4.19). When we study the perturbations and their asymptotics, it is important to look at the zeros and the singularities of the expressions (4.19). For this reason, we quickly discuss the zeros of these auxiliary functions. We note that, for z > 0, the third function is strictly positive, while the second one vanishes at a single root z_2. This root is only relevant in our analysis if it lies outside the horizon, i.e. when z_2 > 1, which is the case if λ ≥ λ_c for a critical value λ_c of the extremality parameter. Hence, when λ < λ_c, the second function remains strictly positive outside the horizon. Let us note that at the special value λ = λ_c, the zeros of f and of the second function coincide. Finally, the position of the zeros of the first function depends on the choice of branch. For the '−' branch, the product of factors entering this function is always positive, since f ≥ 0, and therefore the function remains strictly positive outside the horizon. By contrast, for the '+' branch, one finds numerically that the first function has a zero z_1 > 1. This is another reason (in addition to the behaviour of the scalar field at infinity discussed below (4.7)) to restrict our analysis to the '−' branch. Let us summarise. When λ < λ_c and the '−' branch is chosen, the auxiliary functions do not vanish outside the horizon, and then none of the three background functions entering the axial system vanishes or has a pole for z > 1. Near the horizon, these functions behave as in (4.24); at infinity, the behaviour is much simpler, as the three functions (4.19) are constant and tend to 1. The propagation speed and the potential for ℓ = 2 are represented in the corresponding figure for three different values of λ satisfying the condition λ < λ_c. We observe that the propagation speed diverges at the horizon z = 1, while the potential vanishes at this point. The potential can be negative in some region for sufficiently large values of λ. It is difficult to study the sign of the potential analytically, but one can compute its derivative when z* → −∞, and one finds that it remains positive up to some threshold value λ*(ℓ). We find numerically that λ*(ℓ = 2) ≈ 0.162917, and that λ*(ℓ → ∞) = λ_c. In terms of the new coordinate z*, the Schrödinger-like equation takes the form (4.28). The left-hand side of this equation can be seen as an operator acting on the space of functions that are square-integrable with respect to the measure w dz*. It is instructive to study the asymptotic behaviour of the solutions of (4.28), near the horizon and at spatial infinity. Near the horizon, using dz* = dz/f² and (4.24), one finds the behaviour (4.29) of z*, together with the asymptotic behaviours of the potential and of w, where C_1 is a constant. It is immediate to rewrite these asymptotic expressions in terms of z*, using (4.29). Near the horizon, for z* → −∞, the potential decays faster than the right-hand side of (4.28), so that the differential equation reduces to a simpler form whose solutions are expressed in terms of the modified Bessel functions of order 0, I_0 and K_0, with A_1 and A_2 integration constants. Since I_0(u) ≃ 1 and K_0(u) ≃ −ln u when u → 0, the general solution behaves as an affine function of z* when z* → −∞ and is therefore square-integrable with respect to the measure w dz* ≃ e^{z*/2} dz*. This means that the endpoint z* → −∞ is of limit-circle type (according to the standard terminology of spectral theory). Interestingly, the analysis of the axial modes near the horizon in our case is very similar to the analysis near a naked singularity discussed in the literature. In contrast with the GR case, neither of the two axial modes is ingoing or outgoing, which means that the stability analysis of these perturbations differs from the GR one.
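The small-argument behaviour of the modified Bessel functions quoted above is easy to check numerically (a minimal sketch; the constant ln 2 − γ ≈ 0.1159 is the standard next-order term of K_0):

```python
# Check: I0(u) -> 1 and K0(u) ~ -ln u as u -> 0, so the near-horizon solution
# is an affine function of z* (limit-circle behaviour, as stated in the text).
import numpy as np
from scipy.special import i0, k0

for u in (1e-2, 1e-4, 1e-6):
    print(f"u = {u:8.1e}   I0(u) = {i0(u):.8f}   K0(u) + ln(u) = {k0(u) + np.log(u):.6f}")
# K0(u) + ln(u) tends to ln 2 - gamma ~ 0.115932: K0 is logarithmic, hence the
# general solution behaves linearly in z* as z* -> -infinity.
```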
For the other endpoint (at spatial infinity), z* ≃ z → +∞, the asymptotic behaviours of the potential V and of the function w, given by (4.27) and (4.26), coincide with the GR behaviour at spatial infinity. In particular, V goes to zero and w goes to one, so that one recovers the usual combination of ingoing and outgoing modes, with constant coefficients B_1 and B_2. If the frequency contains a nonzero imaginary part, then one of the modes is normalisable, and this endpoint is of limit-point type. As we have already said previously, the analysis of axial perturbations in this theory is very different from the analysis in GR. The main reason is that we no longer have a distinction between ingoing and outgoing modes at the horizon. The choice of the right behaviour to consider might be guided by regularity properties of the mode. Indeed, if we require the perturbation² to be regular when z* → −∞, then we have to impose A_2 = 0. The problem turns into a Sturm-Liouville problem, which implies that ω² is real. A very similar problem has been studied in another context, where the authors showed that ω² > 0 when V > 0, which implies that the perturbations are stable. Here we can make the same analysis, and we expect the stability result to hold at least in the case where V > 0, i.e. when λ is sufficiently small, as explained in the discussion below (4.27). Let us close this subsection with a final remark. It is always possible to use, instead of the tortoise coordinate, a different coordinate z*, for example by choosing n(z) such that c = 1 everywhere. In this new frame, the potential is changed and can be written in a form involving a polynomial Q of order 28 in z, with a nonzero constant term, whose coefficients depend on λ. This potential is represented in the corresponding figure for different values of λ.

² The regularity requirement concerns the metric components themselves and not directly the Schrödinger function. The asymptotic behaviour of the metric components will be given in (4.46).

D. Axial modes: first-order asymptotic approach

In this section, we compute the asymptotic behaviours of h_0 and h_1 using the first-order system for axial perturbations given in (2.9), following the algorithm we developed in our earlier work. Using (2.9) with (4.12)-(4.14), we start by writing this first-order system in the z variable. At spatial infinity, the matrix M can be expanded in powers of 1/z; therefore, the two components of Y at infinity are immediately found to be a linear combination of two modes, and the asymptotic behaviour of the original metric variables h_0 and h_1 follows, where c_± are constants. Near the horizon, we change variables by setting x = 1/√(z − 1) and study the behaviour, when x goes to infinity, of the system (4.37), rewritten in terms of x. The algorithm then enables us to simplify the original system, here up to order x⁻¹, using a transfer matrix P built from the functions p_i defined in (4.43). The resulting system is diagonal and can be integrated immediately.

E. Polar modes

At spatial infinity, the same strategy applied to the four-dimensional polar system yields a combination of four modes, with integration constants c_± and d_±. This result calls for a few comments. First, we can see from (4.49) that the scalar modes are not propagating at infinity: even though it is possible to identify two branches corresponding to two sign choices, the corresponding modes do not contain exponentials, and the leading order depends on ℓ. This implies that there is no choice of z* such that s^∞_±(z*) ≃ e^{±iω̂z*/c_0}, with c_0 a constant speed independent of ℓ.
Such a behaviour for the scalar modes leads to the conclusion that defining quasinormal modes of the scalar sector in the usual way (through outgoing boundary conditions at infinity) is not possible for this solution. Second, one can compare the asymptotic behaviour of the scalar modes with what is obtained by considering only scalar perturbations on a fixed background; this is done in Appendix C, and we see that the two behaviours are very similar, even though they slightly differ. Third, one can observe that the 4-dimensional matrix above (4.50) is ill-defined in the GR limit where α → 0. In fact, the second line of the matrix tends to infinity in this limit. This could be expected, since in that limit there is no degree of freedom associated with the scalar perturbation, which is obtained precisely from the second line of the matrix. One could solve this problem by rescaling the scalar perturbation with the coupling and considering the vector built from K, the rescaled scalar perturbation, H_1 and H_0, similarly to what was done for the EsGB solution in (3.49).

Near the horizon

Near the horizon, we use the variable x = 1/√(z − 1), as for the axial modes. Using the algorithm, we find a change of vector Y = P̃Ỹ such that the associated matrix, which we denote M̃_x exactly as in (4.41), is diagonal. Solving the first-order system is then immediate, and the asymptotic expressions of the components of the 4-dimensional vector (written as functions of z) are combinations of four modes. We have named two of these modes s_i (for "scalar") because they contain a nonzero scalar contribution, as can be seen by expressing these modes in terms of the original perturbative quantities, using the explicit expression for the matrix P̃ provided by the algorithm. Indeed, the relation between each of the above modes and the initial perturbations involves integration constants c_i and d_i, together with constants whose expressions are given explicitly in Appendix B. This behaviour is similar to what we have obtained for the axial perturbations: one cannot exhibit ingoing and outgoing modes; instead, the perturbations have non-oscillating behaviours at the horizon.

V. CONCLUSION

In this work, we have studied the linear perturbations about black hole solutions in the context of two families of gravity theories involving a Gauss-Bonnet term in the action. In order to do so, we have extended our previous work to the case of Horndeski theories with a cubic dependence on second derivatives of the scalar field, since the Gauss-Bonnet models studied here can be recast in the form of scalar-tensor theories (we show explicitly, in Appendix A, the equivalence between the Lagrangians with the Gauss-Bonnet term and the corresponding scalar-tensor Lagrangians). For a general shift-symmetric Horndeski theory, we have written the equations of motion of the axial perturbations about any static spherically symmetric background in a simple and compact form. The axial perturbations represent a single degree of freedom, and their dynamics can be described either by a two-dimensional first-order (in radial derivatives) system or by a Schrödinger-like second-order equation, associated with a potential and a propagation speed. By contrast, the polar modes, which describe the coupled even-parity gravitational degree of freedom and the scalar field degree of freedom, are characterised by a four-dimensional first-order system. We then applied this general formalism to the two models considered here.
For Einstein-scalar-Gauss-Bonnet theories, one difficulty is that there is no exact background black hole solution: the solution can be computed numerically or analytically in a perturbative expansion. We have followed the second approach here, providing some details about the calculation of the lowest-order metric terms. We have then studied the perturbations, up to second order in the small expansion parameter (related to the coupling of the Gauss-Bonnet term). We have tackled the axial modes using both the Schrödinger reformulation and the first-order system approach, thus cross-checking our results. As for the polar modes, we have applied our algorithm to determine their asymptotic behaviours. We have found that both axial and polar modes have a rather standard behaviour. In particular, one can immediately see the existence of ingoing and outgoing modes at both boundaries, and one can easily distinguish, in most cases, the gravitational and scalar degrees of freedom at the boundaries.

In the last part of this work, we have considered the perturbations of the 4dEGB black hole solution, for the first time to our knowledge. The Schrödinger reformulation of the equations of motion for the axial modes is characterised by the unusual property that the propagation speed diverges at the horizon (using the tortoise coordinate as radial coordinate), even if the potential vanishes in this limit. We also find a critical value λ_c for the coupling beyond which the square of the propagation speed becomes negative. Studying the case λ < λ_c, we have found that the asymptotic behaviour at spatial infinity is very similar to that of Schwarzschild, but the modes are very peculiar near the horizon. These results are confirmed by our first-order approach. Moreover, concerning both polar and axial perturbations, it is not possible to apply the usual classification of modes into ingoing/outgoing categories near the horizon. Furthermore, we have shown that the scalar modes have a leading-order behaviour at infinity that strongly depends on the angular momentum, which seems to imply that no scalar waves propagate at infinity.

In summary, we have illustrated in this work how our formalism can be used in a straightforward and systematic way to study the asymptotic behaviours of the perturbations about a black hole solution in a large family of scalar-tensor theories. When the perturbations are well-behaved, this is a useful starting point for the numerical computation of the quasinormal modes. By contrast, if the perturbations are ill-behaved, it indicates that the solution, or even the underlying gravitational theory, might be pathological. In this sense, our general formalism can be used as an efficient diagnostic of the healthiness of some modified gravity theory, or at least of the viability of some associated black hole solutions.

Appendix A: Equivalence between Gauss-Bonnet and Horndeski Lagrangians

We can now use the expression of G as a total derivative to express the action (A1) as a Horndeski theory. Injecting this relation into Eq.
(A1) and integrating by parts gives the Lagrangian density L_G of (A14). After expanding the products, one can recognise several total derivatives: integrating by parts the terms containing these total derivatives and writing contractions of the Riemann tensor as commutators of derivatives, one obtains an expression involving the second derivative of f together with curvature terms multiplied by X ln(X). Finally, one can rewrite the term involving the Einstein tensor using its conservation, ∇_μ E^{μν} = 0, and writing contractions of the Ricci tensor as commutators of derivatives. This direct proof, which does not exist in the literature to the best of our knowledge, complements the indirect proof, based on the equivalence of the equations of motion, given in earlier work.

Appendix D: Equations of motion for the background and for axial perturbations

The variation of the shift-symmetric Horndeski action (2.1) yields the equations of motion for the metric and the scalar field. Due to the Bianchi identities, the equation for the scalar field is not independent from the metric equations and can therefore be ignored. For a metric of the form (2.2) and a scalar field profile (2.3), one finds that there are only four non-trivial equations, which are given in a supplementary Mathematica notebook. Given any background metric g solution to the above equations, one can introduce the perturbed metric g + h, where h denotes the linear perturbation of the metric. In order to derive the linear equations of motion that govern the evolution of h, one expands the action (2.1) up to second order in h. The Euler-Lagrange equations associated with the quadratic part S_quad of this expansion then provide the linearised equations of motion for h. In the following, they will be written under the form E = 0, where E denotes the functional derivative of S_quad with respect to the metric perturbation. In the Regge-Wheeler gauge, all the components of h for ℓ ≥ 2 are expressed in terms of the independent functions h_0 and h_1, as given in (2.5). In this gauge, one can show that the equations of motion reduce to the three equations E_{tφ} = 0, E_{rφ} = 0 and E_{θφ} = 0. These three equations depend only on F and G, since the terms proportional to P and Q and their derivatives vanish when the above background equations are imposed. They are given in the supplementary notebook as well. As there are only two independent functions, h_0 and h_1, one expects one of the above equations to be redundant. This is indeed verified by noting a relation between the equations and their derivatives, which shows that the two equations E_{rφ} = 0 and E_{θφ} = 0 are sufficient to fully describe the dynamics of the axial perturbations. Finally, these two equations can be formulated as a simple first-order system, given in (2.9).
The Missouri Democratic Party says it wants to be a part of the solution when it comes to healthcare, so it released a list of priorities it would like to see at the state level. According to Democrats, Healthy Missouri would offer more options and lower costs. The chair of the Missouri Democratic Party, Stephen Webber, and State Representative Crystal Quade, D-Springfield, announced the policy guidelines at a press conference in Springfield Wednesday afternoon, where they were joined by a Springfield mother who shared her story. "My first child was born with an unknown genetic disorder, and this means that doctors are unsure of what causes some of his disabilities," said Lexi Amos. Amos says her son went about a year and a half without healthcare after her husband was honorably discharged, until the ACA was implemented and they were able to enroll at a reasonable cost. "When you're developing, five months, six months without access to a speech therapist, an occupational therapist, when that's what you need, you are missing vital time," she said. It's to protect families like Amos's, the Missouri Democratic Party says, that it released its own set of solutions. "We spent a lot of time in Jeff City saying 'no' and trying to stop a lot of the things that came down," said State Rep. Quade. "But it's also equally important, if not more, that we offer solutions." Healthy Missouri outlines, among other things, policies that would allow individuals to buy into Medicaid at actual cost, fully restore the MORx program, force pharmaceutical companies to disclose information on price hikes, and ban them from giving gifts to physicians. It would also increase access to contraception for women under Medicaid and enact a full expansion of Medicaid under the ACA. "Medicaid has issues and there are a lot of things in it that we need to work on, but it's still the most effective healthcare option that we have in the state," said Quade. "There's very low overhead and if we allow folks to buy into that program, it will only help with costs because we will have more people in it." The plan would also create a prescription drug monitoring program, which Rep. Quade says the current government has failed to do. "It's going to be a contract possibly with Express Scripts and monitoring things on their end. But it doesn't actually provide the protection to doctor-shopping and for doctors to have access to that information," said Quade. "It's not a true PDMP to what we know it as throughout the rest of the country." Quade says an expansion of Medicaid would also create 26,000 new jobs in Missouri. She says Healthy Missouri is a framework Democrats will follow to shape legislation that will ensure more and better coverage for all. "While my son is important and amazing to me, he is by no means the only child who needs healthcare," said Amos. Quade says these items would be introduced as separate bills, and she plans to propose some of them in the next legislative session. The policies will of course also depend on what the federal government does regarding healthcare and will be amended accordingly. Jenifer Abreu of KOLR TV wrote this story.
What you need to know before investing in casinos and gaming. As the saying goes, the house always wins. But when investing in casino and gaming stocks, there's more to making money than just owning the house. The location and quality of a resort or casino, as well as the local regulatory environment, have a lot to do with how much money you can make as an investor. However, if you can bet on the right players, the sky is the limit. What is the casino & gaming industry? Casino and gaming companies own and operate properties that offer games of chance and sports betting. They can range from small racetracks or gaming halls with a few slot machines or gaming tables to megaresorts that cost billions of dollars, offer world-class dining and entertainment, and define the skylines of some of the world's most famous cities. Marina Bay Sands in Singapore is one of the icons of the city's skyline. Image Source: Las Vegas Sands. Throughout most of the world, gaming is a highly regulated market in which taxes are levied on the house's take and only a limited number of casinos are allowed. For those who can navigate this complex environment, the opportunity in gaming is enormous. How big is the casino & gaming industry? Legal gaming around the world generated an estimated $160 billion in revenue in 2013. When you include hotel rooms, food and drinks, and entertainment, an estimated total of $272 billion was spent at casino and gaming resorts in 2013, growing at a 5% compound rate over the prior five years. The two best-known gaming regions are the Las Vegas Strip, which generated $6.5 billion in gaming revenue in 2013, and China's Macau, which generated an incredible $45.2 billion in gaming revenue during 2013. While Macau may dwarf Las Vegas in gaming revenue, it's important to keep in mind that there is more to a resort and casino than the gaming floor. In 2013, more than half of the Las Vegas Strip's revenue was generated from hotel rooms, clubs, restaurants, and retail. How does the casino & gaming industry work? The concept of building a casino is simple enough: You build a facility that will attract customers to your hotel rooms, restaurants, shops, clubs, and gaming tables, and hopefully generate a good return on that investment. Resorts and casinos are generally built to attract a certain kind of customer, like the mass-market player or VIPs, high rollers who will gamble hundreds of thousands of dollars per trip. VIPs are generally less affected by economic swings than the mass market but are also lower margin, because they can demand kickbacks and freebies from the casino. The distinction between the VIP and mass markets is especially sharp in Macau, where junket operators act as a conduit to bring VIPs to casinos, loan them money for gambling, and often operate their own gaming rooms. These arrangements come with financial payments to the junket operators as well as extravagant complimentary services for VIPs. While these costs lower margins, the VIP segment accounts for around 70% of the gaming market in Macau, so it's a market operators can't ignore. One thing that's different about resorts and casinos versus other businesses is when cash flow occurs. A large outflow happens when the property is being built. Once the building is completed, gaming and other activities have fairly high cash margins. Investors use earnings before interest, taxes, depreciation, and amortization -- or EBITDA -- as a proxy for the cash flow generated each quarter.
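As a rough, purely hypothetical illustration of how that proxy gets used (the numbers below are invented; the roughly-20%-of-cost figure is the one cited immediately below):

```python
# Hypothetical back-of-envelope: EBITDA as a proxy for a resort's cash payback.
build_cost = 4.0e9                       # invented $4B megaresort budget
ebitda_margin_of_cost = 0.20             # ~20% of original cost per year (see text)

annual_ebitda = build_cost * ebitda_margin_of_cost
payback_years = build_cost / annual_ebitda
print(f"Annual EBITDA: ${annual_ebitda/1e9:.1f}B -> payback in ~{payback_years:.0f} years")
```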
A well-designed resort and casino can generate in excess of 20% of its original cost in EBITDA each year. The Las Vegas Strip is one of the most iconic gaming locations in the world. Photo: Jon Sullivan via Wikimedia. What are the drivers of the casino & gaming industry? The success or failure of a resort or casino comes down to three major factors: location, the regional and macro economy, and the design and operations of the facility. The location of a casino is the first thing investors should look at, because it defines the regulatory environment as well as the market opportunity. The regional and macro economy acts as the tide that raises or lowers the overall opportunity for gaming companies. Finally, design and operations are what separate a specific casino within a given location. As an example of the importance of location, Macau is by far the largest gaming market in the world; with only six concessionaires competing for customers, there's plenty of cash to go around. On the other side of the spectrum is Atlantic City, which once had a monopoly on East Coast gaming but is quickly going bankrupt because Pennsylvania, New York, Delaware, and other surrounding states have legalized gaming and given consumers more options. The economy is clearly a driver, and this can cut both ways for the industry. When the economy is booming, companies planning conferences and consumers planning vacations are willing to increase travel and gaming expenses, to the benefit of resorts and casinos. But when a recession hits, these are among the first expenses to be eliminated. Las Vegas saw this during the Great Recession, during which a few of the best-known gaming companies in the world nearly went bankrupt. The design and operation of a resort and casino have everything to do with the kind of clientele a company can attract and how much those people will pay to be there. Resorts trying to attract high rollers willing to spend money on both gaming and non-gaming activities will often be built with grand attractions like the fountains at the Bellagio or the Grand Canal at The Venetian. These iconic features become a calling card for resorts and can boost traffic, spending, and profits for gaming companies. The impact of superior design and operations can be significant. For example, Wynn & Encore Las Vegas were more profitable than neighbors Circus Circus, Monte Carlo, Excalibur, New York-New York, Luxor, and Mirage combined. Companies that can get these three factors right can create long-term value for shareholders, but getting even one part wrong can spell disaster. Casinos and gaming stocks are a risky bet, but at least on the stock market the house is on your side.
Earlier today, local news station NY1 stumbled upon a "mysterious" statue of a mostly naked Hillary Clinton "depicted with horse hooves, and standing on what appeared to be printouts of emails. The statue also featured a Wall Street banker pressed against her left breast." So of course, some shit went down. It was a lovely day in the city. One woman apparently decided that destroying the statue was the best way to protect Hillary Clinton's dignity, much to the dismay of a young bystander who seems to have liked the statue just the way it was. After the woman called the police to report "an obscene statue," a struggle ensued. From NY1: It stood outside the entrance to the Bowling Green subway station for a couple of hours without incident, but a dispute eventually broke out when a woman knocked it down, kicked it several times and sat on it. "Why are you sitting on it?" one man said. "Because I find this offensive!" she shouted. "Freedom of speech!" the man shouted back. A perfect microcosm of the United States, boiled down to a few mere seconds. NY1 then reports that the statue was ultimately taken away "by an unknown person." Was this mysterious statue puller Hillary Clinton herself? It's hard to say for sure, but—almost certainly, yes. You can watch the full video over at NY1 here. I cannot possibly recommend it enough.
They're 14 years apart. They are the past, present and -- in Justise Winslow's case -- perhaps the future of the Heat franchise. And, if Winslow becomes the player that many expect him to become, he -- and we -- may look back at this period as some small part of Dwyane Wade's legacy. Wade, who was excited when Winslow slipped to the Heat, has shown a willingness to share his knowledge with the rookie from Duke, as was evident again following Friday's practice. Together, with assistant coach David Fizdale watching, they worked against each other in the post, with Wade stopping after most plays to offer the taller Winslow some pointers. "As his career develops, hopefully he's able to do multiple things on the floor, but right now there's gonna be certain things Coach wants him to do, and some of those things I'm good at," Wade said. "I'm just passing down knowledge to someone who I think could be good at things that I have strengths at. It's gonna take a while, but if he figures it out at 21, he's ahead of the curve. I figured it out at like 27." Wade said he didn't really have an instructor for post play until Fizdale -- who had tutored Joe Johnson in Atlanta -- joined the team in 2008, for Wade's sixth season, and then Wade would work against LeBron James down there starting in 2010. Winslow said he got some post opportunities at Duke, though he had to fall in line behind Jahlil Okafor there. He is trying to add something beyond the standard repertoire of primary move and counter. This could be a valuable weapon for Winslow, especially until he refines his outside shooting. And Winslow can be a valuable teammate for Wade, as the latter tries to extend his own contending window. "All of us are where we're at because someone before us helped us," Wade said. "They helped by letting us sit there and watch film with them or having conversations with them. If he's a student of it and he really wants to know, I'm a pretty decent teacher in certain areas." That includes the off-the-court stuff too. Wade is a master of the photo shoot by now and, after their practice and media responsibilities on the practice court were finished, they walked down to the interview room for a dual photo shoot for Sports Illustrated. Winslow joked that he should have fixed his hair. Wade reminded Winslow that his hair "always looks like that." After 10 minutes of some photos together, they finished the shoot, to the veteran's satisfaction. "You got some good ones to choose from," Wade said to the photographer. Same goes for Winslow, when it comes to a mentor.
Effect of interpass temperature on microstructure, impact toughness and fatigue crack propagation in joints welded using the GTAW process on steel ASTM A743-CA6NM Mainly due to their great toughness, martensitic stainless steels are used for manufacturing hydraulic turbines. However, these steels have some restrictions regarding regions repaired by welding, mainly due to the formation of non-quenched martensite, which causes a reduction in toughness. Considering the repair of hydraulic turbines, there is great interest in developing welding procedures that increase impact toughness and avoid post-welding heat treatment (TTPS). This study aims to analyse the influence of interpass temperature on microstructure, impact toughness and fatigue crack propagation in multipass welded joints on martensitic stainless steel CA6NM, using AWS410NiMo filler metal and the gas tungsten arc welding (GTAW) process. In the sample with an interpass temperature of 80°C, an influence of the interpass temperature on the formation of ferrite was observed, with intragranular formation in the two-phase field, while in the sample welded at 150°C the formation of ferrite occurred mainly in the single-phase field. The change in the formation of ferrite at the low interpass temperature promoted an increase in impact toughness and a decrease in fatigue crack propagation when compared with the sample welded at the higher interpass temperature. The results obtained indicate that the TIG process is an excellent alternative for the repair of CA6NM steel, with a significant influence from the interpass temperature.
An interactive approach for the design of an Italian fast medical support ship as consequence of world emergency due to Sars2-Covid 19

In spring 2020 we faced a completely new type of world crisis, caused by the worldwide spread of a pandemic disease due to a new type of coronavirus. Essentially all the countries in the world have had to deal with the need to offer medical aid to people with symptoms, and the particular type of medical treatment required has caused serious problems for hospitals, blocking or slowing their capability to assist other pathologies, diseases or needs. In all those situations, what is essentially needed is the capability to provide quick help, the transport of materials for medical aid, and the transport of mechanized tools for police missions and/or support to populations. A ship capable of moving quickly to a port to increase local hospital capacity could be an important help. Countries like the USA already have long experience with hospital ships or support ships; in Italy, a solution has been arranged by converting a ferry into a hospital ship to support less serious cases or other needs. Starting from these considerations, the authors have investigated the possibility of refitting an existing unit to obtain a ship capable of giving partial assistance in those situations. Key to the project is the interactivity between the re-design and the use of an existing ship, in order to obtain a result in a short time. Examining the main characteristics suitable for a ship with this aim, the authors made a critical examination of the state of the art of ships for support and assistance, considering the various available solutions, and then studied the customization of a ship summarizing all those aspects, with an operational speed above 35 knots if required, following an interactive approach to the new design between naval architecture and medical/support aspects.

Pandemic situation

As is well known, the COVID-19 emergency hit countries all over the world, with a severe impact on the Mediterranean area, especially on countries such as Italy, France and Spain. In all those countries, given the intense critical situations that developed, the capability to offer medical aid at very short notice is very important, and we have witnessed the incredible efforts, in all countries, to set up emergency hospitals in a very short time, not only to treat emergencies due to COVID-19, but also to release pressure from regular hospitals, taking care of other situations. It is the common opinion of several scientists that in the future the world community will probably have to face again the possibility of such pandemics spreading. In this scenario, it is possible to define the elements of a different kind of ship, operating to help and support in those situations. The idea of using a ship for emergency management is not new. Beyond wartime use, ideas for possible employment following earthquakes had already been advanced in 1986. More recently, several studies have assessed the possibility of developing models of intervention in different scenarios, such as patrol policing or rescue activities. Zhao et al. proposed a novel approach to patrol ship configuration and suggested guidelines for fleet management, while Deng et al. proposed a genetic algorithm approach in order to optimize patrol activities.
Hospital ships have been used in humanitarian missions or after disasters, thanks to their rapid intervention capacity on the site of a local emergency, generally in poor or underdeveloped countries where the land forces were insufficient. However, there are no scientific papers relating to the use of ships in the management of pandemics or large-scale health emergencies. For the management of epidemics and pandemics, in fact, the ship can be an important resource, even in rich and developed countries, thanks to the possibility of completely isolating cases, symptomatic or asymptomatic individuals, or simply people at risk, overcoming the problems observed with isolation in hotels or hospices.

Italian experiences

In recent years, Italy has been operating in different scenarios, deploying a fleet composed of warships proper and of units designed for roles more oriented to supporting populations. In particular, we can recall the support to the Haitian population with the CAVOUR and, more recently, the support to refugees from the North African area and their transportation from the isle of Lampedusa to various ports in Italy. In all the above-mentioned situations, the speed of intervention and the capability to carry a reasonably large amount of material are more important than the military power of the ship itself, which is usually guaranteed by other units, such as the air force or the naval air component carried by specialized ships, according to the requirements that we are going to illustrate. But the situation the authors are examining here is completely different: what is needed is a ship that can perform quick support operations in the event of natural disasters (earthquakes, flooding) and sanitary emergencies, carrying a reasonable amount of first-aid goods such as medicines and food, and, as the main topic, a ship that can operate as an emergency hospital, giving support to local installations. Additionally, the situation on several large cruise ships with many people infected could require the capability of a fast evacuation of people from the passenger ship to another ship, in order to reduce the risk of infection for the several thousand passengers usually on board.

Methodology and methods

The methodology used to approach the project can be summarized as the identification of the main characteristics that the ship should have and, considering the necessity of obtaining a result in a short time, the search for an existing ship with matching characteristics, capable of being transformed/refitted quickly. It is possible to outline the main characteristics that this ship must have:

- a very high top speed, to reach the place of operations quickly;
- the possibility to operate at medium speed, in order to save fuel;
- quick loading of roll-on/roll-off cargo;
- wide spaces for the storage of food and provisions;
- comfortable accommodation for first aid and hospital functions;
- the possibility to carry at least 2 helicopters for medical assistance.

Operational requests to fulfill

As described above, it is possible to summarize the main characteristics of a ship for this kind of intervention in the following way.

Very high top speed

It is well known that in an emergency situation it is important to be at the place where assistance is needed in a very short time.
This requires an adequate speed. In this study, we consider a scenario basically located in the Mediterranean area, or the so-called "Enlarged Mediterranean", so it is reasonable to consider it necessary to sacrifice a certain amount of range in order to reach a higher speed. A speed of 35 knots, with a peak of 40 knots as top speed, can be considered reasonable.

Medium speed to save fuel

In addition to the above-mentioned situation, the authors considered that it could be very useful to have a medium "cruise speed" of 20 knots, which can be reached while maintaining an affordable fuel consumption and using (which basically means adding hours of work to) a propulsion system that is easy to maintain and repair.

Roll-on/roll-off cargo

It is particularly important to move "roll-on/roll-off" cargoes easily and quickly; these can be composed of military vehicles, ambulances, and commercial trucks with food, medical aid, building materials, etc. The deck of the ship must therefore be strong enough to support heavy loads, and enough space for easy manoeuvring is necessary. These characteristics must also take into account that it can be impossible to moor the ship in the traditional way, with the transom against the pier, so a side door is also needed for the case of side mooring.

Storage for cargo

In this situation it is important, as already said, to have room to carry aid, food, etc., so the space must be prepared to carry single items already packed for distribution.

First aid, hospital, recovery

The above-mentioned emergency situations can require hosting a large number of people and quickly transferring them away from the emergency area. It is important to have enough space for first aid, for an emergency hospital, and also simply for hosting and accommodating people. It is important to underline that the basic concept of the design was to realize a ship, possibly reusing an existing one in order to save time, and to obtain a unit with multipurpose capabilities able to give fast, immediate help in critical areas. The ship must also have the capability of a "hospital ship", with two separated areas on board: an "intensive care unit" with all the technical items needed to face contagious diseases, and another area offering normal hospital assistance. In this way, the possibility to operate at both levels, according to the needs of the area of interest, will be guaranteed. It is obvious that a real hospital ship such as the USNS Mercy or the Spanish "Esperanza del Mar" will offer much more comfort; in the same manner, a complete cargo ship will be able to carry a much larger load, but the authors' intention was to reuse an existing ship at an affordable cost.

Helipad

A helipad for touch-and-go operations of 2 small or 1 large helicopter must be provided on board.

Case study - existing project to modify

The goal of this work is, considering all the requirements explained above, to suggest a solution that could fulfil all of them, using a hull that could be available in a short time. In the '90s, four ships of the MDV 3000 class (Figs. 1, 2) were built in Italy by Fincantieri for passenger and cargo transportation. Two of those ships were sold for demolition, but the other two are currently operating in the Mediterranean area for passenger transportation. The authors chose to consider those ships for several reasons:

- the fact that 2 ships are currently operating could save time for a new transformation;
- a previous work on these ferries suggested a possible transformation into a support ship.
The present paper considers a further modification, inspired by the COVID-19 pandemic. The following pictures (Fig. 3 and Fig. 4) show the original General Arrangement, using images published on the Internet and in various magazines.

Case study - results and discussion

The authors studied a proposal with a high degree of interactivity among several aspects of the design, together with the possibility of realizing the modifications in a short time.

Propulsion

The authors started from the examination of the propulsion system. It is important to recall that, with the current propulsion system, the ship can reach a speed of 40 knots operating with the 4 diesel engines and the 2 gas turbines (TAG), but can also navigate at 20 knots using diesel propulsion alone. Considering an operational area covering the Mediterranean, or even the so-called Enlarged Mediterranean, a speed of 20 knots is enough to reach any operational scenario in less than 4 days of navigation. The waterjets give excellent manoeuvrability and limit the full-load draft to 3.9 m, making it possible to operate also where the final destination (mooring place) is in shallow waters. The authors therefore decided to retain the original propulsion system. The examination of the garage decks showed that they were dimensioned to support 44 t trucks, so the only change could be the opening of one side door (or 2 doors, one on each side) in order to make it possible to disembark light vehicles easily with the ship moored alongside. As for the General Arrangement, the main considerations are described in the following paragraphs.

Top and wheelhouse deck arrangement

It is important to obtain a helicopter landing area on the top deck for "touch and go" medical support activity. The authors therefore designed an area on the top deck, checking its feasibility and verifying the possibility of installing stiffeners below the deck without problems. The stability will not be affected by the weight of the helicopter and by touch-and-go operations. The most interactive work between the design and the medical needs has been performed on the accommodations: considering that the ship should be able to operate as a "people and material carrier" and also as an emergency recovery unit for people or for their transportation, the authors decided to separate the areas of each deck into main vertical zones, not only for firefighting purposes, but mainly to obtain separation among the different usage areas. This is in order to guarantee the possibility of proper disinfection and also to realize and maintain areas with "negative pressure", to avoid contaminated air spreading from the various hospitalization parts. In fact, in order to dedicate a part of the ship to hospital use, including infectious diseases, this whole area must be completely separated from any other part of the ship in terms of ventilation systems, treatment of black and grey waters, etc. In all the General Arrangement figures, the red colour is used to show the parts of the ship that are completely renewed. The wheelhouse deck, as shown in Fig. 5, will be modified to obtain accommodations for operators, in double-bed cabins, to support the general organization of the ship (Fig. 6).

Main and upper deck arrangement

These are the most important decks of the ship. The upper deck has been dedicated completely to the hospitalization of people, arranged from aft to fore in order to realize as much "physical separation" as possible from infected people.
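A back-of-envelope check of these transit times (the distances below are hypothetical, chosen only to represent short-, medium- and long-range Mediterranean scenarios):

```python
# Rough transit times: distance (nautical miles) / speed (knots) / 24 h per day.
scenarios_nm = {
    "short range (500 nm)": 500,
    "medium range (1000 nm)": 1000,
    "Enlarged Mediterranean (1900 nm)": 1900,   # hypothetical figure
}

for label, d in scenarios_nm.items():
    times = {v: d / v / 24 for v in (20, 35, 40)}   # 1 knot = 1 nm/h
    line = "   ".join(f"{v} kn: {t:4.1f} d" for v, t in times.items())
    print(f"{label:33s} {line}")
# At 20 knots even the long-range scenario stays under 4 days, as stated above.
```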
As for the General Arrangement, the main considerations are described in the following paragraphs.

Top and Wheelhouse deck arrangement
It is important to obtain a helicopter landing area on the top deck for "touch and go" medical support activity. The Authors therefore designed an area on the top deck, verifying the feasibility of installing stiffeners below the deck without problems. The stability will not be affected by the weight of the helicopter and by touch-and-go operations. The most interactive work between design and medical needs was performed on the accommodations: considering that the ship should be able to operate both as a "people and material carrier" and as an emergency recovery and transportation unit, the Authors decided to separate the areas of each deck into main vertical zones, not only for firefighting purposes but mainly to have separation areas between zones with different uses. This guarantees the possibility of proper disinfection and of realizing and maintaining "negative pressure" areas, preventing contaminated air from spreading from the various hospitalization parts. In fact, in order to dedicate a part of the ship to use as a hospital, including for infectious diseases, this whole area must be completely separated from any other part of the ship in terms of ventilation systems, treatment of black and grey waters, etc. In all the General Arrangement figures, the red colour is used to show the parts of the ship that are completely renewed. The wheelhouse deck, as shown in Fig. 5, will be modified to obtain accommodations for operators, in double-bed cabins, supporting the general organization of the ship (Fig. 6).

Main and Upper deck arrangement
These are the most important decks of the ship. The upper deck has been dedicated completely to the hospitalization of people, from aft to fore, in order to realize as much "physical separation" as possible from infected people. In terms of design and interactivity among the ship systems, this solution offers the possibility to install all the new machinery and piping on the aft part of the upper deck, minimizing the impact and making these systems easy to maintain: operators and technicians can minimize contact with the hospital area. There is also an interesting historical note about this solution: in old sailing ships the "lazarettos" for the sick were in the aft, "downwind" part of the ship, so that natural ventilation carried clean air from the healthy crew towards the sick. Furthermore, the decks directly above and below the hospital areas contain no spaces with fire risk: the cooking area is on the main deck, but far from the hospital. In this way all the fire insulation (A-60 for those areas) can be realized, offering a higher degree of protection. As stated at the beginning, the Authors did not aim at designing a hospital ship but a "support ship", so all the accommodations are designed to offer a first-aid solution. The "intensive care unit" area is designed to offer at least 10 beds. It is important to remember that, at the beginning of the COVID emergency, the German health system was credited with a ratio of about 1 intensive-care bed per 3,000 people, while other countries with efficient health systems, like Italy or France, were credited with 1 bed per 8,000-10,000 people. With these ratios, an arrangement of 10 beds can be considered as providing first aid for 30,000-100,000 people. The Authors followed an interactive and multidisciplinary approach to the project, examining data from the design of a hospital ship and from the World Health Organization in order to obtain parameters for the arrangement of the hospital accommodations. In line with the literature on the importance of room ventilation for infectious diseases, the Authors modified all the interiors, introducing separations between different environments so that ventilation systems can maintain an appropriate pressure gradient and avoid contamination. Special care must be dedicated to the exhaust of the ventilated areas, which must be conveyed to a safe part of the ship, preferably below the water level, to avoid any risk of contamination. All the areas must obviously be equipped with state-of-the-art technologies for remote control and remote diagnosis, in order to guarantee a fully interactive approach between the ship and the structures ashore. In particular, the arrangement of the intensive care unit on the aft part of the upper deck has also been studied considering the need for primary COVID-19 therapy using high-flow oxygen therapy with breathing masks, which requires large quantities of medical oxygen. The oxygen can be stored in bottles and distributed to the hospital area through an appropriate piping system; the position of the hospital on the upper deck, far from the accommodations, allows the piping to be realized easily. It is possible to install a storage area on the garage deck, so that new bottles can easily be brought in by truck. The Authors also want to underline the possibility for the ship to operate as a completely independent unit, producing high-quality oxygen on board. In this case an oxygen production plant could be installed on the lower garage deck (Fig. 7), using the available volume. The cell-based solution is the one studied for submarines, in order to minimize the occupied volume.
The description of the GA shows the possibility to modify the other areas of the upper decks to obtain more beds. The fore part of the upper deck can then be dedicated to the accommodations of doctors and paramedics. Obviously these accommodations must be realized so as to give comfort and avoid infections, so the General Arrangement must be considered a proposal: as the seriousness of the pathologies treated decreases, the number of patients treated on board can be increased, since they require less assistance. As a parameter for the present proposal, the Authors assumed a ratio of 1:1 between patients and hospital staff, thus considering 10 ICU beds + 40 normal beds, with corresponding accommodation for 40 people as medical staff and another 40 as "humanitarian help and support" personnel. The main deck, in the Authors' study, will operate as a supporting deck: in case of evacuation there is a large area dedicated to 140 seating places, plus 96 additional ones on the left side. A part has been dedicated to an "emergency sleeping area", easy to divide between women and men, for elderly people, pregnant women and, in general, people needing more comfortable accommodation. Considering the various scenarios, in this study a large amount of space is left free, in order to have the possibility of embarking more people than expected in an emergency and giving them shelter, obviously within the maximum number of people allowed by the emergency equipment on board. In effect, it was the Authors' intention to follow a line of "modularity" for the General Arrangement, so that the ship can be quickly adapted to different scenarios. The last part of this deck is dedicated to the accommodation of the ship's crew, and no refitting is needed in this area beyond the normal maintenance required by the interiors.

Garage decks
The garage decks are one of the most interesting areas in terms of interactivity with other aspects of the technology: the amount of space allows the realization of a series of support systems for the various activities of the ship. The first changes to the garage decks are the two side doors, for easy movement of light vehicles. It is important to remember that the decks were structured to carry trucks of 44 t each, with a clear height of 4 m, so it is possible to use them to embark ambulances, heavy trucks with medical aid, or a complete demountable hospital to be assembled by the Army in any area, close to the ship or far from it, enhancing the support capability. As can be seen, the area of the vehicle decks is very large, so they can be used for embarking vehicles or general loads. It could also be possible to reuse some parts as accommodation for shipwrecked people or refugees, even if the Authors preferred not to consider this option, regarding this proposal basically as a ship capable of quickly carrying large amounts of aid vehicles, containers on wheels and/or other independent means of transport. The Authors have already described the possibility of adapting part of the lower garage as an area for the production and stowage of oxygen for therapy (a patient can need several litres of oxygen on a daily basis). The selected area is in the fore part of the lower deck, to make it possible to realize the necessary ducts for ventilation and air supply. Another area of the lower garage could be equipped to house the bodies of deceased people in conditions of sanitary security (low-temperature cells) to avoid contamination.
Impact on displacement
A refit of this kind must consider all the changes in the structure, weight and stability of the ship, so the Authors checked this aspect. As is well known, the MDV 3000 were built by Fincantieri with great attention to weight, in order to reach the promised high speed. The Authors assumed that all the furniture in the areas to be remodelled will be dismounted and that the new fittings will be realized with different materials. The initial weight of the passengers and personal effects (1,784 people) was close to 170 t. In the Authors' intentions the total capacity of the ship can be considered as follows: 100-120 people carried as "staff" (medical personnel and operators), plus 60-100 patients (depending on the arrangement of the available areas) and 100 refugees in seats or accommodations. In total, according to the SOLAS value for the weight of a person (75 kg), this gives about 40 t of "people weight" to carry. This leaves a weight margin of more than 130 t, largely enough to compensate for the weight added by the new accommodations. It is also important to consider that the ship will operate in two different ways: going to the place of intervention, the ship will be carrying goods, trucks, etc.; leaving the place of intervention, moving refugees and staff away, most of the load will have been consumed or left behind.

Economical/cost evaluation
It is not easy to estimate the cost of the transformation, basically because the available literature concerns the construction costs of hospitals ashore, which have a different purpose. Moreover, in an emergency situation the cost cannot be evaluated from the traditional point of view of possible future income, but only as an "emergency solver". However, the Authors examined some data on the construction of passenger ships, where the cost of accommodations can usually be estimated at 6,000-10,000 euro per square metre, and the cost of a comparable newly built Coast Guard unit, where for a 100 m ship the cost is around 80 million euro. With these data it is possible to estimate the cost of the transformation at about 40 million euro per ship.
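The arithmetic behind the weight margin can be made explicit with a short sketch. The Python fragment below is illustrative only: the head counts are the upper ends of the ranges quoted above, and the 40 t people-weight budget used in the text is conservative with respect to the bare 75 kg SOLAS computation (320 people amount to only 24 t), leaving room for personal effects and stores:

    # Weight-margin estimate for the proposed refit (figures from the text).
    SOLAS_PERSON_KG = 75                            # SOLAS standard weight per person
    original_people_t = 170.0                       # 1,784 passengers + effects in ferry service
    people = 120 + 100 + 100                        # staff + patients + refugees (upper bounds)
    bare_people_t = people * SOLAS_PERSON_KG / 1000 # = 24 t at the bare SOLAS weight
    budget_people_t = 40.0                          # conservative budget used in the text
    margin_t = original_people_t - budget_people_t  # = 130 t left for the new fittings
    print(bare_people_t, margin_t)

Even with the conservative 40 t budget, more than 130 t remain available for the new accommodations, supporting the conclusion that the refit is compatible with the original weight-sensitive design.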
Conclusions
As can be seen from the General Arrangement, the idea is not to create a "completely new" kind of ship, but to smartly reuse something that has already been built or designed, making an interaction among several different needs: fast aid in case of pandemic; medical support to critical areas where there is no hospital large enough to treat more than 10-15 patients; and immediate transportation of trucks with massive amounts of goods, medicines, etc. The result is an instrument that, travelling by sea, can operate in the Mediterranean (in this study) and obviously anywhere in the world. The main point of the study, which considers a "case study" of an existing unit, is to underline the advantages of a new type of approach, in which the interactivity between the needs created by a completely new world emergency and the medical aspects can be applied to a type of ship, the Ro-Ro ferry, that can be readapted to offer easy and efficient support. The main limitation that may appear at first, the limited number of beds, is compensated by the fact that the ship is an independent system, capable of moving, and, in the case of Ro-Ro ferries, capable of carrying huge amounts of payload, such as medical aid, thus also supporting the facilities ashore in case of their overload. The Authors considered the limitations of using as a starting point a ship of "only" 145 m in length: obviously a Ro-Ro ferry 200-250 m long and of larger tonnage could host a greater number of patients, but the advantage of a 145 m ship, with its consequently limited draft, is the possibility to moor, as an independent unit, in small harbours or at small islands, where health facilities are usually limited and can easily be overloaded by a pandemic event.
Country-by-country reporting: ensuring confidentiality and protection of information of multinational groups of companies
Introduction. Confidentiality and protection of the information contained in Country-by-Country Reports is one of the most important aspects of implementing the provisions of Action 13 of the Base Erosion and Profit Shifting Action Plan (BEPS Action Plan) in the national legislation of countries around the world. Problem statement. Considering Ukraine's commitments to the Organisation for Economic Co-operation and Development (OECD) in the framework of combating tax evasion by multinational groups of companies, particular relevance attaches to the study of model legislation on the confidentiality and protection of information that is the subject of voluntary automatic exchange between countries, and of approaches to its implementation in national legislation. Purpose. A complementary analysis of the institutional basis for ensuring the confidentiality and protection of information that is the subject of exchange between OECD member countries within the framework of the BEPS Action Plan. Materials and methods. The research is based on a combination of general scientific methods, methods of comparison and an empirical approach. Results. The need to harmonize domestic legislation with the OECD regulatory requirements governing the confidentiality of taxpayer information (including information exchanged in accordance with international agreements) has been substantiated. Conclusions. The research demonstrates the need for further improvement of the institutional provision for the protection and confidentiality of tax information, including information exchanged in accordance with international agreements.
Workers at a city centre project have unearthed a network of rare wooden water pipes during a pre-development dig in the Capital. Fifteen pieces of the elm piping were discovered during excavation work at George Square, where a state-of-the-art underground heating system is being built by the University of Edinburgh for its new student centre. The wooden pipes were part of an underground network, built in 1756, to supply drinking water. It ran from the Comiston area of the Capital to the Royal Mile. Archaeologists called in to examine the find – used to bring the "sweet water of the country to the centre" – describe the pipes as being in "very good" condition. The pipes were usually made from hollowed-out elm tree trunks, with holes bored into them at each end to allow the passage of water. Lindsay Dunbar, Fieldwork Project Manager for AOC Archaeology Group, which carried out the excavation, said the pipe fixings of metal bands and lead fittings were "very typical" of wooden pipes used across the UK in the 18th century. Lindsay added that the pipes were "surprisingly well preserved", offering a unique insight into the design of the network more than 250 years ago. In 1621, an act of parliament gave the go-ahead for the town council to construct a pipeline to carry water the three miles from Comiston Springs to the city centre, but almost 50 years of bitter disputes over a proposed 'water tax' meant it took nearly half a century for the project to start. A further four-year delay followed before the work was carried out, and even when the network was up and running, only the most affluent areas of the Capital would have had access to the supply. The general public would still have had to make use of "water stands" in the town centre, carrying buckets of water by hand. Although similar pipes have been found before across Edinburgh's Old and New Towns, this is the first instance of a section being archaeologically excavated in recent times. The City of Edinburgh Council's museums have several examples of these pipes within their collections, including examples on display at the Museum of Edinburgh. Bill Elliot, National Stakeholder Manager at Scottish Water, said: "This is an amazing find for our customers in Scotland's capital who have the opportunity to see first-hand how water was distributed in years gone by. These pipes made up the first dedicated water supply in Edinburgh, and when the pipes were brought into use the town council described how they would 'bring the sweet waters of the country to the centre'. Similar pipes were discovered in 1894 in West Register Street. Made from hollowed-out elm, they date from the 17th to 18th centuries and were used to supply water to Edinburgh's Old Town from springs like those out at Comiston."
Featherweight boxer Oscar Gonzalez was declared brain dead Sunday in Mexico, less than 24 hours after a 10th-round knockout loss to Jesus Galicia in their Televisa-televised main event Saturday evening in Mexico City. Gonzalez was rushed to the hospital immediately following his loss to Galicia, in what was billed as a WBC silver featherweight title fight. The fighter is currently being kept alive only with the aid of machines, with his family due to arrive from Tepic on Sunday afternoon before the fighter is officially pronounced dead. Gonzalez is just 23 years young. The featherweight held a record of 23-3 (14 KOs) and was in the midst of a three-fight win streak prior to Saturday evening. The run included an upset points win over former 122 lb. titlist Rico Ramos last April.
Seahawks receiver Golden Tate scored the only two touchdowns in Seattle's 14-9 win against the Rams, but after the game he was asked more about his taunting than his touchdowns. Tate said on NFL Network after the game that he needs to "play a little smarter" and not draw penalties. Seahawks coach Pete Carroll greeted Tate on the sideline and gave him a scolding, and Tate said Carroll told him that if he wants to be considered one of the NFL's great receivers, he needs to conduct himself like a professional. Tate was the only thing that worked for the Seahawks' offense on Monday night: when passing to Tate, Russell Wilson went 5-for-7 for 93 yards, but on all of his other passes Wilson was 5-for-11 for 46 yards. So it's a shame that Tate ended up drawing the wrong kind of attention to himself with the penalty, which detracted from some of the credit he got for his excellent game. For players like Tate, and for Broncos cornerback Dominique Rodgers-Cromartie, who started celebrating 71 yards from the end zone on his touchdown Sunday, the player to emulate is Barry Sanders, who scored 109 touchdowns in his NFL career and never felt the need to showboat on any of them.

The idiot almost stepped outta bounds. The whole time he was waving at the safety I was praying he was gonna get hit and fumble the ball out of the back of the end zone. Don't worry, "Golden", you won't ever be considered to be good.

Enforcing the penalty on the play, and from the spot of the foul (or from the goal line, if it occurs in the end zone), would make a lot more sense than on the ensuing kickoff, since most kickers can seriously dilute the yardage disadvantage. Of course, being smart enough not to hurt your team would be even better.

I'm all for teaching young kids good sportsmanship, but these players are all big boys who are being compensated very handsomely to play a game, so let them have some fun. And on the field, they're constantly talking smack throughout the game anyways, so I don't see why it matters to suppress taunting.

Barry Sanders was a special player, no matter what team you root for. Watching him move and groove, cut and duck, and bring it home was a sight to embrace. Smart guy, and no one could handle him; he was one great RB!

Not surprised he did this at all. Classless franchises beget classless players from classless schools! He tries that sheet on the better teams and someone will put the hurt on him in a game.

I was hoping he'd pull a Leon Lett and get the ball knocked out before the end zone, or better yet, step out of bounds. Would have been karma. He'll get what's coming to him soon. Just watch some tape of Larry Fitz or Jerry Rice. They'll show you what (not) to do.

Yes, he almost stepped out of bounds. Hilarious! No worries though. Taunt away, Golden…you'll never be one of the greats anyway. Name should be changed to Wooden Tate.

Isn't he paid to do that?? It was nothing spectacular. Just do your job and grow up….

This guy always looks like he's chirping during the games. I would think you can do that if you're an elite WR. He's just average.

Golden Tate….to be considered and mentioned with a player like Calvin Johnson or Dez Bryant, you're not gonna just have to play smarter, but you'll have to play stronger, taller, faster, quicker and better. Yeah….I don't think you'll ever be mentioned with those guys in the same breath.

A purely selfish play. He knew it was taunting when he was doing it. He chose to swap a 15-yard penalty against his team for 5 seconds of fun for himself.
Immature and selfish. I'm hoping one of the linebackers on the team will have "explained it" to him after the game in the locker room. And every DB in the league promises to take Tate's head off if given the chance. Grow up, son.

The contrast between what an upstanding citizen Russell Wilson is and what an embarrassment so many other players on this team are is astounding.

No mention of an apology to his fellow NFL player? He apologized for putting his team/special teams in a bad spot. The flag is for being unsportsmanlike to the opposition. So did he apologize to McLeod? Or does he think his actions would have been OK if they didn't draw a flag?

If he pulls that crap in week 13, he'll get killed!

Tate rulez the WRs in the NFC. Just because he taunted doesn't mean he's a lesser person. He's every reason the Seahawks should open up the passing game more. I bet the loser Packers wish they had him. Bank on it!

ummm…. did he say Calvin Johnson????

When asked about it after the game, I'm surprised he didn't use his "I don't know what you're talking about…I don't know what you're talking about" response.

I love how every fan is chiming in on how lucky we are. Should be 4-4. Almost lost. Overrated. One and done in the playoffs.

People who call it classless are probably the same type of individual that would've made that call to their school board when their kids got beat 91-0 in a high school football game lol. Though, he almost ran out of bounds = stupid of him. Learn how to taunt properly!

And by the way…Barry Sanders = OVERRATED. What a coward for retiring! Bank on it!

WRs like Megatron, Fitz, and AJ Green are fun to watch. Classy pros. Diva WRs are not. You almost want to see someone tag them.

I thought it was funny actually. Don't let these fuddy-duddies change you. I say if they don't want you to taunt, then stop you from damn scoring!!

Janoris Jenkins had that ish coming. I'm a Richard Sherman fan, and if someone taunted him, he would have that coming, and everyone would give props to the receiver that burned him. Jenkins had issues with Steve Smith last week, and who knows what he said last night.

I wish he would've went out at the 1, the Rams D holds them to a field goal, then they lose by 4. Still pretty funny he got shoved down for being the pos he is. If you're taunting, wouldn't you want to make sure you wouldn't be punked just as you scored? What a loser!

Want this to stop? Blow the play dead when it happens, enforce the penalty 15 yards from the spot and assess a loss of down. No "result of the play" nonsense, and the offense will start with 2nd & 25. If it's after the play is over, treat it like an illegal forward pass: assess 15 from the original spot, loss of down, and go from there.

That penalty last night cost them a whopping 2 yards. Why wouldn't you do it? You're definitely getting on SportsCenter for it. No such thing as bad press. The kickoff was fielded at the 5, returned to the 32, but a holding penalty (which is always a possibility on KOR) took it back to the 22. I say it only cost them two yards because the kickoff spot was back 15 yards from normal, and still reached the 5-yard line. From the normal spot, that's a touchback and the "O" takes over at the 20.

When Long sacks Wilson and does a stupid little dance… Why is that not taunting??

I believe he pulled the same tactic when he was breaking into the doughnut shop. He was caught on the surveillance camera waving a ball of frosted creams on his way out the door.
I always wondered how a guy that dumb could have ever survived at ND?

He doesn't score that much so let him have his moment.

Hold the ball till the end zone, hand the ball to the official, get off the field. Simple, professional and less likely to pull a bonehead move like stepping out of bounds, dropping the ball, getting a flag…. Ain't nobody got time for that….

No worries Golden, you won't ever have to worry about being considered great.

A better player like Dez Bryant? No, Golden. You don't want to be like him. He's a jerk, which overshadows any talent he may have. There are many other players to emulate.

I was disappointed in the taunting last night, it was completely unnecessary. He was lucky as that was very close to going very, very wrong. But he apologized so it's time to move on. He wasn't the first player to make a dumb decision and won't be the last. Every team has (or has had or will have) at least one player who makes stupid mistakes so be careful about pointing fingers. Tate has great hands, great moves and is spectacular in RAC. If you don't like watching him make big plays, don't watch Seahawks games.

Dude isn't even in the top half of the league for slot receivers. And everyone's complaint is with the taunting. Do you really think that has a place on the field? Would you be cool if that was Kaep (or any other 49er) doing that, or are you just a homer?

Actually, the player that these diva WRs should be emulating is Marvin Harrison. One of the greatest WRs to ever play the game, and every TD was a flip to the ref and a return to the sidelines. All class.

There's no way Tate should be mentioned in the same sentence as Calvin Johnson or Dez Bryant, let alone an above-average player like Demaryius Thomas.

Hahaha. You're sitting in your parents' basement behind a keyboard and you're calling Barry a coward? Barry = one of the greatest to ever play and in the HOF. You, not so much. Bank on it!!!

This damn doughnut stealing kid needs to stay focused and humble. Success will come his way if he does…………..

The Rams must have been barking the entire game for him to act like that. It comes as no surprise a week after Steve Smith called out Janoris Jenkins. Rams need to win some ball games before they try to intimidate anyone.

Golden, don't worry, we'll all forget about you when Percy returns. You'll no longer hurt us with your stupid 1st-down dances and/or idiot moves like that one last night. Btw, why don't you try catching a few more punts inside the 5-yard line too. Oh never mind, Percy will soon take that job from you as well.

Most of the people commenting in here have probably seen exactly two plays from Tate – the two in Monday night's game. He is a talented player, for sure. Can do great things with the ball after the catch – as a former RB, can be tough to bring down. Can also leap and make some spectacular grabs. But he's always been immature. He sat out long stretches of his first season because he wasn't running routes well enough to get on the field. He was out of shape his first season – was able to break into the clear on a couple of punt returns, but wasn't able to finish. Last night you saw both the talent and the immaturity. Broke the big play, but wasn't consistent enough to get open and make plays. And regarding that big play, that could have been the play that took the Rams out of the game, a play that sapped their emotional will. Instead, he goes off and taunts, which very likely had the opposite effect and got the Rams playing angrier, more motivated football.
As a 'Hawk fan who has watched Tate progress, I kind of wondered why the team went after Harvin. Tate isn't as explosive, but he's more durable and much the same kind of player, and I thought he had a chance to play his way into an extension this year. It's harder to feel that way after last night. He didn't step up, and when he did, he acted like a word that rhymes with brass pole and wound up energizing the other team. I'd say it's time for him to grow up, but that time was really a couple of years ago. He has about eight games left to show he deserves a new contract, and if not, some other team can take their chances with him. Too much other talent in Seattle that the team can spend money on.

He has a LONG way to go to be considered one of those 3 or many others, so maybe he should just try to concentrate on improving his game and attitude!

When he wasn't busy ordering rivals shot, of course.

Why do these so-called "PROS" have to act like they have never scored a TD or made a tackle in their lives??

Golden Tate is a pretty good player who could be a top 25 to 30 wide receiver if he would focus more on playing the game and not let his emotions get the best of him. He won't ever be in the class of CJ, D. Thomas or Dez because he isn't the physical freak those guys are. He could try to be a Steve Smith type of player though. The guy is fun to watch and his runs after the catch are quite impressive for a guy his size, but he needs to be more consistent and avoid stupid penalties. The Rams are just as much of a bunch of dirtbags as Seattle's DBs, and Tate saw an opportunity to rub it in their faces. He didn't take into consideration that he could be hurting his team and making himself look like a fool. I will say though, he got his money's worth on the taunt. If you are going to get 15 yards you might as well make it worthwhile.

Most of you fans who comment clearly don't watch football. Tate last year had one of the top yards-per-catch averages. Because Seattle has been a 55% run-first team, he was targeted far less than other big-name receivers but still had 8 TDs total. That taunting play was terrible and immature, but that play he made on the ball was sick. The week before, Patrick Peterson was in the talks as the best cover CB in the NFL; Tate ate his lunch for 77 yards, catching 4 or 5 targets with Peterson shadowing him all game. That's football talk.

Taunting the hapless Rams… What a big guy he must be.

It's a shame Russell Wilson plays for this worthless franchise. He is a good kid with a good head on his shoulders, and a lot of people may not see that because his teammates are a bunch of tools led by the ultimate idiot Carroll.

All the people here claiming that Tate isn't any good obviously haven't watched him play much. He's not the biggest or fastest receiver, but the guy is an outstanding playmaker. I get it, it's the Seahawks, so for some reason people feel inclined to get defensive and spew their hate anytime they do anything to show any emotion whatsoever, but get a grip, people. Other people talk trash with their mouths; he talked it with his hand. Big deal. Trash talk and taunting take place on every play of every game in the NFL. It's not like he was throwing multiple temper tantrums on the sideline or anything.

Um.. I'm a diehard Niners fan, but to me that was karma. If any of you idiots actually watched the game, Janoris Jenkins was talking smack to Golden and all their wide receivers ALL GAME, and then he got straight up burnt and got a taste of his own medicine.
I've seen that quote attributed to many people, including Vince Lombardi, but Barry Sanders is the one player that I most commonly associate it with. As a Packer fan, I don't necessarily like the Lions (although they've never been enough of a threat to actually "hate"), but I always had tons of respect for Barry Sanders. Can you imagine a player that talented in today's NFL, behaving with all of the class that he displayed week-in and week-out?

What a bunch of jealous losers! Seattle is 7-1 and you all have to find fault with any little thing. They're 7-1….deal with it! Yes, I wish the Lions were 7-1, but that doesn't mean I'm going to whine like a 49er or cry like a Chicago fan. They're 7-1. Golden Tate is a great player. Seattle didn't win by 30 points, but they won…and three weeks from now, that's all anyone will remember. Just because your teams aren't as good as Seattle doesn't mean you have to spend your day whining. Move on and try to figure out what's wrong with your own teams and admit that Seattle looks great this year.

Tate shouldn't have done it. He knows it now. You can't get sucked into that behavior. If this is Jenkins' weekly MO, I wouldn't be surprised if someone gives him an opportunistic cheap shot.

Any fan who's watched the game for a while understands that division games can be tough. In most cases you can throw the point spreads out the window. You are playing a team that sees you twice a year and has databases of info and film on you. After the first five minutes of the game I knew Seattle was in for a fight. The Rams played with pride last night, fought hard and deserved to win. Their defense showed why many who are paid to prognosticate liked STL to make the playoffs this year and why they were a dark horse SB pick by some. The Seahawks defense played with heart too, though. As did Russell Wilson. I think this is a good win for them. They need to refocus over the next few weeks leading up to the NO and SF games. Those teams are going to hit Wilson too, and they need to figure out how to protect him better.

Holy cow, people acting like he stomped on someone's head on his way to the end zone. Gimme a break! People need to chill out. All of a sudden he is labeled as a dirty, classless player. Wow! So quick to judge….yet you don't look at how much time he spends in the community.

I love love love the haters. Keep hatin! Seahawks will keep winning all the way to the Superbowl. Feels so good to be on top with all the riff raff crying and whining about how mean and classless your team is. LOVE IT.

PS – Golden Tate apologized for the taunt in an interview immediately following the game. People slamming him are the ones who are classless. It's one thing if he talked trash afterward or refused to apologize. But once he says he knows it was immature, you either let it go or prove that YOU are the one without class.

For all of you predicting a blowout, you need to pump the brakes. Anything can happen on any given day. Think back to 2 weeks ago when the Broncos were supposed to beat the Jags by 28. I'm not saying the Rams will win this game, but it might….just might be closer than you think.

Nahhhh…I'm adding a td just for that comment.

Luckily he is the only player that taunts. Oh the horror.

49ersforeverrrrrrrr is now my favorite non-Hawk-fan poster on this site. Hahaha!

This is exactly why I don't watch the NFL.
For one, the games are way too boring, and then you get to watch morons like Tate act like… a moron. And get this: the officials threw the flag long before the moron crossed the end zone line, but the NFL enforces this type of penalty on the kickoff, so why is this type of behavior still happening? Enforce the penalty on the offender when it happens, take away the TD, and this type of moronic musing won't happen again.

Keep in mind he also said he caught the ball against the Packers, so he's obviously a liar. 1. Janoris Jenkins had you shut down all night, except for your 80 yd TD catch where he fell trying to backpedal. 2. You were playing the St. Louis Rams, who aren't exactly challengers in the NFC. 3. You only have 3 TDs through 8 games in an offense in which you're basically a #1 receiver, with the play of Sidney Rice decent at best. And why of all people are you taunting McLeod? He had no involvement in the play and quite obv wasn't going to catch you, so why, if you're so confident in yourself, not turn around and wave at Jenkins? And you didn't exactly beat him either. Totally classless, and really no place for it in the game.

Completely classless. Typical Seahawk.

LOL! I love how all the people who are calling the Seahawks classless and stuff are all fans of teams the Hawks have beaten. I'm sorry you're booty tickled cuz you guys lost, but GET OVER IT, it's one game. You don't see me ragging on the Colts just cuz we lost to them. All the teams in this league are good and deserve respect. I don't get why NW teams always get so much hate, but oh well, we thrive on that. We already got cheated out of one Super Bowl and that's not gonna happen again.
Multiscale isotropic morphology and shape approximation using the Voronoi diagram The Voronoi diagram of a sample set obtained from a shape boundary can be analyzed to perform morphological erosions, dilations, openings, and closings of the shape with a scaled unit disk as the structuring element. These operations collectively comprise isotropic morphology--meaning the fundamental morphological operations with a parameterized disk operator. The isotropic morphology operations are the basis of multi-scale morphological shape analysis. For instance, features obtained from a series of openings and closings by disks of varying size can be used to characterize a shape over a range of scales. In general, the isotropic morphology operations are a significant class of operations with broad potential applications. The new, Voronoi-diagram-based isotropic morphology algorithm has four significant advantages over existing algorithms: the Voronoi diagram need only be computed once, and then an entire series of scale-based analyses can be performed at low incremental cost; the time/space complexity of the algorithm is independent of the disk radius and depends only on the number of boundary samples; the scale parameter (disk radius) can assume non-integral values; the implied metric is the Euclidean metric, rather than an approximation thereof.
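A minimal sketch can make the idea concrete. The code below is an illustration of the approach described in the abstract, not the authors' implementation: it uses scipy's cKDTree, whose nearest-neighbour queries implicitly encode the Voronoi partition of the boundary samples, so that the search structure is built once and every subsequent erosion or dilation by a disk of arbitrary (non-integral) radius reduces to a Euclidean distance threshold:

    import numpy as np
    from scipy.spatial import cKDTree  # nearest-neighbour structure over boundary samples

    # Boundary samples of the shape under analysis (here: the unit circle).
    theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    boundary = np.c_[np.cos(theta), np.sin(theta)]
    tree = cKDTree(boundary)           # built once, reused for every radius below

    # Query grid and membership in the original shape (the unit disk).
    g = np.linspace(-2, 2, 201)
    xx, yy = np.meshgrid(g, g)
    pts = np.c_[xx.ravel(), yy.ravel()]
    inside = (pts ** 2).sum(axis=1) <= 1.0
    dist, _ = tree.query(pts)          # Euclidean distance to nearest boundary sample

    for r in (0.25, 0.5, 0.75):        # scale parameter: non-integral disk radii
        eroded = inside & (dist >= r)  # erosion: in the shape, at least r from its boundary
        dilated = inside | (dist <= r) # dilation: in the shape, or within r of its boundary
        print(r, eroded.sum(), dilated.sum())

Each additional radius costs only a thresholding pass over the precomputed distances, mirroring the claim that an entire series of scale-based analyses can be performed at low incremental cost after a single preprocessing step.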
U.S. warns WTO global trade talks "hurtling towards irrelevance"
GENEVA (Reuters) - The United States launched a blistering attack on fellow World Trade Organization member states on Thursday for failing to do more to cut global barriers to trade, criticizing India in particular for trying to introduce a "massive new loophole". Ambassadors to the 159-member WTO were meeting to review progress towards a possible deal to be signed in Bali in December, which would cut red tape from customs procedures, adding as much as $1 trillion to global trade. At the insistence of developing countries, which objected to having to shoulder most of the burden of the red-tape reforms, a Bali agreement would also include limited reforms to rules on food and agriculture and special treatment for poor countries. While such a deal would be a boost for the world economy, the scale of the negotiation has been massively cut back from the far more ambitious "Doha Round" of trade talks, which dragged on for a decade before finally collapsing in 2011. "The glint of hope today is that we still have time - though only just barely - to adjust our course. The institution we care about is in crisis, and we need to act accordingly," U.S. Ambassador Michael Punke said. "While it is not my intention to throw bricks, I will be frank in our substantive assessment of where various issues stand," he said, adding that the mood had changed from hopeful to grim over the past three months. He called on all WTO ambassadors to seek urgent instructions from their governments to try to re-energize the negotiations before the end of April. As the talks have slowed, many trade ministries have been distracted by more pressing problems, such as the global financial crisis, or by less daunting issues, such as who should lead the WTO once Director-General Pascal Lamy steps down at the end of August. Lamy told the meeting there had been a lot of activity - but limited progress on substance - towards the three main areas of a potential Bali agreement. He said there were still "very significant divergences" about how to change the rules on stockpiling food, as demanded by a coalition of developing countries led by India. He urged WTO members not to resort to finger-pointing, but gave a pessimistic summary. "The stark reality is that the current pace of work is largely insufficient to deliver successfully in Bali," he said. The disputed stockpiling proposal would let poor countries buy and store farm produce and would eliminate the existing cap on agricultural subsidies. Supporters say it would help poor farmers and food security, but critics say it would do just the opposite. Punke said the proposal became more worrying the more he learned about it and would be a step back, "creating a massive new loophole for potentially unlimited trade-distorting subsidies". He said the proposal would mean governments pumping up food prices by buying commodities for their stockpiles, a policy that would lead to national surpluses later being dumped on world markets, hurting the interests of non-subsidized farmers elsewhere. Punke said the United States was concerned about rumors of yet more proposals on agricultural reforms, which he said would only deepen the impasse.
Webformer: Pre-training with Web Pages for Information Retrieval Pre-trained language models (PLMs) have achieved great success in the area of Information Retrieval. Studies show that applying these models to ad-hoc document ranking can achieve better retrieval effectiveness. However, on the Web, most information is organized in the form of HTML web pages. In addition to the pure text content, the structure of the content organized by HTML tags is also an important part of the information delivered on a web page. Currently, such structured information is totally ignored by pre-trained models which are trained solely based on text content. In this paper, we propose to leverage large-scale web pages and their DOM (Document Object Model) tree structures to pre-train models for information retrieval. We argue that using the hierarchical structure contained in web pages, we can get richer contextual information for training better language models. To exploit this kind of information, we devise four pre-training objectives based on the structure of web pages, then pre-train a Transformer model towards these tasks jointly with traditional masked language model objective. Experimental results on two authoritative ad-hoc retrieval datasets prove that our model can significantly improve ranking performance compared to existing pre-trained models.
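As a hedged illustration of the kind of input such a model consumes (the paper's four structural pre-training objectives are not reproduced here), the sketch below uses Python's standard html.parser to pair each text node with its DOM path, one straightforward way of exposing the tag hierarchy to a language-model tokenizer:

    from html.parser import HTMLParser

    class DOMLinearizer(HTMLParser):
        """Collect (DOM path, text) pairs so that tag structure can be fed
        to a language model alongside the plain text. Illustrative only;
        Webformer's actual input construction may differ."""
        def __init__(self):
            super().__init__()
            self.stack, self.pairs = [], []
        def handle_starttag(self, tag, attrs):
            self.stack.append(tag)
        def handle_endtag(self, tag):
            if self.stack and self.stack[-1] == tag:
                self.stack.pop()
        def handle_data(self, data):
            text = data.strip()
            if text:
                self.pairs.append(("/".join(self.stack), text))

    page = "<html><body><h1>Title</h1><div><p>First passage.</p></div></body></html>"
    parser = DOMLinearizer()
    parser.feed(page)
    for path, text in parser.pairs:
        print(f"[{path}] {text}")   # [html/body/h1] Title, [html/body/div/p] First passage.

Emitting the path tokens alongside the text gives the model access to hierarchical context (headings versus body text, list items, and so on) that a text-only pre-training corpus discards.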
Noise reduction in an anaesthetic gas scavenging system
[…] is left in place, the output of the astable is grounded, resulting in loss of function of the unit. Electrical continuity is maintained via the power supply to each inverter within the integrated circuit. The 75 kohm resistor in the original circuit diagram should read 56 kohm, as stated in the parts list. Thank you for the opportunity to reply to Dr Krijnen's letter. His point about the cleanliness of the output pulse from the 14049 chip is well taken, but the effects of variations in tissue impedance will probably negate the small benefit to be accrued from the use of Schmitt inverters; in short, there will be no clinically apparent differences in respect of this application of the circuit. Our aim was to keep both the circuit and the article simple, so as not to deter those with limited electronic knowledge from constructing their own stimulator. The circuit error was a minor one which should have been noticed by anyone with the experience to be able to construct the device themselves.
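As a rough aside on why the resistor value matters, and assuming the stimulator uses the classic two-inverter CMOS astable topology (an assumption on our part; the original circuit is not reproduced here), the oscillation frequency is approximately $f \approx 1/(2.2\,RC)$. Substituting the intended 56 kohm for the misprinted 75 kohm would therefore raise the output frequency by roughly a third ($75/56 \approx 1.34$) for the same timing capacitor.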
The body of their daughter, 6-year-old Alysha Quate, was found in a garage in Centreville on June 6 after Elizabeth Quate told Las Vegas police her husband had killed one of their children in Belleville and hidden the body several years ago. Elizabeth Quate also told police her husband abused her and forced her to prostitute herself. Police went to the Quates' home in Las Vegas, where they found a "disturbing scene" of Jason Quate with his two daughters, who were both pantless on separate mattresses in the apartment. They both showed signs of physical abuse. An examination of one of the daughters revealed signs of sexual abuse. The children were not enrolled in school. Jason Quate admitted to killing his daughter and hiding the body in an interview with detectives. In a media interview, he has said she accidentally choked on food that was in her mouth. Elizabeth Quate told her parents in a phone call that Alysha choked after Jason Quate hit the child. Jason Quate told police the scars on his surviving daughters were from punishments, such as hitting them on the buttocks with extension cords and belts. He said sometimes the girls moved and caused him to hit other parts of their bodies. During interviews with investigators, the girls said their mother, whom they referred to as their "other parent," did not live with them and only visited during the day. An autopsy was performed on Alysha's body, but investigators have not yet been able to determine a cause of death due to the deteriorated condition of the body. St. Clair County Coroner Calvin Dye Sr. said further tests need to be done. Jason and Elizabeth Quate remained in custody Friday in Las Vegas.
Large Eddy Simulation of Supersonic Boundary Layer Transition over a Flat-Plate Based on the Spatial Mode
The large eddy simulation (LES) of a spatially evolving supersonic boundary layer transition over a flat plate with freestream Mach number 4.5 is performed in the present work. The Favre-filtered Navier-Stokes equations are used to simulate the large scales, while a dynamic mixed subgrid-scale (SGS) model is used to model the subgrid stress. The convective terms are discretized with a fifth-order upwind compact difference scheme, while a sixth-order symmetric compact difference scheme is employed for the diffusive terms. The basic mean flow is obtained from the similarity solution of the compressible laminar boundary layer. In order to ensure the transition from the initial laminar flow to fully developed turbulence, a pair of oblique first-mode perturbations is imposed on the inflow boundary. The whole process of the spatial transition is captured by the simulation. Through space-time averaging, the variations of typical statistical quantities are analyzed. It is found that the distributions of turbulent Mach number, root-mean-square (rms) fluctuation quantities, and Reynolds stresses along the wall-normal direction at different streamwise locations exhibit self-similarity in the fully developed turbulent region. Finally, the onset and development of large-scale coherent structures through the transition process are depicted.
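For reference, the Favre (density-weighted) filter used in such formulations is the standard one: for a flow variable $f$, $\tilde{f} = \overline{\rho f}/\bar{\rho}$, where the overbar denotes the spatial filter. With this definition the filtered continuity equation keeps its laminar form, $\partial_t \bar{\rho} + \partial_{x_j}(\bar{\rho}\tilde{u}_j) = 0$, and the unclosed term modelled by the dynamic mixed SGS model is the subgrid stress $\tau_{ij} = \bar{\rho}\,(\widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j)$.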
Stochastic Analysis of Solving Complex Problem on Distributed Computer
An analysis of the time and realizability of parallel solution of complex problems on distributed computer systems (CS) is presented. The derivation of the equations for calculating the efficiency indices is based on the assumption that the time of problem solution on the CS is a function of the time of problem solution on one elementary machine, and that this function has a finite number of discontinuities. The discontinuities have a probabilistic character and correspond to CS failures that require reconfiguration of the CS (structural readjustment with regard to the working machines only). A notion of complex CS reconfiguration is introduced and investigated. A set of integral equations for calculating the function of realizability of problem solution on distributed CSs is derived, and a parallel algorithm for computing it is described.
How do you meet a potential partner? Meeting people face-to-face is crucial, says our Dating Doctor Alana Kirk . . . The perfect date won't come knocking: you have to make the effort to go out and get to know new people. Even if there is no spark, a date can boost your confidence. Mix up your methods. Try a new dating site or even approach someone in the real world! Who'll find love on our blind date? This week it's Pia, 50, and Mark, 48... but will romance be on the cards?
beta-D-Mannosidase from human placenta: properties and partial purification. beta-D-Mannosidase was partially purified (108-fold) from human placenta by ammonium sulfate precipitation, Concanavalin A-Sepharose affinity chromatography, gel filtration and hydroxylapatite chromatography. Further attempts to purify the enzyme by several conventional methods failed due to loss of activity. The stability of beta-mannosidase, the effect of several compounds on its activity and some physico-chemical and kinetic parameters were investigated.
Deeded lake rights and deeded boat slip are included with this buildable lot in the desirable Sunset Bay lakeside community. Great location in a neighborhood of gorgeous homes. Large, level lot with a partial view of Chautauqua Lake. Very low HOA fee includes use of picnic area and firepit. Not all properties in the community have deeded boat dockage - this one has the advantage of an interior slip protected from the prevailing winds. Electric, gas, and public sewer are available at the road. Just a short drive or bike ride to Long Point State Park and the shops, concerts and restaurants of the village of Bemus Point.
Community-acquired respiratory viruses after lung transplantation: common, sometimes silent, potentially lethal
Community-acquired respiratory viruses (CARV) represent an ever-present risk to the lung transplant (LTX) recipient.1 Alone among solid organ transplants, the lung allograft is exposed to the ambient environment with every breath; hence, it is exposed to CARV, which may trigger rejection.2 Or at least we once thought so, despite some recent evidence to the contrary.3-5 The current article in the journal expands the previous work of the same group and provides a comprehensive analysis of 112 patients examined on 903 occasions over a 3-year period with a further 2-year follow-up to determine the presence and effects of CARV.6-8 The results are compelling in the main. CARV account for about 30% of all acute respiratory presentations after LTX and are a dominant cause of new respiratory symptoms, with a risk of hospitalisation of 17-50% depending on type. Surprisingly, 10% of asymptomatic LTX recipients had a positive test for CARV at screening visits (predominantly rhinovirus), but whether their asymptomatic status represents the effects of maintenance immunosuppressive therapy on the inflammatory response cannot be ascertained from the study. The major strength of this prospective study is the exhaustive assessment of both symptomatic
AMSTERDAM (Reuters) - Trust management firms in the Netherlands must tighten their procedures against money laundering and tax evasion or face higher penalties for wrongdoing, the Dutch central bank said on Tuesday. The Netherlands, with dozens of tax treaties, is a hub for corporate entities shifting profits to lower-tax jurisdictions. But it has come under increased international pressure to crack down on tax evasion. A Dutch parliamentary inquiry in June found that trust managers, who oversee financial assets on behalf of their owners, often ignore international rules and regulations, enabling tax-dodging and other criminal behaviour. "Progress in the trust management sector has been too slow for too long," central bank director Frank Elderson said at a meeting with reporters. "The bar for trust managers is being raised and they have to act accordingly." In a response to the parliamentary inquiry, the government earlier this month announced new legislation that gives the central bank more power to act against trust managers. "This will strengthen our supervision," Elderson said. "But there still are limits to what we can do. The sector needs to improve itself. It's in the firms' own interest to get everyone on the right path." The trust offices, which have no obligation to share client details with authorities, are seen as a pivotal player in the financial structures used by companies and wealthy individuals to limit their tax bills. The Dutch industry group for trust offices, Holland Quaestor, is trying to get companies to qualify for a certificate of good behaviour by going beyond existing rules and regulations. But Elderson called progress on that front disappointing. Holland Quaestor said in a statement that stricter rules for the certificate will be introduced shortly. The industry group said it will keep working to improve quality in trust management. The central bank estimates there are around 200 trust management organisations in the Netherlands. "The numbers are declining," Elderson said. "And we expect that to continue, as it gets harder to comply with the rules, especially for small offices."
The present invention relates to a magnetic disk recording apparatus for performing at least one of recording signals onto at least one magnetic recording disk and reading out the signals from the magnetic recording disk. In a prior-art magnetic disk recording apparatus as disclosed by JP-A-11-16311, a first actuator moves a support arm with respect to a magnetic recording disk, a support member holding thereon a magnetic head is connected to the first actuator through the support arm to be driven by the first actuator through the support arm, a second actuator moves the support member with respect to the support arm so that the support member is moved with respect to the magnetic recording disk by the first actuator and the second actuator, the first actuator moves the support member through the support arm by a relatively large length with respect to the magnetic recording disk, and the second actuator moves the support member by a relatively small length with respect to the magnetic recording disk. An object of the present invention is to provide a magnetic disk recording apparatus for performing at least one of recording signals onto at least one magnetic recording disk and reading out the signals from the magnetic recording disk, in which apparatus a magnetic head is correctly positioned without an undesirable vibration thereof. According to the invention, in a magnetic recording apparatus for performing at least one of recording signals onto at least one magnetic recording disk and reading out the signals from the magnetic recording disk, comprising: a support arm movable with respect to the magnetic recording disk, a first actuator for moving the support arm with respect to the magnetic recording disk, at least one magnetic head for performing through the magnetic head the at least one of recording the signals onto the magnetic recording disk and reading out the signals from the magnetic recording disk, a pair of support members at least one of which holds the magnetic head thereon and each of which is connected to the first actuator through the support arm to be driven by the first actuator through the support arm, and a pair of second actuators for moving respectively the support members with respect to the support arm so that the support members are respectively moved with respect to the magnetic recording disk by the first actuator and the second actuators, the second actuators move simultaneously the support members respectively with respect to the support arm in respective directions opposite to each other. Since the second actuators move simultaneously the support members respectively with respect to the support arm in respective directions opposite to each other, a force (for example, a rotational moment) moving one of the support members cancels a force moving another one of the support members, so that a vibration caused by moving the support members simultaneously with respect to the support arm is not generated and the accuracy in positioning the magnetic head is preserved. The magnetic disk recording apparatus may comprise a pair of the magnetic heads, while each of the support members holds the magnetic head thereon as a combination of the magnetic head and the support member.
When the support members with the respective magnetic heads thereon are respectively swingable with respect to the support arm, it is preferable that a moment of inertia of one of the combinations around the support arm is substantially equal to that of another of the combinations around the support arm to effectively cancel the forces. When the support arm is swingable on a rotational axis of the first actuator, it is preferable that a moment of inertia of one of the combinations around the rotational axis is substantially equal to that of another of the combinations around the rotational axis to effectively cancel the forces. Alternatively, one of the support members may be prevented from holding the magnetic head thereon, and may instead hold a counter weight. When the support member with the magnetic head thereon and the support member with the counter weight thereon are respectively swingable with respect to the support arm, it is preferable that a moment of inertia of a combination of the support member and the magnetic head around the support arm is substantially equal to that of a combination of the support member and the counter weight around the support arm to effectively cancel the forces. When the support arm is swingable on a rotational axis of the first actuator, it is preferable that a moment of inertia of a combination of the support member and the magnetic head on the rotational axis is substantially equal to that of a combination of the support member and the counter weight on the rotational axis to effectively cancel the forces. It is preferable that the second actuators move simultaneously the support members respectively with respect to the support arm in the respective directions opposite to each other by respective distances, speeds and/or forces (accelerations or decelerations) substantially equal to each other. It is preferable for keeping correctly a relationship in position and/or attitude between the support arm and each of the support members that at least a part of the support arm and at least a part of each of the support members are monolithically formed, and a flexible area is arranged between the at least a part of the support arm and the at least a part of each of the support members. When the second actuators are expandable and contractible to move respectively the support members with respect to the support arm, the second actuators are energized in such a manner that one of the second actuators expands to move one of the support members in a first direction while another one of the second actuators contracts to move another one of the support members in a second direction, the first and second directions being opposite to each other. It is preferable for keeping correctly a relationship in position and/or attitude between the support members that the second actuators are arranged in such a manner that the directions parallel to a magnetic recording disk thickness direction in which the support members are simultaneously bent at least partially respectively by the expandable and contractible second actuators with respect to the support arm are identical to each other when the second actuators are energized to move respectively the support members with respect to the support arm. Areas of the support members onto which the second actuators are fixed respectively may face each other in the magnetic recording disk thickness direction.
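The equal-inertia conditions recited above can be read as a simple torque balance (a schematic gloss, not part of the claims): if the two combinations are driven with equal and opposite angular accelerations $\alpha$ and $-\alpha$, the net reaction moment transmitted to the support arm is $M = I_1\alpha - I_2\alpha = (I_1 - I_2)\,\alpha$, which vanishes exactly when the two moments of inertia are equal, $I_1 = I_2$.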
It is preferable that a flexible member connecting the support arm to each of the support members is juxtaposed with each of the second actuators in the magnetic recording disk thickness direction, so that the support members are bent at least partially with respect to the support arm in the directions parallel to the magnetic recording disk thickness direction by the second actuators. The second actuators may be piezoids. When one of the second actuators has a first pair of expandable and contractible actuators to swing one of the support members around the support arm, and the other second actuator has a second pair of expandable and contractible actuators to swing the other support member around the support arm, it is preferable that the second actuators are energized in such a manner that one of the expandable and contractible actuators of the first pair expands while the other actuator of the first pair contracts, so that the one support member is swung in a first circumferential direction around the support arm, and one of the expandable and contractible actuators of the second pair expands while the other actuator of the second pair contracts, so that the other support member is swung in a second circumferential direction around the support arm, the first and second circumferential directions being opposite to each other. For keeping correctly the relationship in position and/or attitude between the support members, it is preferable that the expandable and contractible actuators are arranged in such a manner that the directions in which the support members are simultaneously twisted with respect to the support arm by the first and second pairs of expandable and contractible actuators are identical to each other when the second actuators are energized. It is preferable that a flexible member connecting the support arm to each of the support members is juxtaposed with each of the second actuators in the magnetic recording disk thickness direction, so that the support members are twisted with respect to the support arm by the expandable and contractible actuators. It is preferable that a polar direction of the piezoid of one of the second actuators is directed away from the support member moved by that actuator, while a polar direction of the piezoid of the other second actuator is directed toward the support member moved by that actuator, so that one of the second actuators expands while the other contracts when the second actuators are energized. When the piezoid of each of the second actuators has a first electrode adjacent to the support members in the magnetic recording disk thickness direction and a second electrode distant from the support members in the magnetic recording disk thickness direction, it is preferable, for easily energizing the piezoids, that the electric potentials applied to the second electrodes are equal to each other and/or that the first electrodes are electrically grounded.
It is preferable for easily energizing the piezoids that the electric potential difference to be applied to the piezoid of one of the second actuators to be activated is equal to the electric potential difference to be applied to the piezoid of the other second actuator to be activated. It is preferable for effectively canceling the forces that the change over time in the electric potential difference applied to the piezoid of one of the second actuators is substantially equal to the change over time in the electric potential difference applied to the piezoid of the other second actuator, so that the movements of the support members in the respective opposite directions are synchronized. The support members and/or the support arm may be electrically grounded.
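A minimal sketch (not from the patent) of the opposite-poling drive scheme described above: two piezoids receive the same potential waveform, and because their polar directions are opposite, one expands while the other contracts in synchrony. The piezoelectric coefficient and thickness are hypothetical illustration values:

```python
# Sketch: two piezoids driven by the SAME potential waveform expand and
# contract in opposite senses when their polar directions are opposite.
# The d33 value and geometry are hypothetical illustration values.

D33 = 400e-12          # m/V, hypothetical piezoelectric coefficient
THICKNESS = 100e-6     # m, hypothetical piezoid thickness

def strain(voltage, poling_sign):
    """Longitudinal strain of a piezoid; poling_sign is +1 or -1
    depending on whether the polar direction points toward or away
    from the support member."""
    field = voltage / THICKNESS
    return poling_sign * D33 * field

for v in (0.0, 5.0, 10.0):          # common drive waveform samples
    s1 = strain(v, poling_sign=+1)  # this piezoid expands
    s2 = strain(v, poling_sign=-1)  # this piezoid contracts
    print(v, s1, s2, s1 + s2)       # strains are equal and opposite
```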
Inhibition of Nano-Metal Oxides on Bromate Formation during Ozonation Process Simulation studies in pure water were conducted to investigate the catalytic effect of nano-metal oxides on bromate (BrO3−) formation and the underlying catalytic mechanism. Results indicated that, compared to ozonation alone, both nano-SnO2 and nano-TiO2 inhibited the formation of bromate during the ozonation process. The inhibition efficiency of BrO3− formation by nano-TiO2 increased with increasing ozone dosage and with decreasing nano-TiO2 dosage, Br− concentration, and pH. A possible BrO3− minimization mechanism is that nano-TiO2 accelerates the decomposition of dissolved O3 into OH radicals, which rapidly generate H2O2, which in turn reduces HOBr/OBr− to Br−.
High-pressure melt growth and transport properties of SiP, SiAs, GeP, and GeAs 2D layered semiconductors Silicon and germanium monopnictides SiP, SiAs, GeP and GeAs form a family of 2D layered semiconductors. We have succeeded in growing bulk single crystals of these compounds by melt growth under high pressure (0.5-1 GPa) in a cubic anvil hot press. Large (mm-size), shiny, micaceous crystals of GeP, GeAs and SiAs were obtained and could be exfoliated into 2D flakes. This method yielded only small and brittle crystals of SiP. High-pressure sintered polycrystalline SiP and GeAs have also been successfully used as precursors in the chemical vapor transport growth of these crystals in the presence of I$_2$ as a transport agent. All compounds are found to crystallize in the expected layered structure and do not undergo any structural transition at low temperature, as shown by Raman spectroscopy down to T = 5 K. All materials exhibit semiconducting behavior. The electrical resistivity of GeP, GeAs and SiAs is found to depend on temperature following a 2D variable-range-hopping conduction mechanism. The availability of bulk crystals of these compounds opens new perspectives in the field of 2D semiconducting materials for device applications. Introduction 2D materials are of great interest for the novel electronic properties that can arise from reduced dimensionality and the quantum confinement of charge carriers, and have become more and more appealing for applications in modern electronic devices. After the epoch-making discovery of graphene, the search for stable free-standing atomic layers of semiconducting materials has experienced a rush and a fast improvement of the processing techniques. The wide family of transition metal dichalcogenides (TMDs) has proven to be the most promising, offering quite a large variety of compounds, large tunability of properties and flexibility in potential practical applications. Electronic and optoelectronic devices based on various TMDs have been demonstrated. The search for other families of 2D materials exhibiting the same properties, existing in stable atomic layers and offering similar potential for applications together with natural abundance and low production cost is still very active and deserves a strong effort. Chemically stable atomic layers with no surface dangling bonds can be obtained from other layered materials, and the van der Waals-like bond between layers of different chemical compositions opens new perspectives for heterostructures to be realized in a wide range of materials. Besides graphene, pure elements from group IV (Si and Ge) and group V (P) have been found to form atomically thin layers (silicene, germanene and phosphorene, respectively) that can be obtained through either chemical deposition on a substrate or mechanical exfoliation of bulk 3D crystals. Binary compounds of a group-IV element (Si, Ge, Sn) and a group-V pnictogen (P, As) are also known to form layered structures in which strongly covalent 2D layers are stacked onto each other through weak van der Waals-like bonds, as in TMDs. Silicon and germanium phosphides and arsenides have been reported for decades to crystallize in various layered structures with either orthorhombic (Cmc2$_1$ space group: SiP; Pbam space group: SiP$_2$ and GeAs$_2$) or monoclinic (C2/m space group: GeP, GeAs and SiAs) symmetries.
After the initial investigations of their crystal structures and phase equilibria during the sixties and seventies, this family of compounds was rather overlooked; today it attracts our interest as a potential class of 2D materials alternative to TMDs. The equilibrium phase diagrams assessed so far predict the existence of a limited number of stable compositions and polytypes. A few more have been synthesised at high pressure (cubic GeP, cubic GeP$_2$, rhombohedral GeP$_3$) or suggested to exist according to structural investigations, even though they are not present in the equilibrium phase diagram (orthorhombic SiP$_2$ and cubic SiP$_2$). Recently, the phase diagram of the Si-P system has been theoretically revisited under high pressure and suggested to be substantially different from that drawn under equilibrium conditions at ambient pressure. First-principles calculations of phase stability in the Si-P system have predicted the existence of at least three new stable Si$_x$P$_y$ compounds with a layered structure that could be stable in single-atomic-layer form. Bulk crystals of these materials have seldom if ever been grown: SiAs was reported to crystallize from the melt by Sudo and from the vapor phase by Kutzner et al.; SiP and SiAs crystals were grown by the physical vapor transport (PVT) method, but turned out not to crystallize in the expected space group; very recently GeP was reported to grow in crystalline form by a solution growth method in a flux of Bi and Sn. As a matter of fact, the volatility and the strong reactivity and toxicity of pnictogens require the use of closed reactors in order to prevent the vapor phase from escaping. Here we report on the crystal growth of the four members of the family, namely SiP, SiAs, GeP and GeAs, from a self-flux under high pressure, using a cubic anvil hot-press apparatus. Large, micaceous, easy-to-cleave crystals were obtained in the case of GeP, GeAs and SiAs; small and brittle crystals were obtained in the case of SiP. Polycrystalline binary samples, processed in the same high-pressure furnace, were used as precursor materials for CVT growth experiments with iodine as a transport agent. This method was found to favor the growth of SiP and GeAs. This article reports on the materials processing, crystal growth, and structural and physical characterization of SiP, SiAs, GeP and GeAs. The crystals have the expected layered structure and can be exfoliated. These materials exhibit semiconducting behavior and are confirmed to have high potential as 2D materials for novel nano-engineered semiconducting devices. Thermodynamic considerations The phase diagrams of the four systems under investigation have been assessed and are reported in the Pauling Files database. Only two stable compounds are reported to exist in the Si-P and Ge-P diagrams, SiP and GeP, which decompose at 1160 °C and 750 °C, respectively, via peritectic decomposition into elemental P and Si, or Ge. According to those diagrams, SiP is reported to transform into a mixed solid-liquid-vapor phase, whereas GeP is claimed to decompose into solid Ge and liquid P. At ambient pressure this appears quite unlikely, the sublimation temperature of P being as low as 430 °C. The Si-P phase diagram has recently been disputed and corrected by Liang et al.. On the other hand, no recent thermodynamic investigations of the Ge-P system, or diagram updates, have been undertaken.
Undoubtedly, the decomposition of phosphorus into a vapor phase at ambient pressure prevents SiP and GeP from being processed by conventional techniques. The phase diagrams of the As-containing systems have been investigated by various authors (see the cited literature for a complete collection), and all agree on a congruent melting of SiAs and GeAs into a liquid phase at ambient pressure. This allows processing the compounds and growing the crystals under more conventional conditions. As a matter of fact, arsenic is much less volatile than phosphorus: its vapor pressure is more than two orders of magnitude lower than that of P at the same temperature. In the case of SiAs and GeAs, growth from the melt can be made more difficult by the presence of monoarsenide and diarsenide phases, both melting congruently in the same temperature range, which can grow into one another in the case of composition fluctuations and element segregation. Crystals of SiAs have been grown successfully from the melt under ambient pressure, as well as from the vapor phase under vacuum. Only µm-size crystals of GeAs have been obtained so far. As a result of the above considerations, we have chosen a melt-growth method under high pressure (in the GPa range) for all compounds. The calculated phase diagram of Si-P at 0.1 GPa, showing a solid-liquid equilibrium at the SiP composition, supports this choice. Moreover, high pressure has also been used successfully for growing crystals of black phosphorus, preventing both its sublimation and its transformation into dangerous white phosphorus. The use of pressures as high as 0.5-1 GPa is expected to make the growth of SiP and GeP from the melt possible. In the presence of As, the high pressure prevents the toxic vapor of As from reacting with the atmosphere, and speeds up the growth kinetics. High-pressure melt growth (HP) All crystals were grown in a high-pressure cubic anvil press. Pure elements Si (6N), Ge (5N), P (5N) and As (5N) were used as reactants. They were mixed in a stoichiometric ratio and pressed into pellets of approximately 7 mm in diameter and 3 mm in thickness under uniaxial load (3 tons). The pellets were then placed in a cylindrical boron nitride crucible surrounded by a graphite-sleeve resistance heater and inserted into a pyrophyllite cube acting as the pressure-transmitting medium. The pyrophyllite cell was then placed inside the high-pressure set-up, which consists of six WC anvils. For each composition, the cell was cold-pressurized, then rapidly brought to a high temperature T$_1$ at 1200 °C/h, held for 30 min at this temperature, and then slowly cooled to a temperature T$_2$ before being quenched to room temperature, all while maintaining a constant pressure (see Table 1 for details). The temperature T$_1$ was chosen to be above the complete melting of the precursors. The quench temperature was chosen to be above any possible decomposition or phase transition. The slow cooling allows crystals to nucleate and grow. Chemical Vapor Transport (CVT) As mentioned above, vapor transport techniques are the most common way to obtain single crystals of these materials. However, owing to the large difference between the vapor pressures of the pnictogen and the group-IV element, very large temperature gradients (∼25 °C/cm) were employed to achieve the right stoichiometry. In order to reduce such a strong technical constraint, we tried to grow crystals by vapor transport using smaller temperature gradients (5-7 °C/cm).
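Before turning to the vapor-transport details, for orientation only (not from the paper), the HP thermal schedule just described (fast ramp to T$_1$, a 30 min hold, slow cooling to T$_2$, then quench) can be sketched as a piecewise profile. The specific T$_1$, T$_2$ and cooling-rate values below are placeholders; the actual composition-dependent values are in Table 1 of the paper:

```python
# Schematic HP melt-growth temperature profile (values are placeholders;
# the actual T1, T2 and rates are composition-dependent, see Table 1).

RAMP_RATE = 1200.0   # °C/h, heating rate stated in the text
T1 = 1250.0          # °C, above complete melting (placeholder)
T2 = 900.0           # °C, quench temperature (placeholder)
HOLD_H = 0.5         # h, 30 min hold at T1
COOL_RATE = 50.0     # °C/h, slow cooling for nucleation/growth (placeholder)

def temperature(t_h, t_room=25.0):
    """Furnace set temperature (°C) as a function of time in hours."""
    t_ramp = (T1 - t_room) / RAMP_RATE
    t_cool = (T1 - T2) / COOL_RATE
    if t_h < t_ramp:                       # fast heating
        return t_room + RAMP_RATE * t_h
    if t_h < t_ramp + HOLD_H:              # hold above the melting point
        return T1
    if t_h < t_ramp + HOLD_H + t_cool:     # slow cooling: crystal growth
        return T1 - COOL_RATE * (t_h - t_ramp - HOLD_H)
    return t_room                          # quench to room temperature

for t in (0.0, 0.5, 1.2, 4.0, 9.0):
    print(f"t = {t:4.1f} h -> {temperature(t):7.1f} °C")
```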
For the physical vapor transport (PVT), the pure elements were mixed in a stoichiometric ratio, with a total mass of 0.2-0.3 g. The mixture was then placed in a quartz ampoule with an internal diameter of 8 mm and a length of 120 mm and sealed under vacuum (5×10⁻⁶ mbar). For the chemical vapor transport (CVT), a transport agent (I$_2$) was added to the pure elements according to a molar ratio n(I$_2$)/n(group IV) = 0.05. In both cases, the sealed reactor was placed in a two-zone furnace in the presence of a thermal gradient dT/dx ≈ 5-7 °C/cm, and heated up to T$_{hot}$ at the hot end, equal to 1100 °C and 900 °C for SiP and GeAs, respectively. After a few days, the furnace was switched off and the temperature was allowed to decrease to room temperature. Preliminary results confirmed the difficulty of maintaining the desired stoichiometry during crystallization with both techniques, more dramatically in the PVT case, due to the rapid sublimation of the pnictogen element during heating. The addition of the transport agent alone proved insufficient to control the growth process successfully. Based on these observations, we decided to start from different reactants and grow single crystals only by the CVT technique. Instead of using a mixture of pure elements, we started from high-pressure pre-reacted binary precursors mixed with the transport agent. This processing route proved successful in growing single crystals of SiP and GeAs. Structural and physical characterization As-grown crystals were characterized by X-ray diffraction (XRD), SEM-EDX analysis, Raman spectroscopy, and electrical transport. XRD patterns were acquired with a Philips X'Pert four-circle diffractometer and a Philips PW1820 powder diffractometer, both using Cu Kα radiation. Thanks to their better quality, only the GeP and GeAs crystals could also be measured in an Agilent SuperNova single-crystal diffractometer using Mo Kα radiation. SEM-EDX analyses were carried out in a LEO 438VTP electron microscope coupled to a Noran Pioneer X-ray detector. Raman spectroscopy was performed with a home-made micro-Raman spectrometer equipped with an argon laser (λ = 514.5 nm, spectral resolution ≈1 cm⁻¹) and a helium-flow cryostat working from 300 K to 4 K. The electrical resistivity was measured by the standard four-probe method using a Quantum Design PPMS (physical property measurement system) from 300 K to 2 K under magnetic fields from 0 to 7 T. Results and Discussion Pictures of single crystals of SiP, SiAs, GeP and GeAs grown by the HP method are presented in Fig. 1. As shown in these pictures, it is easier to grow the pnictides of germanium than those of silicon. Single crystals of GeP, GeAs and, to a lesser extent, SiAs are large and shiny, and present grey flakes that can be easily cleaved into thinner flakes. On the other hand, SiP crystals are significantly smaller, less shiny, and very brittle. The peculiarly lower quality of the SiP crystals is ascribable to several factors: SiP is the only monopnictide that crystallizes in the orthorhombic space group Cmc2$_1$ instead of the monoclinic C2/m, which is common to the other compositions. Moreover, according to the phase diagram reported by Liang et al., the temperature range suitable for the nucleation and growth of SiP is rather narrow. Besides, the isostatic pressure can only be increased over a limited range, to avoid the formation of other metastable high-pressure Si$_x$P$_y$ phases.
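As a worked example (not from the paper), the iodine charge implied by the stated molar ratio n(I$_2$)/n(group IV) = 0.05 can be computed for a hypothetical 0.25 g GeAs charge, within the stated 0.2-0.3 g range:

```python
# Iodine charge for CVT at n(I2)/n(group IV) = 0.05 (ratio from the text).
# The 0.25 g GeAs charge is a hypothetical example within the stated
# 0.2-0.3 g range.

M_GE = 72.63       # g/mol
M_AS = 74.92       # g/mol
M_I2 = 2 * 126.90  # g/mol

mass_geas = 0.25                          # g, example charge
n_geas = mass_geas / (M_GE + M_AS)        # mol of GeAs = mol of Ge
n_i2 = 0.05 * n_geas                      # transport-agent molar ratio
print(f"I2 to add: {n_i2 * M_I2 * 1000:.1f} mg")  # ~21.5 mg
```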
Congruent melting conditions were achieved for all systems: SEM-EDX analysis confirms the expected 1:1 chemical composition of the crystals, and neither composition fluctuations nor secondary phases were noticed in the core of the HP-grown bulk. SEM images, Fig. 1(e-f), evidence the lamellar structure of these crystals and their suitability for fabricating 2D devices. The same analysis on the GeAs and SiP crystals grown by the CVT method also confirms the 1:1 stoichiometry, with no traces of the transport agent I$_2$. The crystals cleave easily in the plane corresponding to the van der Waals gap (see Fig. 2), and clean powder diffraction patterns with a strong preferred orientation were obtained. This is confirmed by the θ-2θ scans shown in Fig. 3, which are compatible with the monoclinic symmetry C2/m with strong preferred orientation along the stacking direction. For GeP and GeAs, good-quality single crystals could be cleaved to confirm the crystal structure (the reciprocal-space reconstruction of the cleavage plane for GeP is shown as an inset in Fig. 3). The powder diffraction pattern for SiP presents broader reflections but agrees with the orthorhombic space group Cmc2$_1$ with preferred orientation along the stacking direction. The diffraction patterns obtained from the cleavage planes of the two different structures in the same Bragg-Brentano geometry are very similar. This is accounted for by the similar local symmetry in the planes and by the (Si,Ge)-(P,As) polyhedra, which order in similar chains in these planes, as described in ref.. All samples were also characterized by Raman spectroscopy. The narrow, well-defined Raman peaks, as well as the very low background level, prove the generally good quality of the crystals. Fig. 4 shows the Raman spectra of SiP, SiAs, GeP and GeAs (from top to bottom, respectively). The Raman shift in SiAs is in good agreement with the previous Raman study reported by Kutzner et al.. The three compounds with a monoclinic structure exhibit similar Raman spectra (similar groups of modes, red-shifting when going from lighter to heavier compounds). Unequivocally, the Raman shift of SiP is different from the others, confirming the different crystal structure of SiP. The bottom-most plot shows two patterns of GeAs crystals grown by different techniques (HP and CVT): the reproducibility of the Raman spectrum confirms the quality of the samples and the reliability of the processing routes. The indexation of the Raman modes of GeAs, GeP and SiP is not known at this stage; DFT calculations of the phonon spectra are in progress. The Raman study as a function of temperature shows no significant changes down to 5 K, indicating that no structural transitions occur and the symmetry is preserved. Electrical resistivity measurements were performed on several HP-grown single crystals of each composition from room temperature to 5 K. Small deviations in the resistivity from one sample to another of the same composition are consistent with the difficulty of correctly estimating the thickness of these small, layered crystals. As shown in Fig. 5, the four pnictides exhibit semiconducting behavior. The values of the electrical resistivity at room temperature are in the same range for the monoclinic compounds: GeP (0.02 Ω cm), SiAs (0.038 Ω cm), GeAs (0.071 Ω cm). On the other hand, for the orthorhombic SiP the electrical resistivity is four orders of magnitude larger (141 Ω cm), as shown in the inset of Fig. 5.
The high resistance of SiP prevents a complete characterization over the whole temperature range. The large electrical resistivity of SiP, as compared to the other members of the family, was reproducible over samples from various batches and is likely related to the structural difference between SiP and the other monopnictides. No magnetoresistance effect is observed when repeating the ρ(T) measurements under magnetic fields up to 7 T. Despite their evident semiconducting behavior, a simple thermally activated Arrhenius law does not fit the experimental resistivity of the monoclinic compounds. As a matter of fact, the dependence of the resistivity on temperature is very well described by a variable-range-hopping (VRH) model. According to this model, the resistivity obeys the law ρ(T) = ρ₀(T) exp[(T₀/T)^(1/(n+1))], where T₀ is a constant and ρ₀ depends on temperature as the square root of T. The factor n in the exponent indicates the dimensionality of the system, so that the exponent 1/(n+1) = 1/3 in a two-dimensional system. The plot of ln(ρT^(−1/2)) versus T^(−1/3), shown in Fig. 5 for the three compounds GeP, GeAs and SiAs, evidences a linear trend between T = 5 K and T = 120 K and agrees with variable-range-hopping conduction in a two-dimensional system (a numerical fitting sketch is given after this article). This is consistent with the 2D-like structure of these materials and confirms that the electrical conduction involves only the (Ge,Si)-pnictogen bonds in the quasi-2D slabs. This behavior is expected on the basis of the calculation of the electron localization function (ELF) in GeP reported by Lee et al., which predicts the absence of covalent bonds between the layers and strong covalent Ge-Ge and Ge-P bonds within the layer. The slope of the linear dependence of ln(ρT^(−1/2)) on T^(−1/3) shown in Fig. 6 is higher in SiAs than in the two Ge-pnictides (for which it is the same). This slope, that is, T₀ in the equation above, is proportional to the hopping distance and inversely proportional to the density of states at the Fermi level, N(E_F). The higher T₀ would indicate that in SiAs the density of growth-induced defects is lower. On the other hand, the equal slopes observed in the GeP and GeAs VRH linear regressions suggest that the origin of the localized impurity states in the gap cannot be ascribed to occupation vacancies or substitutional defects on the pnictogen site, but is more likely related to the slightly disordered local coordination of Ge predicted by the ELF calculations. Conclusions With the aim of searching for 2D layered semiconducting materials that can be exfoliated down to atomically thin layers, we have investigated the family of Si- and Ge-monopnictides (SiP, SiAs, GeP, GeAs). Bulk crystals of these compounds had rarely been grown, were of small size, and never allowed systematic investigations of their electronic properties; the crystal structure in which these materials crystallize was an object of controversy. In this work we have shown that high pressure (in the GPa range) favors the crystal growth of the four Si- and Ge-monopnictides. Those containing Ge, in particular, can be grown to a large size (up to 4-5 mm² in the cleavage plane). Crystals of SiP and GeAs could also be grown by the vapor transport technique, provided that high-pressure pre-reacted precursors were used and I$_2$ was used as the transport agent. We have confirmed the monoclinic space group C2/m for SiAs, GeP and GeAs, and the orthorhombic Cmc2$_1$ for SiP. All compounds exhibit semiconducting behavior.
Moreover, the electrical resistivity of three of them (SiAs, GeP and GeAs) is found to follow a 2D variable-range-hopping conduction mechanism at low temperature. These materials can be mechanically exfoliated, and the study of their properties as a function of flake thickness is in progress. Acknowledgements The authors gratefully thank J. Teyssier for his valuable help with the Raman spectroscopy experiments. This work was partially supported by the Swiss National Science Foundation through the "Sinergia" project n. CRSII2-147607.
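As promised above, a minimal numerical sketch (not part of the original article) of the 2D-VRH analysis: synthetic resistivity data following ρ = ρ₀√T·exp[(T₀/T)^(1/3)] are linearized as ln(ρT^(−1/2)) versus T^(−1/3) and fit to extract T₀. The parameter values are placeholders, not the paper's fitted values:

```python
# Sketch: extracting T0 from a 2D-VRH resistivity law
#   rho(T) = rho0 * sqrt(T) * exp((T0 / T)**(1/3))
# by linear regression of ln(rho * T**-0.5) against T**(-1/3).
# The synthetic T0 and rho0 below are placeholders, not fitted paper values.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(5.0, 120.0, 40)                  # K, range quoted in the text
T0_true, rho0 = 3.0e4, 1.0e-4                    # placeholder parameters
rho = rho0 * np.sqrt(T) * np.exp((T0_true / T) ** (1 / 3))
rho *= 1.0 + 0.02 * rng.standard_normal(T.size)  # 2% measurement noise

x = T ** (-1 / 3)
y = np.log(rho * T ** -0.5)
slope, intercept = np.polyfit(x, y, 1)           # slope = T0**(1/3)
print(f"T0 estimate: {slope**3:.3g} K (true {T0_true:.3g} K)")
```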
1. Field of the Invention The present invention relates to a frame for use in a vehicle seat and, more particularly, to a frame located in the front portion of a vehicle seat such as a rear seat cushion which can be directly placed on and assembled to a vehicle floor, and which is so adapted that, after it is assembled, an occupant can remove the seat from the vehicle floor by lifting it up with a finger. 2. Description of the Prior Art A conventional frame of this type, as shown in FIG. 1(A), comprises a substantially L-shaped member which includes a main body (a') formed of a band-like metal plate and an upwardly bent portion (a1') extending integrally from the front end portion of the main body (a'). Therefore, when a seat cushion using such a conventional frame is removed from a vehicle floor to which it has been assembled, it is necessary for an occupant to insert a fingertip between the vehicle floor and the bottom of the frame and put the fingertip against the inner end (a2') of the main body (a') in order to lift up the seat cushion, as shown by the two-dot chain lines. However, since the inner end (a2') of the main body (a') is formed with sharp edges or pointed corners, there is a possibility that the occupant's fingertip will be injured. According to another prior-art frame, an upwardly curved portion (a3') is provided at the inner end of the main body, against which the fingertip is abutted, as shown in FIG. 1(B). This is very effective in eliminating the above-mentioned drawback. However, when loads are applied to the seat or the seat cushion (that is, when an occupant is seated on the seat cushion) and a spring (b) is thus flexed, the flexed spring (b) abuts against the upwardly curved portion (a3'), which produces a strange noise. Also, there is a possibility that the spring (b) may be damaged at the portion that contacts the end of the upwardly curved portion (a3') when the seat is used for long periods of time. In FIGS. 1(A) and 1(B), reference character (c) stands for a top member, (d) designates a hog ring for fixing the end of the top member (c) to the frame (a') or (a"), and (b1) represents a clamp welded to the frame main body (a') or (a") to catch the end of the spring (b) on the main body (a') or (a"). The above-mentioned spring (b) is formed by folding an S-spring into a substantially V-shaped configuration in side elevation.
Class of exactly soluble models of one-dimensional spinless fermions and its application to the Tomonaga-Luttinger Hamiltonian with nonlinear dispersion It is shown that for some special values of the Hamiltonian parameters the Tomonaga-Luttinger model with nonlinear dispersion is unitarily equivalent to a system of noninteracting fermions. For such parameter values the density-density propagator of the Tomonaga-Luttinger Hamiltonian with nonlinear dispersion can be found exactly. In a generic situation the exact solution can be used as a reference point around which a perturbative expansion in orders of certain irrelevant operators may be constructed. I. INTRODUCTION The Tomonaga-Luttinger model is the most important model of one-dimensional interacting electron physics. The model describes two Fermi points of right-moving (R) and left-moving (L) electrons. The electrons interact through a two-body short-range potential coupling the chiral densities ρ_{L,R} = :ψ†_{L,R}ψ_{L,R}:, where the interaction function g(x) vanishes for |x| ≫ Λ⁻¹; the parameter Λ plays the role of the ultraviolet cutoff. The symbol :...: stands for normal ordering with respect to the noninteracting ground state. The kinetic energy of the model is a linear function of the electron momentum. This is a necessary condition for the exact solubility of H_TL: when the kinetic energy is linear, one can apply bosonization to find the excitation spectrum and the correlation functions of the model. However, there are situations in which we must extend the model by including a nonlinear dispersion term. Such a generalization immediately introduces serious complications: the bosons become interacting. Another extension of the system where bosonization fails is the quasi-one-dimensional conductor: an array of one-dimensional Tomonaga-Luttinger wires coupled by transverse hopping and interaction. Bosonization works poorly there, for the transverse hopping has no simple expression in terms of the Tomonaga-Luttinger bosons. Thus, some researchers began developing 'fermionic' approaches to the model. In a variety of contexts, one-dimensional electron systems have been described in terms of composite fermionic degrees of freedom. Closest to the topic of the present paper are the works which we briefly discuss below. In Ref. the authors studied the effect of the nonlinear dispersion v′_F on the density-density propagator of the Tomonaga-Luttinger Hamiltonian. They avoided using bosonization and instead worked with the bare fermionic degrees of freedom. The quasi-one-dimensional case was investigated in Ref., where a hybrid boson-fermion representation for the quasi-one-dimensional Hamiltonian was proposed. The latter representation describes the high-energy one-dimensional degrees of freedom in terms of the Tomonaga-Luttinger bosons, and the low-energy three-dimensional degrees of freedom as fermionic quasiparticles. Despite their success, both approaches are not systematic; rather, they were custom-made for the particular tasks at hand. A systematic method to handle the generalized problem was proposed by the author in Ref.. The main idea of the method is to apply a certain canonical transformation which kills the marginal (in the renormalization-group sense) interaction H_int. Such a transformation maps the original model onto a model of fermionic quasiparticles weakly interacting through an irrelevant operator. Due to the irrelevance of the quasiparticle interaction, the usual perturbation theory is applicable and the quasiparticles near the Fermi points are free.
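The display equations of this paper were lost in extraction. For orientation only, a standard form of the model consistent with the surrounding text is sketched below; the signs and normalizations are assumptions, not the paper's exact conventions:

```latex
% Hedged reconstruction (normalizations assumed, not the paper's displays):
\begin{align*}
  H_{\mathrm{TL}} &= H_{\mathrm{kin}} + H_{\mathrm{int}},\qquad
  H_{\mathrm{kin}} = \sum_{p=\pm1}\int\!dx\;
      {:}\,\psi^{\dagger}_{p}(-\mathrm{i}pv_{F}\partial_{x})
      \psi^{\vphantom{\dagger}}_{p}\,{:},\\
  H_{\mathrm{int}} &= \int\!dx\,dx'\;g(x-x')\,\rho_{L}(x)\,\rho_{R}(x'),\qquad
  \rho_{L,R} = {:}\,\psi^{\dagger}_{L,R}\psi^{\vphantom{\dagger}}_{L,R}\,{:},\\
  H_{\mathrm{nl}} &= -\,v'_{F}\sum_{p}\int\!dx\;
      {:}\,\psi^{\dagger}_{p}\,\partial_{x}^{2}\,
      \psi^{\vphantom{\dagger}}_{p}\,{:}.
\end{align*}
```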
This approach allows one to calculate the retarded density-density correlation function of the Tomonaga-Luttinger model with nonlinear dispersion to zeroth order in the quasiparticle interaction. The developments described above solve the long-standing problem of the Tomonaga-Luttinger model with nonlinear dispersion. The purpose of this paper is to refine the method: we will show that if, in addition to the nonlinear dispersion, one supplies the Tomonaga-Luttinger Hamiltonian with a certain type of irrelevant interactions, the Hamiltonian becomes exactly soluble. The system at and near this solubility point is studied in this paper. We proceed by noting that the classical formulation of the Tomonaga-Luttinger model with nonlinear dispersion is somewhat inconsistent. It is trivial to check that the scaling dimension of H_nl is equal to 3: indeed, the field operator has dimension 1/2 and the dimension of ∇ is unity, so the dimension of the whole operator is 3. In addition to H_nl, there is one and only one other operator whose scaling dimension is also 3. From the renormalization-group viewpoint, if we are interested in the effect of H_nl we must include this operator, H′_int, as well. Initially, it looks as if such a modification leads to substantial additional difficulties. Yet, despite this expectation, we will see that the operator H′_int does not make our problem more complicated at all. Moreover, if g, v_F, g′ and v′_F satisfy a special relation, the Hamiltonian is equivalent to a system of noninteracting fermions. Thus, the system becomes exactly soluble. The exact solubility gives us a chance to access qualities otherwise hidden from analytical investigation. For example, the spectrum of the Hamiltonian and the statistics of the excitations can be determined. Additionally, the density-density propagator can be calculated as well: it is proportional to the propagator of the free fermions. Of course, a generic model (which we denote as H_G, where 'G' stands for 'generic') deviates from the exactly soluble case. However, it can be shown that if such deviations are weak we can study the generic model with the help of perturbation theory in orders of (H_G − H_ex). In particular, the propagator of the exactly soluble Hamiltonian is the zeroth-order approximation to the propagator of H_G. The paper is organized as follows. The exactly soluble model is derived in Sect. II. Since the latter Section is very technical, the reader might get lost in the details; for those who are not specifically interested in all the minutiae there is Sect. III, where we give a concise nontechnical overview of the presented derivations. This Section is fairly self-contained, so that a reader ready to settle for a heuristic argumentation may skip Sect. II. The situation of weak deviation from the point of exact solubility is discussed in Sect. IV. Sect. V is dedicated to the application of the ideas of Sect. IV to calculations of observables. In Sect. VI we compare our approach with the results available in the literature. Technically involved calculations are relegated to the Appendix. II. EXACTLY SOLUBLE MODEL We start with the quadratic Hamiltonian H₀, in which the ellipsis stands for quadratic terms containing more space derivatives. We must assume their presence to avoid creating spurious Fermi points besides those located at q = 0. From the practical point of view these terms cause us no trouble; they will be discussed in Sect. IV, together with other terms whose scaling dimension is 4 and higher. The summation in eq.
runs over the chirality index p, which equals +1 for left-movers and -1 for right-movers. As we have mentioned above, the colons denote normal ordering with respect to the noninteracting ground state |0⟩; specifically, the normal order is defined by the defining equations of the model. Obviously, the Hamiltonian is trivially soluble. The spectrum of the fermionic excitations contains a contribution O(q³) that comes from the terms of eq. denoted by the ellipsis; these terms guarantee that ε_{pq} = 0 for q = 0 only. The density-density propagator for the model can be calculated without much effort. Now we define the unitary transformation U and transform with its help the operator H₀. The subscript 'ex' stands for 'exact', since the Hamiltonian H_ex is unitarily equivalent to the quadratic Hamiltonian, which implies the exact solubility of H_ex. The coefficients from eq. uniquely specify the operator U. We impose on them conditions which mean that U, when acting in the Fock space of the Hamiltonian, cannot create highly excited single-fermion states. Loosely speaking, the action of U is confined to the region |q| < Λ near the Fermi points. This guarantees that the interactions of H_ex have the proper ultraviolet cutoff Λ. In order to find H_ex explicitly we need commutation rules which are proven in Ref.. Next, let us study how the operator U acts on different observables. With the help of eq. it is possible to show how ρ_{pq} transforms for q ≠ 0. For the zero modes, the transformation rule is a consequence of the definition, in which U commutes with N_p. It is convenient to define auxiliary symbols in which asterisks stand for convolutions of two functions. Due to eq. we have u_q = 1 and v_q = 0 for |q| ≫ Λ. Therefore v̄(x) is well defined, whereas ū(x) is singular, although it can be expressed through a well-defined function. Both v̄(x) and ū(x) are even functions of x; they vary slowly on the scale of 1/Λ and vanish for |x| ≫ 1/Λ. We express the action of U on ψ_p(x) accordingly. The obtained result for U†ψU can be used to find U†H_kin U. We prove in the Appendix formulas which express the kinetic energy density in terms of the density operators. Since we know how the latter are transformed under the action of U, we determine U†H_kin U easily, as the derivation below demonstrates. First, we introduce a notation which, in the bosonization framework, would define the normal ordering of the bosonic operators. In this paper we do not introduce the Tomonaga-Luttinger bosons; thus, the symbols :...: must be understood as a shorthand notation, without further hidden field-theoretical meaning. The reason for using normal-ordered products is that such objects are well defined analytically at any values of x, y and z, while unordered products have singularities when arguments coincide (e.g., the unordered product ρ_p(x)ρ_p(y) has a second-order pole at x = y). The new notation allows us to reformulate eq. in a more compact way; in the bosonization context this equality corresponds to the bosonization of the kinetic energy. Using eq. and eq. we evaluate the expression under the limit sign and normal order the operators according to the definition. It is easy to show with the help of eq. that a simple identity holds for any function f(x); therefore eq. may be written differently. Collecting all terms together, one finds an expression in which c is a dimensionless non-universal c-number. In eq., contributions proportional to ρ_{p0} or (ρ_{p0})² are O(L⁻¹); consequently, they may be neglected.
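For reference (my summary, not one of the paper's displays), the power counting invoked in the Introduction can be written out explicitly; the operator forms shown are assumptions consistent with the surrounding text:

```latex
% Power counting (summary; operator forms assumed):
% dim[psi] = 1/2, dim[d/dx] = 1, dim[rho] = dim[psi^+ psi] = 1.
\begin{align*}
  \dim\!\big[{:}\psi_{p}^{\dagger}\partial_{x}^{2}
      \psi_{p}^{\vphantom{\dagger}}{:}\big]
      &= \tfrac12 + 2 + \tfrac12 = 3
      &&\Rightarrow\ H_{\mathrm{nl}}\ \text{has dimension } 3,\\
  \dim\!\big[{:}\rho_{p}^{2}{:}\,\rho_{-p}\big]
      &= 3\dim[\rho_{p}] = 3
      &&\Rightarrow\ \text{the companion operator } H'_{\mathrm{int}}.
\end{align*}
```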
The additive constant (the term proportional to c) can be neglected as well. With this in mind we determine U†H_kin U: the expression in the curly brackets has to be transformed, and its second term can be rewritten. The term h⁽⁴⁾_int is a sum of irrelevant operators whose scaling dimensions are 4 and higher; that is why we put such a superscript on h. We establish the result by using eq. to transform :ρ_p²:; the :ρ_Lρ_R: term of eq. can be rewritten as well. Ultimately, we obtain an explicit expression for U†H_kin U. It is clear from eq. that, up to the linear combination of irrelevant operators h⁽⁴⁾_int, the classical Tomonaga-Luttinger model is unitarily equivalent to the model of free fermions with the Hamiltonian H_kin. This was shown in Ref.. At first sight it appears that our ability to map the Tomonaga-Luttinger Hamiltonian onto free fermionic quasiparticles is incompatible with the known fact that there is no quasiparticle pole in the Tomonaga-Luttinger single-electron propagator. The resolution of this paradox is simple: the overlap of the physical electron state and the quasiparticle state is zero. Thus, the single-quasiparticle propagator does have a pole, but such an object has no physical significance. Because of the orthogonality catastrophe, the physically important single-electron propagator is not connected to the quasiparticle propagator in the Fermi-liquid manner G_TL(q,ω) = Z G_qp(q,ω) + G_reg(q,ω). The Tomonaga-Luttinger single-electron propagator was derived with the help of this representation in Ref.. It is interesting to establish the relation between the Tomonaga-Luttinger Hamiltonian parameters v_F, g and the free-fermion parameter ṽ_F. If we define g₀ = g_q|_{q=0} = 4πṽ_F u₀v₀, we can write a formula which is very well known in bosonization theory: it gives the dependence of the plasmon velocity on the bare Fermi velocity v_F and the interaction constant g₀. This interpretation does not contradict eq., for the plasmon mode in the fermion system with the Hamiltonian H_kin propagates with the velocity ṽ_F. Our next task is to find U†H_nl U. We prove in the Appendix an expression which may be familiar to those who work with bosonization: it is the bosonic form of H_nl. As a consequence, one derives the transformed operator; the expression under the integration sign is transformed using our result for U†:ρ_p²:U. The cubic term has to be normal ordered; next one substitutes eq. into eq. and collects similar terms. Using eq. and the definition eq. it is easy to show that, up to terms O(L⁰), a compact result follows. Now we need to expand the third-order binomial and process the terms one by one; schematically, :(ρ_p + w_p)³: = :ρ_p³: + 3:ρ_p² w_p: + 3:ρ_p (w_p)²: + :(w_p)³:, where w_p denotes the smeared-density term generated by the transformation. We use the same trick which was exploited while obtaining eq.: as with eq., it is possible to convince ourselves that the expressions in curly brackets have a higher scaling dimension than that of :ρ_p³:. Indeed, dimension counting shows that the operators under the summation sign have scaling dimensions of 5 and higher, which is bigger than 3, the dimension of :ρ_p³:. Acting in a similar fashion, we establish analogous results for the second, third and fourth terms of eq., where the expressions in square brackets have scaling dimension 5 or higher. Combining the above results, we obtain h⁽⁵⁾_int; the superscript in this notation reminds us that h⁽⁵⁾_int is a series of operators whose lowest scaling dimension is 5. Further, combining eq. and eq.
we write the transformed Hamiltonian. Therefore, using the above formula and eq., one finds the expression for H_ex = U†H₀U, including the irrelevant remainder. Since at v′_F ≠ 0 the particle-hole symmetry is broken, the bare value of the chemical potential is shifted: in the formula above the chemical potential acquires a value μ ∝ c v′_F Λ², whereas μ = 0 in the original Hamiltonian. It is important to realize that v′_F,ex, g′_ex and g in eq. are not independent: by dividing eq. by eq. we obtain a constraint, and we establish the exact-solubility relation. The deduced results can be summarized in the form of a theorem. Theorem 1. Consider the Tomonaga-Luttinger model with nonlinear dispersion and the irrelevant operators h_int whose scaling dimensions are 4 and higher. Such a model is unitarily equivalent to a system of free fermions, provided that the parameters v′_F,ex, g′_ex, v_F and g₀ satisfy eq.. The theorem trivially implies that the Tomonaga-Luttinger model whose parameters satisfy this relation is exactly soluble. This allows one to find the density-density propagator. Since the ground state |0⟩_TL of the Hamiltonian is given by eq., we derive the propagator directly. Here, by the Green's function definition, ρ_{pq}(τ) = exp(τH_ex) ρ_{pq} exp(−τH_ex); that is, the evolution is controlled by H_ex. The action of U† = U⁻¹ on the density and evolution operators follows; note the minus sign in this transformation law. It differs from eq. since here we apply U† rather than U. Observe as well that the evolution of the density operators on the right-hand side of the above equation is set by the quadratic Hamiltonian. The resulting expectation value is easy to find: it is just a product of two free single-quasiparticle propagators corresponding to the quadratic Hamiltonian. Therefore, the Matsubara propagator for the Tomonaga-Luttinger model is proportional to D₀, where D₀ is the propagator for the Hamiltonian H₀ specified by eq.. At low |q| this formula may be cast in a way traditional for the bosonization literature; here K is the usual Tomonaga-Luttinger liquid parameter. The retarded propagator D^ex_q and the spectral density B^ex_q = −2 Im D^ex_q follow. Due to the exact solubility, the spectral density is zero everywhere outside the interval ṽ_F|q| − |ṽ′_F|q² < |ω| < ṽ_F|q| + |ṽ′_F|q². III. NON-TECHNICAL OVERVIEW OF THE DERIVATIONS The presentation of the previous Section was rather technical. Below we give a less formal account of the derivations performed so far. It is obvious that the major sources of technical complications are the zero modes and the irrelevant operators h. At the same time, they are the least important parts of the derived Hamiltonian. Thus, it is useful to redo the calculations of Section II paying no attention to the zero modes and the highly irrelevant operators. That way we will be able to capture easily the most important features of the algebraic manipulations without losing too much significant information. Since the material presented in this Section is a refashioning of the discussion given above, the reader should be prepared for some degree of redundancy. Our technical goal is to start with the Hamiltonian of the free fermions with nonlinear dispersion, H₀, and transform it into H_ex. We begin the execution of this program by writing H₀ in bosonic form; the validity of this rewriting can be checked with the help of eq. and eq.. We already pointed out in the previous Section that this equation may be thought of as the bosonic form of H₀. As in Sect. II, we want to transform H₀ with the help of U and find U†H₀U. The operator U acts as a Bogoliubov rotation on the density operators; thus we may write (see eq., eq.
and eq.): U†:ρ_p(x)³:U = :(u₀ρ_p(x) + v₀ρ_{−p}(x))³: + ..., where the ellipsis stands for (a) additive constants, (b) zero-mode terms, (c) irrelevant operators with high scaling dimensions, and (d) operators which reduce to zero modes upon integration over x. As explained above, such terms do nothing but cloud our view; thus, we will neglect them. Applying these rules to eq., the transformed Hamiltonian is derived, where the ellipsis again stands for the zero modes and highly irrelevant operators. One notes that the term :ρ_p²: is proportional to ip:ψ†_p∇ψ_p:, and thus its coefficient renormalizes the Fermi velocity. Likewise, the coefficient of ρ_Lρ_R becomes g₀ (see eq.), the coefficient in front of :ρ_p²:ρ_{−p} is g′_ex (see eq.), and the coefficient of :ρ_p³: is v′_F,ex (see eq.). Consequently, we obtain an equation which coincides with eq. up to zero modes and highly irrelevant operators. The coefficients g′_ex and v′_F,ex satisfy eq.. This concludes our nontechnical overview of the derivation of the exactly soluble Hamiltonian. Transcending the many technical details discussed in Section II, we discover that the method of calculating H_ex is in fact fairly simple. Another question we want to discuss here is the meaning of the exact solubility line eq.. Consider the generic Hamiltonian H_G, whose parameters do not satisfy any specific relation. It is possible to find a transformation V of the form eq. which diagonalizes the first term of the above equation. For the detailed calculations one should consult Ref. or our derivation of eq.; the heuristic explanation, however, is easy. Consider the 'bosonic' form of H_TL. Application of V produces a rotated Hamiltonian, and by adjusting u_q and v_q we can kill the interaction term ρ_pρ_{−p}. For this to happen, u_q and v_q must satisfy a condition which can be written in two equivalent forms. If this condition is met, the Hamiltonian becomes diagonal, with the product ρ_pρ_p expressed in terms of the fermionic operators with the help of eq.. There exists another transformation V′ of the same general form, but with a different set of coefficients, which diagonalizes the sum (H_nl + H′_int). For this to take place, the parameters must satisfy a relation involving the q = 0 value of the transformation coefficient. The derivations of the two above relations are similar to the proofs of the earlier equations: the idea is to adjust the q = 0 coefficient of V′ in such a way as to remove the :ρ_p²:ρ_{−p} terms. Once the latter terms are removed, what remains is proportional to :ρ_L³: + :ρ_R³:. This expression is proportional to H_nl and is, therefore, diagonal. If the values of v_F, v′_F, g and g′ are arbitrary, then V ≠ V′; consequently, the whole Hamiltonian H_G cannot be diagonalized with the help of a canonical transformation of the form eq.. However, if we demand that V = V′, i.e., that the two sets of coefficients coincide, such a diagonalization becomes possible and the system becomes soluble. One can check that the exact solubility condition may be derived from the requirement that the two q = 0 coefficients coincide. Thus, one can say that the exact solubility condition guarantees the simultaneous diagonalization of the whole Hamiltonian. IV. GENERIC TOMONAGA-LUTTINGER MODEL WITH NONLINEAR DISPERSION In this Section we discuss how the above findings can be applied to a generic model of one-dimensional spinless fermions. The most general Hamiltonian contains a remainder δH which stands for operators whose scaling dimensions are 4 and higher. Here no special relation among the parameters is assumed; neither do we suppose that δH is equal to the linear combination of h⁽⁴⁾ and h⁽⁵⁾ from eq.
To avoid clutter we do not show the chemical potential term explicitly, for it can be accounted for trivially. It is convenient to write the Hamiltonian H_G in a form in which the term δH contains those operators whose scaling dimensions are 4 and higher and which are not absorbed into H_ex. The constant δv′_F and the operator δH measure the deviation of the Hamiltonian H_G from the exact solubility line. We want to apply a transformation V of the form eq. to H_G such that the transformed Hamiltonian contains no marginal interaction operator H_int; the remaining irrelevant operators can be treated perturbatively. We already know how the first two terms of eq. transform under the action of V. The unknown entity is V†δHV. To discuss the action of V on δH it is useful to introduce a notation for the operators which exhaust the list of operators whose scaling dimension is lower than 4, so that we can write the most general expression for the transform. The coefficients in this expansion are functions of the parameters which specify V. Now we must consider two cases: (i) when the coefficient γ_int vanishes and (ii) when γ_int ≠ 0. Let us start with (i). The transformation V then acts on H_G in such a way that the expression in the round brackets corresponds to V†(δv′_F h_nl)V; the result can be reformulated in a more compact way. The Hamiltonian H̃₀ is quadratic in the fermionic field operators: it is similar to H₀ but has renormalized parameter values. Likewise, δH̃ is the renormalized version of δH. We see that a deviation from the exact solubility line eq. in δv′_F has two effects: (a) a renormalization of ṽ_F and ṽ′_F, and (b) the introduction of interactions between the quasiparticles (the operator H_irr). Fortunately, these interactions are irrelevant in the renormalization-group sense; thus, their influence can be accounted for perturbatively. For that reason we introduce a notation which suggests that the transformed operator V†H_G V represents the Hamiltonian of weakly interacting quasiparticles. Next we study the case (ii). This situation is not much more difficult than (i). From the field-theoretical point of view, γ_int ≠ 0 means that irrelevant operators contribute to the renormalization of the marginal-operator coupling constant g. The correct way to address the problem is to fix the q = 0 coefficient of V by a generalized version of eq.: its value must be determined by the requirement that the quasiparticle Hamiltonian (whose coefficients are functions of this parameter) contains no :ρ_p²:ρ_{−p} term. It is easy to demonstrate that this condition is equivalent to eq. if γ_int = 0. Once the coefficient is determined in this way, the expression attains the form of eq.. Thus, we prove Theorem 2. A generic Tomonaga-Luttinger model with nonlinear dispersion is unitarily equivalent to a system of free quasiparticles with irrelevant interactions. V. CALCULATIONS OF OBSERVABLES What does Theorem 2 imply for the evaluation of the physical properties of H_G? The irrelevance of the corrections H_irr means that the ground-state properties of the Hamiltonian H_G can be approximated by the ground-state properties of H̃₀; should we be interested in corrections, those could be found perturbatively. Further, for an observable O(x) the correlation function may be expressed through the quasiparticle degrees of freedom. If the observable O is such that V†OV has a very complicated form, the practical evaluation of eq. may be difficult to carry out. An example of such a situation is O ≡ ψ_p: at present the author does not know how the single-fermion propagator for the case of nonzero v′_F can be calculated.
On the other hand, if the object V†OV has some simple representation in terms of the quasiparticle degrees of freedom, one can evaluate G_O with the help of the usual perturbative expansion in orders of H_irr. For example, consider the calculation of the density-density propagator, O ≡ (ρ_L + ρ_R). In that instance V†OV is a linear combination of the density operators, and the zeroth-order expression for the propagator follows immediately. The corrections in orders of H_irr can be determined with the help of field-theoretical techniques. Note that the actual calculation of the O((H_irr)²) corrections may be quite nontrivial. Indeed, numerical evidence and heuristic analytical arguments indicate that D_q of the generic model diverges more strongly than D^ex_q. This suggests that the perturbation theory for D_q requires a reliable resummation technique. However, we are not always interested in the propagator itself. In fact, there are situations when a few low-order diagrams contributing to D_q would suffice. For example, consider some quantity r proportional to an integral over D_q. Unlike the propagator itself, r remains finite. This is so because the divergences of D_q due to the O((H_irr)²) corrections are very weak. Thus, the integration of D_q can be performed successfully, with a finite result whose quasiparticle-interaction correction δr (|δr| < ∞) may be non-analytic. Regarding the quasiparticle-interaction corrections due to H_irr, we would like to make a remark. The operator (g′ + γ_int)h′_int is the least irrelevant part of H_irr. Consequently, we expect that at small momenta the corrections due to (g′ + γ_int)h′_int are stronger than those due to δH. Therefore, the most important contributions to the low-momenta corrections are ∝ (g′ + γ_int)². Thus, there is a hypersurface in the parameter space of the generic one-dimensional model where the low-momenta corrections weaken drastically. This hypersurface is fixed by the equation g′ + γ_int = 0. We can interpret this as a manifestation of the exact solubility line eq.. As a practical application, one can calculate this hypersurface for the spinless Hubbard-like model in the limit of small interaction u ≪ t. Using the appropriate decomposition, one can calculate v_F, v′_F, g and g′ in terms of t(i) and u. As soon as this task is done, the equations for those quantities must be substituted into eq. and the limit u → 0 taken. This gives a condition in which a is the lattice constant and the values of v′_F and v′_F,ex are functionals of t(i). What are the phenomenological manifestations of this condition? As explained above, on the surface fixed by the latter equation, and in the limit u ≪ t, the scattering of the quasiparticles is drastically reduced. As a consequence, the propagator D_q calculated for a Hamiltonian satisfying the condition differs qualitatively from the propagator of the generic Hamiltonian: for example, the divergence of the propagator due to quasiparticle scattering might be weaker. Such a prediction may be tested numerically. VI. COMPARISON WITH OTHER WORK The issue of nonzero v′_F has a long history. Traditionally, it has been approached within the framework of bosonization. The disadvantage of bosonization is that the perturbative expansion in orders of v′_F is highly singular. To circumvent this difficulty, the authors of Ref. used the random phase approximation to find D_q. Their spectral function differs qualitatively from eq.; we believe that such a result is an artefact of the random phase approximation. Alternatively, in several recent papers the use of 'fermionic' methods was advocated.
The idea behind them is that away from the mass surface the perturbation theory in g is permissible without bosonization. That is, the authors chose to work with the original fermions, but at a price: the perturbation-theory results cannot be applied straightforwardly near the mass surface. The proposed method was used to calculate the high-frequency tail of the propagator. The latter technique is complementary to the exact solution: it is easily implemented at high frequency, but it is problematic at the mass surface, where the exact solution works. In Ref. a heuristic argument was put forward suggesting that the spectral function B of H_G diverges along some line in the (q, ω) plane. This finding is corroborated qualitatively by the numerical work. How does this compare against the calculations presented here? Obviously, our spectral function B_ex of the exactly soluble model is finite. This does not contradict Ref.: the propagator of the generic model H_G is affected by the quasiparticle interaction, which induces the extra divergence. Thus, the propagator calculation requires an adequate resummation technique, as we pointed out in the previous Section. We may hypothesize that the argumentation of Ref. may be applied directly to the quasiparticle Hamiltonian H_qp. Such a 'marriage' of the two approaches possesses a clear advantage: both marginal and irrelevant interactions are accounted for to all orders. Further speculation is postponed until a thorough investigation is carried out. To conclude, we have shown that the Tomonaga-Luttinger model with specific parameter values is equivalent to free fermions; at this point the Tomonaga-Luttinger Hamiltonian is exactly soluble. Away from exact solubility, the model may be mapped onto a system of fermions with irrelevant interactions. Although under such circumstances the model is not exactly soluble, it can still be investigated with the help of perturbation theory. VII. ACKNOWLEDGEMENTS The author is grateful for the support provided by the Dynasty Foundation, by RFBR through grant 03-02-16626, and by the Russian federal program "Leading scientific schools" NSh-1694.2003.2. VIII. APPENDIX In this Appendix we prove certain relations required by the presentation in the main text. Combining the results for G and the derivatives of F, we prove eq., which, in turn, proves eq..
Q: Suggestions for a photo sales & printing vendor? I'm beginning to seriously think about selling my photos online ... as prints, greeting cards, etc. Can anyone recommend a site that does a good job of handling the sales transactions, printing, & shipping? I don't mind giving the vendor a cut of the sale (fairly large, I would imagine) ... but I do want something that gives me a lot of control over how the sales gateway looks. Ideally I would like a site that lets me put a simple 'purchase this photo' button or link on my own site. I'm not looking for a stock photo site.

A: My SmugMug experience as a SmugMug Pro user:

Pros
- A lot of control over your gallery's settings and prices.
- Excellent customer support. This high level of support on the web is uncommon.
- The customer's experience is simple and straightforward (as long as you don't overwhelm them with different sizes and options).
- Tons of cool features with the Pro account, like coupons and packages.

Cons
- Almost too much control over your gallery's settings and prices sometimes. It can sometimes be a little overcomplicated.
- Not necessarily that great looking right out of the box.

SmugMug handles all of the transactions. The customer checks out, SmugMug takes their money, sends the stuff to the lab and ships it to the customer. SmugMug takes 15% of your profit. I don't think you can have a buy link on your photo that integrates with their shopping cart. I don't think SmugMug is really all that great as simply a photo-sharing site. Flickr is better. But SmugMug is pretty darn good for the pro selling prints.

A: I've used both Photoshelter and SmugMug. Here are some quick thoughts on the two:

Photoshelter's Strengths:
- great online image delivery system for electronic use
- easier integration with standalone websites (they have some GREAT integration with WordPress sites that use a Graph Paper Press theme)
- (subjective) generally seen as higher-end and more professional than SmugMug

SmugMug's Strengths:
- they seem to be innovating faster at the moment and announcing new features quicker
- larger product and print selection from their printing vendors
- (subjective) their shopping cart/ordering process seems a bit easier for average (non-techie) folks who purchased my products/prints
- cheaper

A: I was looking for a similar service to easily provide prints for sporting events that I've shot. I checked out sites like SmugMug and PhotoShelter, but they didn't quite have what I was looking for, as I already had an extensive self-hosted photo gallery and did not want to re-upload a lot of photos to another site. Then I heard about Fotomoto, which has integrated nicely with my existing site so far. You basically get a couple of lines of Javascript to add to your pages and then Fotomoto handles the rest - the shopping cart, the printing, and the shipping.
Evolution of the Surface Temperature of Pianists' Arm Muscles Using Infrared Thermography Musculoskeletal disorders are very frequent among musicians. Diagnosis is difficult due to the lack of objective tests and the multiplicity of symptoms. Treatment is also problematic and often requires that the musician stop playing. Most of these disorders are inflammatory in nature and therefore involve temperature changes in the affected regions. Temperature measurements were recorded with an infrared camera. In this paper we present an overview of the temperature measurements made in the arms of 8 pianists during regular piano practice sessions.
134Cs for the treatment of cervical cancer. One projected use of 134Cs is for the treatment of cancer of the uterine cervix. The isodose curves for cancer of the cervix commonly have the shape shown in Fig. 1a and b. As can be seen, the tumour volume is not the same shape as the high dose treatment volume. With moderate energy radionuclides it should be possible to use filtration to try to fit the isodose shape to the tumour volume using a single central line source with a suitable variation of linear activity along its length.
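The filtration idea in this passage, shaping the linear activity along a single central line source so that the isodose surface fits the tumour volume, can be illustrated numerically. The following is a hedged sketch under a crude point-kernel assumption (inverse-square falloff only, no attenuation, scatter or anisotropy), with made-up numbers; it is not the paper's dosimetry:

    import numpy as np

    def dose_rate(r, z, source_zs, linear_activity, gamma_const=1.0):
        """Dose rate at cylindrical position (r, z) from a line source
        discretized into point sources, assuming inverse-square falloff
        and no attenuation (a deliberate oversimplification)."""
        dz = source_zs[1] - source_zs[0]
        d2 = r**2 + (z - source_zs)**2          # squared distance to each segment
        return gamma_const * np.sum(linear_activity * dz / d2)

    # A central 134Cs line source whose linear activity is shaped to
    # broaden the isodose near its ends (illustrative profile only).
    zs = np.linspace(-3.0, 3.0, 61)             # cm along the source
    activity = 1.0 + 0.5 * (np.abs(zs) / 3.0)**2
    print(dose_rate(r=2.0, z=0.0, source_zs=zs, linear_activity=activity))

Evaluating dose_rate over a grid of (r, z) points and contouring the result would reproduce, qualitatively, the kind of isodose shaping the passage describes.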
Targeting NPY, CRF/UCNs and NPS Neuropeptide Systems to Treat Alcohol Use Disorder (AUD). BACKGROUND The term Alcohol Use Disorder (AUD) covers different disease states related to the recurrent use of alcohol and linked to clinically relevant impairment, disability and failure to perform major responsibilities in different realms. Many neurotransmitter systems are involved in the phases or states of alcoholism, from the reward mechanisms associated with binge intoxication to the stress and anxiety linked to relapse and withdrawal. Some neuropeptides play a key role in the control of anxiety and stress, and bear a close relationship to the pathological mechanisms underlying alcohol addiction. Among them, Neuropeptide Y (NPY), Corticotropin-releasing factor (CRF)/Urocortins and Neuropeptide S (NPS) cross-talk, and are responsible for some of the maladaptation processes that the brain exhibits during the progression of the disease. METHOD In this study, we review the literature mainly focused on the participation of these neuropeptides in the pathophysiology of AUD, as well as on the use of antagonists designed to investigate signaling mechanisms initiated after ligand binding and their connection to the biochemical adaptation events coupled to alcohol addiction. The possibility that these systems may serve as therapeutic targets to mitigate or eliminate the harm that drinking ethanol generates is also discussed. CONCLUSION The peptide systems reviewed here, together with other neurotransmitter systems and their mutual relationships, are firm candidates to be targeted to treat AUD.
With regard to postoperative stability of the shoulder joint, the results yielded by the various arthroscopic refixation techniques are not as good as those obtained after open operation. The aim of this paper is to analyze the reasons for this and to present a new procedure which, it is hoped, will improve the arthroscopic results. The main reason for the high postoperative recurrence rate after arthroscopic joint stabilization seems to be that refixation of the capsule is not performed at the level of the lesion, but above it, because of the position of the subscapularis tendon. Another reason for the poor results of arthroscopy is that the enlarged capsule cannot be shortened as desired, because the glenoid labrum is used for refixation of the capsule. To improve the arthroscopic results we suggest basic changes to the procedure in cases with severe damage to the soft tissue at the antero-inferior aspect of the glenoid and/or in cases with an enlarged capsule: refixation of the capsule should not be carried out from inside the joint but from outside the capsule. To this end, we applied the so-called extraarticular screwing technique. Refixation is achieved by inserting small cannulated titanium screws by means of a special screwdriver. No metal is placed inside the joint. This technique requires a new portal, namely the so-called antero-inferior portal, which is placed 1.5 cm inferior to the coracoid process. If the precautionary measures described are duly observed, the musculocutaneous nerve cannot be damaged. The technique allows stable refixation of the capsule at the desired length by placement of one or two small screws in the center of the Bankart lesion. Our preference is based on experience with 83 patients with recurrent shoulder instability who were operated on by arthroscopic techniques.
Characterization of multiple sclerosis lesions with distinct clinical correlates through quantitative diffusion MRI

Highlights
- Macroscopic and microscopic diffusion properties discriminate between MS lesion types.
- The number and volume of lesions with larger diffusion changes are associated with worse clinical outcomes.
- Diffusion MRI provides useful information on the pathological heterogeneity in plaques.

Introduction
Multiple sclerosis (MS) is a chronic inflammatory autoimmune disease of the central nervous system (CNS) that is characterised by the presence of focal lesions, and damage to the normal-appearing white matter (NAWM) and the grey matter. There is substantial heterogeneity in the pathological changes among MS lesions, with different patterns of demyelination and a variable degree of neuroaxonal damage having been described. In addition, while active plaques are most often found at early disease stages, smoldering, inactive and shadow plaques subsequently predominate. Chronic active lesions are associated with a more aggressive disease evolution and indeed, differences in the severity of demyelination, remyelination and neuroaxonal damage could explain why some patients recover completely from relapses yet in others, their disability deteriorates more rapidly. The changes in lesions and in the NAWM can be visualised through conventional magnetic resonance imaging (MRI), yet they are poorly associated with the clinical phenotype and physical disability, partly reflecting the failure to characterise the pathological nature of tissue injury in MS. However, diffusion MRI-based techniques can reveal quantitative and more specific information about the mechanisms associated with tissue changes. Macroscopic diffusion properties have been studied extensively in MS lesions using diffusion tensor imaging (DTI) features, such as the reduction in fractional anisotropy (FA) relative to the NAWM. Unfortunately, DTI findings are strongly influenced by a complex intravoxel fibre architecture, which limits the ability to accurately estimate the different pathophysiological features of the disease (Filippi and Rocca, 2011). Recently, several microstructure imaging techniques have been proposed to compute distinct signal contribution patterns with the aim of providing greater sensitivity and specificity toward the underlying damage mechanisms. Several mathematical representations from biophysical models have been exploited to understand the contribution of restricted intracellular diffusion components. The estimation of local diffusion properties based on the multi-compartment spherical mean technique (MC-SMT) has successfully decomposed the distinct signal components into microscopic tissue features. Thus, this approach is only sensitive to fibre composition, whereas DTI metrics depend on intravoxel fibre orientation and distribution as well as microstructure. The MC-SMT model computes a multi-compartment domain, encompassing extra-axonal and intra-axonal water diffusion spaces, and microscopic diffusion tensor maps to estimate distinct local tissue properties. In MS, MC-SMT seems able to distinguish chronic black holes, and thus lesions with greater tissue damage, from hyperintense T2 lesions, and this approach can detect reductions in the apparent axon volume fraction in the spinal cord (SC). Therefore, SMT-derived tissue features could be used as biomarkers to quantify the heterogeneous mechanisms involved in MS lesion pathogenesis in vivo.
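Since the DTI scalars named above (FA, MD, RD, AD) recur throughout the paper, a minimal sketch of how they are computed from the three diffusion-tensor eigenvalues may help; these are the standard textbook formulas, not code from the study itself:

    import numpy as np

    def dti_scalars(l1, l2, l3):
        """Standard DTI scalar measures from the tensor eigenvalues (l1 >= l2 >= l3)."""
        md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
        ad = l1                                        # axial diffusivity
        rd = (l2 + l3) / 2.0                           # radial diffusivity
        fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                     / (l1**2 + l2**2 + l3**2))        # fractional anisotropy
        return fa, md, rd, ad

    # Illustrative healthy white-matter eigenvalues, in 10^-3 mm^2/s.
    print(dti_scalars(1.7, 0.4, 0.3))

The paper's described behavior of B-type lesions (lower FA, higher RD) corresponds to eigenvalues l2 and l3 rising toward l1 in this parameterization.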
Considering that MS lesions can display different degrees of damage, we hypothesized that the combination of several diffusion properties may be useful to characterize the severity of the changes in these lesions. Thus, measuring such variability could provide insights into the progression of disability and cognitive decline in patients with MS. Accordingly, the main aims of this study were to characterise MS lesions through macroscopic and microscopic diffusion information, to classify them in terms of the degree of damage, and to determine the clinical relevance of the different types of lesions.

Participants
We prospectively recruited a cohort of 59 MS patients at the MS Unit of the Hospital Clinic of Barcelona, 53 relapsing-remitting (RR) and 6 secondary progressive (SP) patients according to the 2010 McDonald criteria. Patients had to be relapse-free and free of corticosteroids in the month prior to testing. The Ethics Committee of the Hospital Clinic of Barcelona approved the study, and all participants provided their signed informed consent. Demographic and clinical data were obtained from each participant, which included their score on the Expanded Disability Status Scale (EDSS) and its sub-scores for pyramidal, brainstem and cerebellum function (Kurtzke, 1983). Their Multiple Sclerosis Severity Score (MSSS) was also obtained and a cognitive assessment was performed using the Brief Repeatable Battery of neuropsychological tests (BRB-N). All raw values were transformed into z-scores according to published Spanish normative data. The use of moderate-efficacy (interferon beta, glatiramer acetate, teriflunomide and dimethylfumarate) or high-efficacy (fingolimod, natalizumab, rituximab, ocrelizumab or cladribine) disease-modifying therapies was registered.

Delineation mask and topography of MS lesions
MS lesions were manually delineated on the T1 3D-MPRAGE image, supported by a co-registered FLAIR image, using JIM software (Jim version 6.0, Xinapse Systems, http://www.xinapse.com/). We characterised each lesion independently through its cluster size and defined its location automatically. We established the lesions in which > 5% of their volume was in direct contact with the lateral ventricles as "periventricular lesions", lesions with > 20% of their volume touching or within the cortex as "juxtacortical lesions", and brainstem or cerebellar lesions as "infratentorial lesions" if > 50% of their volume was placed in the brainstem or cerebellum. Finally, we considered the remaining lesions as "lesions located elsewhere in the deep WM". Lesions smaller than 27 mm³ were excluded from the analysis (these thresholds are sketched in code below).

Processing multi-shell diffusion MRI data
The diffusion imaging data were preprocessed using a combination of FSL and MRtrix software. The low b-value shell was used to compute DTI metrics with FSL's dtifit command by a linear least-squares fitting method, and all the diffusion shells were employed to map the microstructural diffusivity. Afterwards, we applied an inverse transformation matrix using boundary-based registration to place MS lesions into the diffusion space (Greve and Fischl, 2009).
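The topography rules above are simple volume-overlap thresholds, so they can be stated compactly in code. This sketch encodes only the stated thresholds; how the overlap fractions are computed, and the precedence when a lesion satisfies several rules, are assumptions, since the text does not specify them:

    def classify_lesion(vol_mm3, frac_ventricle, frac_cortex, frac_infratentorial):
        """Assign a lesion to a topographic class from the fractions of its
        volume overlapping each reference mask (thresholds as stated in the
        text; the rule order is an assumption)."""
        if vol_mm3 < 27:
            return None                      # too small: excluded from the analysis
        if frac_ventricle > 0.05:
            return "periventricular"
        if frac_cortex > 0.20:
            return "juxtacortical"
        if frac_infratentorial > 0.50:
            return "infratentorial"
        return "deep WM"

    # Example: a 50 mm^3 lesion with 30% of its volume touching the cortex.
    print(classify_lesion(50.0, 0.01, 0.30, 0.0))   # -> "juxtacortical"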
For each patient, the following measures were assessed for each individual MS lesion, and in the global NAWM: location (periventricular, juxtacortical, infratentorial or deep WM); lesion volume; DTI-derived metrics (FA, mean diffusivity: MD, radial diffusivity: RD, and axial diffusivity: AD); SMT microscopic diffusion coefficients (μFA, μMD, μRD and μAD); and multi-compartment SMT microscopic diffusion coefficients (intra-neurite volume fraction: ν_in; intrinsic diffusivity; extra-neurite transverse microscopic diffusivity; and extra-neurite microscopic mean diffusivity). The macroscopic and microscopic diffusion properties were selected to perform a k-means cluster analysis to further extract the specific diffusion indices able to classify MS lesion types.

Data-driven clustering of MS lesion types
We based the classification of MS lesions on diffusion imaging. We want to highlight here that clustering techniques may create artificial groups of data that may not be replicated in new data. To minimise this possibility, we only considered those sets of diffusion MRI measurements that led to clusters that were independently and consistently replicated in new data for periventricular, juxtacortical, brainstem, cerebellar and deep WM MS lesions, as defined by a prediction strength > 0.8 (Tibshirani and Walther, 2005). The "prediction strength" is a parameter proposed by Tibshirani and Walther that assesses how well the clustering obtained from one random half of the overall sample of lesions coincides with the clustering obtained from the other half of the sample. Specifically, for each set of diffusion tensor metrics and microscopic diffusion coefficients, we applied a standard k-means algorithm with k = 2 (i.e., clustering the data into 2 groups), performing a separate centroid-based classification for two random halves of the MS lesions and thereafter calculating the prediction strength. We are aware that there might be more than two types of lesions but, for simplicity, we decided to only explore the two-type scenario: we understood that should there be three or more types of lesions, they might very well group as two main types. To avoid spurious results related to unfortunate divisions of the overall sample of lesions into two sets, we repeated this process 500 times; each time, the overall sample of lesions was divided randomly into two parts and the prediction strength was assessed. Subsequently, we averaged the corresponding 500 estimates of the prediction strength. Finally, to evaluate the significance of the prediction strength, we repeated these calculations after randomly assigning the diffusion characteristics of each lesion to other, different lesions, in order to create the distribution of prediction strengths under the null hypothesis (i.e., that diffusion characteristics are not clustered). The resulting null distribution showed which prediction strengths could be expected by chance and thus allowed us to estimate the p-value.

Relationships between clustering and clinical variables
Application of the selected clustering recognised two types of MS lesions, type A and type B. We assessed whether the overall number or the volume of each type of lesion was correlated with the variables of disability. To do that, we fitted linear models with clinical disability as the dependent variable, and independent variables composed of the number or volume of each type of lesion, age and gender.
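The split-half procedure described above is algorithmic enough to sketch. The following is a minimal illustration of the Tibshirani-Walther prediction strength using scikit-learn's k-means; the exact settings used in the study (initialisation, restarts, tie handling) are assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    def prediction_strength(X, k=2, n_splits=500, seed=0):
        """Mean prediction strength over random split-halves of the lesion
        feature matrix X (lesions x diffusion features)."""
        rng = np.random.default_rng(seed)
        strengths = []
        for _ in range(n_splits):
            idx = rng.permutation(len(X))
            a, b = np.array_split(idx, 2)
            km_a = KMeans(n_clusters=k, n_init=10).fit(X[a])  # "training" half
            km_b = KMeans(n_clusters=k, n_init=10).fit(X[b])  # "test" half
            labels_from_a = km_a.predict(X[b])  # test points labelled by training centroids
            per_cluster = []
            for j in range(k):
                members = np.where(km_b.labels_ == j)[0]
                n = len(members)
                if n < 2:
                    continue
                # fraction of same-cluster pairs that A's centroids also co-assign
                co = sum(labels_from_a[p] == labels_from_a[q]
                         for i, p in enumerate(members) for q in members[i + 1:])
                per_cluster.append(co / (n * (n - 1) / 2))
            if per_cluster:
                strengths.append(min(per_cluster))
        return float(np.mean(strengths))

Shuffling the rows of each feature column independently before calling prediction_strength would give one draw from the null distribution the text describes.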
Given that the residuals of some numeric variables may not follow a normal distribution, we assessed statistical significance using the Freedman-Lane permutation procedure, a common permutation test in neuroimaging studies due to its robustness to gross deviations from normality (a sketch of the procedure follows below). For binary variables of disability, we fitted logistic linear models, in which again the dependent variable was the variable of disability, and the independent variables were the number or volume of lesion types, age and gender. For the sake of comprehensiveness, we reassessed the correlations that proved to be statistically significant, on this occasion performing the analysis separately for the MS lesions at each brain location.

Results
Clinical, demographic and cognitive data were collected from the 59 MS patients included in the study (as summarised in Table 1); the cohort had a mean age of 44.7 (±9.3) years and 12.8 (±9.16) years of disease duration. Most patients were diagnosed with the RRMS form of the disease (90%).

Characterization and classification of the MS lesions based on their diffusion properties
We analysed 1,236 lesions in total, with a mean brain lesion volume of 11.37 (±15.30) cm³. We computed the mean DTI values and the microscopic properties of all lesions, both globally and at the distinct locations, as well as in the NAWM (Table 2). These diffusion imaging properties were weakly correlated, and we discarded the AD and μAD measures given their small variation in the lesions (38% of the values corresponded to the maximum value of this measure). Two sets of diffusion MRI indices at macro- and micro-scale were computed to identify different MS lesion profiles, the clustering of which showed prediction strengths > 0.8, irrespective of the lesion localization. The first set was distinguished on the basis of the parameters FA, RD, μFA and ν_in (Fig. 1), while the second set was defined by the same parameters in the same directions, except that μFA was replaced by μRD (which was higher in B-type lesions). The groups of lesions defined by the two clusters were 99% identical and as such, we decided to limit the study to the first cluster, as this had a slightly higher prediction strength (0.931). Compared to A-type lesions, B-type lesions had lower FA, μFA and ν_in, yet higher RD, irrespective of the lesion location (P < 0.001, Table 3 and Supplementary Table 1). Moreover, the numbers of both lesion types were similar at all the locations, except in the juxtacortical regions, where B-type lesions predominated (P < 0.001). Most patients had both types of lesions, with between 40 and 70% of B-type lesions (Supplementary Fig. 1). Moreover, 27% of patients had 80% or more of one specific type of plaque, with a predominance of A-type lesions in 15% of the subjects, while 12% had mostly B-type lesions (see Fig. 2). However, there was no correlation between the overall numbers of A- and B-type lesions. By contrast, 64 and 63% of the lesion volume corresponded to B-type lesions in the whole brain and periventricular areas, respectively, suggesting that plaques of this type were larger (Table 3, Supplementary Fig. 2). The proportional volume of B-type lesions, but not their number, was higher in the SPMS patients (mean percentage 90%) than in the RRMS patients (mean percentage 61%; 95% CI −0.48 to −0.10; P < 0.01).
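For readers unfamiliar with the Freedman-Lane scheme mentioned in the methods, a minimal sketch follows: residualise the outcome on the nuisance covariates, permute those residuals, and refit the full model each time. The number of permutations and the two-sided test are assumptions here, not the study's stated choices:

    import numpy as np

    def freedman_lane(y, x, Z, n_perm=5000, seed=0):
        """Freedman-Lane permutation p-value for one regressor of interest x,
        with nuisance covariates Z (e.g. age and gender)."""
        rng = np.random.default_rng(seed)
        Z1 = np.column_stack([np.ones(len(y)), Z])       # nuisance design + intercept
        X_full = np.column_stack([Z1, x])

        def t_of_interest(yy):
            beta, *_ = np.linalg.lstsq(X_full, yy, rcond=None)
            resid = yy - X_full @ beta
            sigma2 = resid @ resid / (len(yy) - X_full.shape[1])
            cov = sigma2 * np.linalg.inv(X_full.T @ X_full)
            return beta[-1] / np.sqrt(cov[-1, -1])       # t-statistic of x

        # Residualize y on the nuisance-only model, then permute those residuals.
        gamma, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        fitted, resid = Z1 @ gamma, y - Z1 @ gamma
        t_obs = t_of_interest(y)
        exceed = sum(abs(t_of_interest(fitted + rng.permutation(resid))) >= abs(t_obs)
                     for _ in range(n_perm))
        return (exceed + 1) / (n_perm + 1)

    # Synthetic usage: outcome driven by x plus two nuisance covariates.
    rng = np.random.default_rng(1)
    Z = rng.normal(size=(80, 2)); x = rng.normal(size=80)
    y = 0.5 * x + Z @ np.array([0.3, -0.2]) + rng.normal(size=80)
    print(freedman_lane(y, x, Z, n_perm=999))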
Association between MS lesion type and the clinical outcome
At the patient level, we detected several significant correlations between the overall number of B-type lesions and the clinical variables (P < 0.05, controlling for the effects of age and gender: Table 4). By contrast, we did not find any significant correlation between A-type lesions and the clinical data. Thus, a higher number of B-type lesions was associated with a higher MSSS, worse cerebellar function and worse cognition (Bonferroni-corrected P threshold = 0.004). Juxtacortical and cerebellar lesions had the strongest correlation values. However, we failed to detect significant correlations with the clinical data when the number of periventricular B-type lesions was considered. In terms of lesion volume, the volume of B-type lesions was correlated with cerebellar function and cognitive disability. In particular, stronger correlations with clinical disability were found for periventricular lesions (Table 5). However, there were no significant correlations with the EDSS, brainstem and pyramidal functional systems, verbal fluency, visual memory deficits or the type of treatment after a Bonferroni correction.

Discussion
In this study, we demonstrate that MS lesions can be classified into two types based on the severity of the changes in terms of macroscopic DTI parameters and microscopic diffusion properties. We found that most patients had both types of lesions, although in nearly a quarter of the cohort there was a clear predominance of a given lesion type. B-type lesions are thought to present more severe tissue damage and, in terms of number and volume, the study demonstrates that their presence is related to a worse clinical evolution. Specifically, a larger number of B-type lesions in the juxtacortical, cerebellar and deep WM areas was more strongly associated with disability, as was a larger volume of these lesions in periventricular regions. All in all, the results support the usefulness of diffusion MRI to obtain information in vivo on the heterogeneity of the pathological changes in MS plaques. Our findings indicate that the combination of two diffusion-based models, DTI (FA and RD) and MC-SMT (μFA and ν_in), which can capture how water moves in the tissue over distinct timescales, enables two distinct types of MS lesions to be classified with high predictive value. Lesions with larger modifications in diffusion imaging properties are crucial to characterize the two MS lesion types (A-type lesions show higher FA, μFA and ν_in, and smaller RD values, while B-type lesions display lower FA, μFA and ν_in, and higher RD values; Supplementary Fig. 3). B-type lesions are thought to be associated with more severe demyelination and axonal damage. Therefore, the classification proposed would provide information regarding inflammatory destruction or the capacity for neurorepair in a given patient, potentially representing a useful biomarker for phase II clinical trials. In previous studies, focal MS lesions displayed very heterogeneous DTI abnormalities, with a persistent decrease in FA values and an increase in the other diffusion coefficients compared to the NAWM (Inglese and Bester, 2010). FA values preferentially reflect changes in axon density, whilst RD is a measure sensitive to myelin injury. However, these diffusion features alone are not sufficiently specific to estimate the severity of damage.
Moreover, their association with clinical disability is mild to moderate due to the large variability of DTI indices and the complex processes lesioned tissues undergo. Conversely, μFA and ν_in provide information regarding more specific features at the microstructural level, depicting restricted anisotropic diffusion in the intracellular water domain. Accordingly, although the MC-SMT model does not specifically allow the quantification of non-monoexponential behavior to describe the deviation of the diffusion displacement from a Gaussian profile, significant decreases in μFA and ν_in have been demonstrated for different degrees of brain and SC tissue damage in MS compared with normal WM tissue. Furthermore, such microscopic features seem able to distinguish MS lesions with more axonal damage among the lesions that are hyperintense in T2-weighted sequences, identified as black holes in T1 spin-echo sequences. When compared with the observation of black holes, the use of quantitative diffusion metrics increases the accuracy and reproducibility of the results. Thus, our findings highlight the complementarity of DTI and MC-SMT metrics to define the characteristics of MS lesions. The proportion of A- and B-type lesions was similar across the brain, except in juxtacortical areas, where B-type lesions predominate. In periventricular regions, most of the lesion volume corresponds to B-type lesions, and such regional differences could reflect the nature of MS lesions in terms of their formation and evolution. This hypothesis is supported by the predominance of B-type lesions in SPMS patients (mean = 90%). Nevertheless, further longitudinal studies will be required to decipher the chronicity of those lesions and to assess whether they are related to slowly expanding plaques. Previous studies showed that focal MS lesions, a hallmark of the disease, are weakly correlated with clinical disability and disease severity. However, our findings demonstrate that the number and volume of specific B-type lesions were strongly associated with a more severe disease evolution (correlation coefficients between 0.4 and 0.67), and with worse physical (mainly related to cerebellar functions) and cognitive disability. The lack of correlation with the EDSS after correcting for multiple comparisons could be influenced by the strong influence of SC integrity on the EDSS, a factor that was not assessed here. Specifically, the number and volume of B-type lesions in juxtacortical and cerebellar areas, and their volume in periventricular regions, were the features most strongly correlated with disease evolution and disability. Indeed, periventricular damage may affect large white matter tracts, such as the cingulum and frontoparietal connections, potentially contributing to the cognitive deficits in patients with MS. Previous studies reported results consistent with the present findings, correlating brain lesions with worsening clinical disability, particularly for T1-hypointense lesions. Together, the presence of lesions with larger diffusion changes could reflect a destructive pattern of chronically demyelinated axons and more neuroaxonal damage, which is related to a more severe disease evolution. This study has several limitations that should be considered in future research.
First, our findings should be validated through histological studies to characterize the underlying tissue changes in the A- and B-type lesions, and their correspondence with active, chronic or chronic active lesions. Second, diffusion metrics are highly dependent on acquisition and scanner parameters, although they are very reproducible in scan-rescan experiments. Consequently, it is important to harmonize the techniques for clinical trials that involve different sites and protocols. Finally, we did not evaluate the specific microscopic and macroscopic changes in new T1-enhancing lesions, in black holes or over time; thus, longitudinal studies would be useful to understand the temporal evolution of MS lesions and their predictive value in a prospective manner.

Conclusions
Microscopic features of the intracellular water domain (μFA and ν_in) and macroscopic DTI-derived metrics (FA, RD) together contribute to defining the amount of damage within MS lesions. In turn, these features provide a specific pattern of lesion severity that helps understand the mechanisms underlying clinical disability and cognitive impairment in MS patients. Accordingly, the classification of lesion types has the potential to ensure MS patients receive more specific and better-targeted therapies.

(Table legend: continuous variables are given as the mean (standard deviation). All diffusion metrics showed significant differences (P < 0.001) between A- and B-type lesions. FA = fractional anisotropy; RD = radial diffusivity; μFA = microscopic fractional anisotropy; ν_in = intra-neurite volume fraction; Δ = difference. **Units of mm²/s × 10⁻³.)

Funding Statement
The author(s) disclose receipt of the following financial support for the research, authorship and/or publication of this article. This work was funded by: Proyectos de Investigación en Salud (FIS 2015, PI15/00587, SL, AS; FIS 2018, PI18/01030, SL, AS), integrated into the Plan Estatal de Investigación Científica y Técnica y de Innovación I+D+I, and co-funded by the Instituto de Salud Carlos III-Subdirección General de Evaluación and the Fondo Europeo de Desarrollo Regional (FEDER, "Otra manera de hacer Europa"); the Red Española de Esclerosis Múltiple (REEM: RD16/0015/0002, RD16/0015/0003, RD12/0032/0002, RD12/0060/01-02); Ayuda Merck de Investigación 2017; the Premi Fundació Societat Catalana de Neurologia 2017; TEVA SLU; and the Fundació Cellex. EL-S holds a predoctoral grant from the University of Barcelona (APIF). AP-U is supported by the Medical Research Council (grant numbers MR/K501256/1, MR/N013468/1) and the Fundación Alfonso Martín Escudero. This work was also supported by a Miguel Servet Research Contract (CPII19/00009) to J.R. and Research Project PI19/00394 from the Plan Nacional de I+D+i 2013-2016, the Instituto de Salud Carlos III-Subdirección General de Evaluación y Fomento de la Investigación and the European Regional Development Fund (FEDER, 'Investing in your future'). The funding bodies had no role in the design and performance of the study; the collection, management, analysis and interpretation of the data; the preparation, revision or approval of the manuscript; or the decision to submit the manuscript for publication.

Declaration of Competing Interest
The author(s) declare the following potential conflicts of interest
with respect to the research, authorship and/or the publication of this article: EM-H, AS, ES and JR have no conflicts of interest to declare. MA holds equities in Bionure and Goodgut; EL-S received funding from the University of Barcelona; CM receives funding from the Instituto de Salud Carlos III, with a Grant for Health Research (PFIS, FI19/00111); NS received compensation for consulting services and speaker honoraria from Genzyme-Sanofi, Almirall, Novartis, Merck and Biogen; MS received speaker honoraria from Genzyme, Novartis and Biogen; IP-V holds a patent for an affordable eye-tracking system to measure eye movement in neurological diseases and stock options in Aura Innovative Robotics; YB received speaking honoraria from Biogen, Novartis and Genzyme; AS received compensation for consulting services and speaker honoraria from Bayer-Schering, Merck-Serono, Biogen-Idec, Sanofi-Aventis, TEVA, Novartis and Roche; SL received compensation for consulting services and speaker honoraria from Biogen Idec, Novartis, TEVA, Genzyme, Sanofi and Merck.

Fig. 2. Example of two patients presenting a predominant lesion type. Most lesions were classified as A-type (in green) in the patient in the left column, while the majority of lesions were B-type (in red) in the patient in the right column: FA = fractional anisotropy; RD = radial diffusivity; μFA = microscopic fractional anisotropy; ν_in = intra-neurite volume fraction. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

(Tables 4 and 5 residue: the flattened rows list the direction and significance of correlations between the number or volume of B-type lesions at each location and the clinical variables MSSS, cerebellar functional system, high-efficacy therapy, global cognitive score, zAttention, zFluency and zVerbal memory; entries were positive or negative at P < 0.05 or n.s., and those marked * were significant after a Bonferroni correction. MSSS = Multiple Sclerosis Severity Score; EDSS = Expanded Disability Status Scale; n.s. = not statistically significant.)
Spontaneous Speech of Patients With Dementia of the Alzheimer Type and Mild Cognitive Impairment This article discusses the potential of three assessments of language function in the diagnosis of Alzheimer-type dementia (DAT). A total of 115 patients (mean age 65.9 years) attending a memory clinic were assessed using three language tests: a picture description task (Boston Cookie-Theft picture), the Boston Naming Test, and semantic and phonemic word fluency measures. The results of these assessments were compared with those of the clinical diagnosis, including the Global Deterioration Scale (GDS). The patients were classified by ICD-10 diagnosis and GDS stage as without cognitive impairment (n = 40), mild cognitive impairment (n = 34), mild DAT (n = 21), and moderate to severe DAT (n = 20). The hypotheses were (a) that the complex task of picture description could identify language disturbances more readily than specific language tests, and (b) that examination of spontaneous speech could help to identify patients with even mild forms of DAT. In the picture description task, all diagnostic groups produced an equal number of words. However, patients with mild or moderate to severe DAT described significantly fewer objects and persons, actions, features, and localizations than patients without or with mild cognitive impairment. Persons with mild cognitive impairment had results similar to those without cognitive impairment. The Boston Naming Test and both fluency measures were superior to the picture description task in differentiating the diagnostic groups. In sum, both hypotheses had to be rejected. Our results confirm that DAT patients have distinct semantic speech disturbances whereas they are not impaired in the amount of speech produced.
Technical Field The present disclosure relates to an adhesive composition and a production method thereof. Background Adhesive compositions have been used in various applications in the related art. For example, Japanese Laid-Open Patent Publication No. 2004-197048 discloses an aqueous adhesive used for adhesion between a decorative sheet and a base material that includes an ethylene-vinyl acetate copolymer based emulsion (A), an anionic urethane resin emulsion (B) and an aqueous urethane resin (C), and contains 1 to 50 parts by weight of the aqueous urethane resin (C) per 100 parts by weight of the solid content of the sum of the ethylene-vinyl acetate copolymer based emulsion (A) and the anionic urethane resin emulsion (B). Japanese Laid-Open Patent Publication No. 2010-180290 discloses, as an adhesive composition that is preferably used in color display devices such as liquid crystal displays, an adhesive composition including 100 parts by weight of a urethane resin (A) and 5 to 30 parts by weight of a polyisocyanate hardening agent (B) added thereto, wherein the urethane resin (A) is a polyurethane (Ax) obtained by a reaction between a polyisocyanate (a1), a polyol (a2) and a dioxycarboxylic acid (a3) having two hydroxyl groups and one carboxyl group in a single molecule, or a polyurethane urea (Ay) obtained by further reacting a diamine (a4) with the polyurethane (Ax), and wherein the urethane resin (A) has an acid value of 20 to 80 mgKOH/g. Large quantities of rubber materials such as EPDM, nitrile rubber and butadiene rubber have long been used for components of various products. Also, recently, large quantities of polyolefin based resins have been used for household appliances and automobile plastic parts due to their good resin properties, such as workability, water resistance and oil resistance, and also due to their cost effectiveness. Attempts have been made to apply coatings to the surfaces of rubber materials and polyolefin based resin molded articles, and to form laminates with other resins, in order to give an increased added value to rubber materials and polyolefin based resins. However, there is a problem in that rubber materials and polyolefin based resins have poor adhesiveness to general coatings and other resins. Neither Japanese Laid-Open Patent Publication No. 2004-197048 nor Japanese Laid-Open Patent Publication No. 2010-180290 discloses or suggests improving the adhesiveness of hard-to-bond adhesive materials such as rubber materials and polyolefin based resins. As a technique related to hard-to-bond adhesive materials, Japanese Laid-Open Patent Publication No. 2002-080686 discloses an aqueous dispersion having improved adhesiveness and adhesion to a non-polar base material such as a polyolefin resin, particularly polypropylene, obtained by combining a polyurethane based resin (II) with an aqueous dispersion (I) in which a block copolymer comprising a polymeric block (A) that is mainly composed of an olefin based monomeric unit and a polymeric block (B) that is composed of 2 to 100 mol% of vinyl based monomeric units having a carboxyl group or a carboxylic anhydride group and 98 to 0 mol% of another vinyl monomer that can be copolymerized with the vinyl based monomeric units is dispersed in an aqueous solution of a basic substance at 0.05 equivalents or more relative to the carboxyl group or the carboxylic anhydride group.
The present disclosure is directed to providing a novel adhesive composition having good adhesiveness to hard-to-bond materials such as rubber materials and polyolefin based resins.
Terrific! #NeverTrump Candidate Evan McMullin Says Republicans Are Racist The #NeverTrump Republican candidate Evan McMullin told reporters this week that Republicans are racist. What an excellent candidate! McMullin is in the race to swing Utah to Hillary. The #NeverTrumpers just LOVE him! McMullin explained that he, like other Republicans, has heard for years from Democrats that the GOP is racist. He always rejected that kind of thinking. He rejected it, that is, until the last few years, when he worked in a senior staff position for the GOP in the House of Representatives. “I spent a lot of time in the Republican Party believing that that was something Democrats and liberals would say, [people] who weren’t interested in really understanding who we were,” McMullin said. “But I have to say in the time that I spent in the House of Representatives and leadership and in senior roles there, I realized that no, they’re actually right. And Donald Trump made it ever more clear that there is a serious problem of racism in the Republican Party. That is the problem. Not conservative ideals. Racism is not conservatism. And that’s what I’m talking about. That’s the problem.” #NeverTrump Candidate Evan McMullin Fundraising Off of Mitt Romney’s Email List
The alien ascidian Styela clava now invading the Sea of Marmara (Tunicata: Ascidiacea)

Abstract During the implementation of a large project aimed at investigating the benthic community structures of the Sea of Marmara, specimens of the invasive ascidian species Styela clava were collected on natural substrata (rocks) at 10 m depth at one locality (Karamürsel) in İzmit Bay. The specimens were mature, containing gametes, indicating that the species had become established in the area. The Sea of Marmara seems to provide suitable conditions for this species to survive and form proliferating populations.

Introduction
The Sea of Marmara is unique in having two stratified water layers separated by a halocline, generally developing at 20-25 m depths. The upper layer is composed of brackish water originating from the Black Sea, while the lower layer comprises marine water from the Aegean Sea. This sea has been under great anthropogenic pressure, mainly due to the crowded cities situated along its coastlines (including İstanbul) and the presence of many industrialized regions, in particular İzmit Bay. Pollution from different sources has caused hypereutrophication (Aral 1992) and occasionally anoxia in some areas. Moreover, the establishment of some invasive alien species in the basin has made conditions worse. The mussel and oyster beds in the sea have been largely destroyed by the invasive asteroid Asterias rubens and the gastropod Rapana venosa. Invasive alien species are known to have great impacts on native communities and often cause complete changes to ecosystems that cannot be rectified. The eastern Mediterranean Sea is one of the regions known to host high numbers of alien species, due to its proximity to the Suez Canal and a high rate of maritime traffic. This region includes 75% of the total number of alien species reported for the whole of the Mediterranean Sea. Seventeen alien ascidian species have been reported from the Mediterranean Sea, some of which, such as Distaplia bermudensis Van Name, 1902, Microcosmus squamiger Michaelsen, 1927, Botrylloides violaceus Oka, 1927 and Didemnum vexillum Kott, 2002, have become invasive in some areas, especially in the western Mediterranean and Adriatic Sea (Mastrototaro and Brunetti 2006; Occhipinti-Ambrogi 2000). Lessepsian invaders such as Symplegma brakenhielmi, Herdmania momus, Microcosmus exasperatus Heller, 1878 and Phallusia nigra Savigny, 1816 densely colonize both natural habitats and man-made structures in the coastal regions of the eastern Mediterranean (in the Levantine Sea), gradually extending their distributions to the north and west, including the Aegean Sea (Thessalou-Legaki 2012; Shenkar and Loya; Çinar 2014). During a TUBITAK project (number 111Y268), specimens of Styela clava Herdman, 1881 were encountered and photographed at one locality, Karamürsel, located in İzmit Bay. This sessile and solitary ascidian species is native to the north-western Pacific but now occurs worldwide due to anthropogenic transport (Carlisle 1954; Millar 1960; Holmes 1976; Christiansen and Thomsen 1981; Lambert and Lambert 1998; Hayward and Morley 2009). It is mainly characterized by its tunic shape and long stalk. This species was first reported in the Mediterranean Sea in June 2005, in the Bassin de Thau (France), and was thought to have been transferred to the area by shellfish transfer (Davis and Davis 2008).
This species was also recorded in the Black Sea, in a species list of the macrozoobenthos associated with a mussel facies inside the Constanta Sud-Agigea Seaport on the coast of Romania (Micu and Micu 2004). The species generally colonizes areas of shallow water and is especially abundant 10-200 cm below the sea surface, occasionally inhabiting hard substrate at depths of 15-40 m (Lützen 1999). However, Kott found it at 100 m depth in Shark Bay (Western Australia). The aim of this paper is to report this species in the Sea of Marmara and to give additional information regarding its morphological and ecological characteristics.

Material and methods
Specimens of Styela clava were collected at one locality (Karamürsel, K15, İzmit Bay, 40°41'38"N-29°36'26"E) in the Sea of Marmara, at 10 m depth on rocks, via scuba diving on 01 October 2012 (Figure 1). The animals were randomly sampled and fixed with 4% formaldehyde in the field. In the laboratory, specimens were rinsed with tap water and preserved in 70% ethanol. Specimens were deposited at the Museum of the Faculty of Fisheries, Ege University (ESFM).

The description of Styela clava
Four specimens (registration code: ESFM-TUN/2012-1) were collected in the Sea of Marmara from station K15, at 10 m depth on rocks. Specimens are stalked and sessile. The body is more or less cylindrical, tapering to the stalk. The body of the largest specimen is 5.5 cm long and 2 cm wide; the smallest specimen is 3.2 cm long and 1.8 cm wide. The stalk reaches 3.5 cm in length (Figure 2A-C). The siphons are short and placed anteriorly; the branchial siphon is more obvious than the atrial siphon. The external body surface is leathery and wrinkled, with irregular rounded conical warts (Figure 2B, C). The body color is white in fixed specimens (Figure 2C), but chocolate-brown when alive (Figure 2A, B). The apertures have alternating longitudinal pale brown and dark brown stripes (Figure 2A). The branchial tentacles are simple. There are four branchial folds curved inwards on each side of the posterior part of the body. The branchial sac has numerous rows of straight stigmata. The gut is placed on the left side of the branchial sac, forming a simple vertical loop. The gonads are long and parallel to each other, consisting of a central ovarian tube with testis follicles on the body wall along each side of the ovary (Figure 2D). In the largest specimen, gonads are placed on both sides (2 on the left side and 4 on the right side), each consisting of a long ovary surrounded by male follicles (Figure 2D).

The epibionts of Styela clava
The specimens of Styela clava from the Sea of Marmara were generally covered by sediment and some epibionts, such as Diadumene cincta Stephenson, 1925, Spirobranchus triqueter (Linnaeus, 1758) and green algae. The first of these is also known to be an alien species, probably transferred to the area by shipping from the north-east Atlantic. Lützen reported various epibionts on S. clava in the North Sea, from tufts of red or green algae to ascidians, including smaller specimens of the same species as well as Ascidiella aspersa (Müller, 1776) and Botryllus schlosseri.

The density of Styela clava
During many scuba dives and snorkeling trips performed along the Sea of Marmara in September and October 2012 (30 stations), this species was only encountered at station K15 (İzmit Bay, Karamürsel), where only 10 specimens were observed at a depth of 10 m on natural substrata (rocks). The density of the species was approximately 1 ind. m⁻².
The dominant macrozoobenthic species sharing the same habitat with S. clava were Mytilus galloprovincialis Lamarck, 1819, S. triqueter, D. cincta, R. venosa and A. rubens. The latter three species are also invasive alien species in the Sea of Marmara. Styela clava has been known to become extremely dominant in some areas, attaining a density of 1000 ind. m⁻² in European waters (Lützen 1999). Micu and Micu reported a density of 4 ind. m⁻² and a biomass (dry weight) of 22.8 g m⁻² in the Agigea seaport in the Black Sea (Romanian coast).

The survival requirements of Styela clava
This species has a club-shaped body that can reach a length of 200 mm and attaches to hard substrata by an expanded membranous plate. It reaches maturity at a size of 5 to 7.5 cm, about ten months after settlement. It is a hermaphroditic species and has a pelagic lecithotrophic larva that reportedly rarely travels more than a few centimeters, but probably travels much farther due to currents. It is known to tolerate temperatures ranging from -2 to +23 °C and salinities from 20 to 32 psu. At this particular station within the Sea of Marmara, the temperature was near the maximum known tolerance limit of the species (22.6 °C) and the salinity was 23 psu. Similar environmental conditions were also encountered at other sampling stations in the Sea of Marmara, but no animals of this species were found at those stations. These findings might suggest that the species was found at its area of first establishment. In the Sea of Marmara, the summer surface water temperature and salinity at the sampling site (İzmit Bay) were reported to be 25 °C and 23 psu, respectively. The winter surface water temperature of İzmit Bay is around 7 °C. This suggests that there are no physico-chemical barriers in the region to hinder the spread of the S. clava population. The Sea of Marmara specimens had ripe gonads, indicating successful reproduction in the area. At this stage it can be concluded that this species is established and has formed a proliferating population in the area.

The vector for introduction of Styela clava
This species has been introduced to different parts of the world's oceans, including the east Atlantic coast, Australia, New Zealand, both coasts of North America (Lambert and Lambert 1998; Lambert 2003; Wonham and Carlton 2005), the Mediterranean Sea (Davis and Davis 2008) and the Black Sea (Micu and Micu 2004). As Davis and Davis summarized, there are two possible mechanisms of ascidian introduction: shellfish transportation (juvenile ascidians) or transport on ships' hulls and in sea chests (mature ascidians) (Coutts and Dodgshun 2007). As there is no shellfish farming in the Sea of Marmara, the only possible vector for the introduction of this species to the area was shipping. The sampling station (K15, Karamürsel) is located in İzmit Bay, which is one of the most industrialized areas in Turkey, with intense international ship traffic. The donor area for the Sea of Marmara population of this species is unknown at present. It might have been transferred from an area in the Black Sea or the Mediterranean, or from outside the Mediterranean. Molecular analyses of the specimens might shed more light on where this population originated.

Impacts of Styela clava
The effects of Styela clava on soft-bottom sediment assemblages in Port Phillip Bay were reported to be negligible.
However, Bourque et al. reported that it caused a decline in mussel production in Canada, as it densely covered mussel lines. It has been reported to be an aggressive invader, affecting native fauna by replacing the native competitive dominants in the benthic community (Clarke and Therriault 2007). The economic impact of this species on shellfish production in Canada alone was estimated at between $34 and $88 million (Canadian) per year. Experiments by Osman et al. indicated that S. clava is capable of greatly reducing the local settlement rate of oysters by preying on their planktonic larvae. The introduction and dense establishment of S. clava in England coincided with a sharp decline in the population of the local ascidian Ciona intestinalis (Lützen 1999). Styela clava has effectively replaced the indigenous Pyura haustor and Ascidia ceratodes, which were the dominant ascidian species in southern California. However, in the Mediterranean Sea, the population level of this species did not increase in the Bassin de Thau in the three years after its discovery in the area, and it has not greatly affected the shellfish industry. It is thought that the summer water temperatures (max. 29.1 °C) and salinities (max. 40.4 psu) in the area might kill off large proportions of the population (Davis and Davis 2009).

Conclusions
As the hydrographical conditions of the Sea of Marmara conform to the survival requirements of Styela clava, the species has great potential to invade the coastal habitats of the Sea of Marmara. In order to stop, or at least mitigate, the effects of this invasion, an eradication program should be urgently planned and implemented while the population is still confined to a very small area.
Great Scott! Businesses at the Round Foundry and Marshalls Mill in Leeds found themselves back in 1985 after a DeLorean replica from the popular Back to the Future film series made a surprise visit last week. It will come as no surprise to fans of the films, starring Christopher Lloyd and Michael J Fox, given that the intrepid Dr. Emmett ‘Doc’ Brown and his faithful sidekick Marty McFly have a penchant for tearing through the space-time continuum. The prop vehicle, including the onboard ‘flux capacitor’, was part of a stunt by a collaboration of organisations in Rotherham to submit a bid, titled ‘Flux Capacitor’, to Arts Council England's Creative People and Places programme. The vision is to empower communities to create their own cultural offer. Arts Council England is a tenant of the estate. Lisa Riley, events co-ordinator at the Round Foundry & Marshalls Mill estate, said: “This was a fantastic surprise for the estate and the tenants really enjoyed seeing a real-life time machine on site. Or, as the Doc would say, ‘thinking fourth dimensionally’.” The DeLorean generated the same level of excitement as a bolt of lightning. “We have some enthusiastic Back to the Future fans here as tenants, so the stunt was extremely popular,” Ms Riley said. The original Back to the Future was released in 1985 and spawned two successful sequels.
HOUSTON/NEW YORK (Reuters) - Venezuela, struggling to pay for essential items such as food and medicine amid strict foreign currency controls, may have failed to collect about a third of its potential oil revenue in 2014, a Reuters analysis suggests. An oilfield worker walks next to pipelines at PDVSA's Jose Antonio Anzoategui industrial complex in the state of Anzoategui April 15, 2015. REUTERS/Carlos Garcia Rawlins The OPEC member nation likely realized just over $50 billion in oil revenue in 2014, according to an analysis of publicly available data and estimates based upon past performances of Venezuela's oil sector. But as a result of generous financing mechanisms extended to allied nations through cooperation agreements, and of imports of crude oil and various products, Venezuela potentially deprived itself of about $24 billion in oil revenue last year, the analysis suggests. An exact figure for both realized and foregone revenues is unavailable given the absence of specific data from Venezuela’s state-owned oil company, Petroleos de Venezuela (PDVSA), and the government. Accompanying this story is an interactive calculator allowing users to input their own estimates for some of these unknown elements. INTERACTIVE GRAPHIC: Venezuela: cash-strapped gusher The fact that the country holds the world’s largest proven crude reserves has generally persuaded investors that it can afford to service its debts, in spite of Caracas’ rhetoric lambasting capitalist imperialism. That confidence is eroding amid the collapse in oil prices and the crumbling of the state-led economic model, resulting in prices on sovereign and quasi-sovereign U.S. dollar-denominated debt dropping to levels typically associated with a default. Analysts have found it increasingly difficult to square how PDVSA will bring in enough revenue to meet its obligations, given the underinvestment in production operations that jeopardizes oil output. Venezuela’s practice of subsidizing gasoline for domestic consumption, selling it for less than it costs to produce, as well as its agreements to ship oil under barter pacts to Cuba or on relaxed credit terms to other Caribbean nations, hurts the flow of revenue to the government. China has loaned Venezuela more than $50 billion since 2007, to be repaid with crude oil and product shipments. Nearly half of that amount has been paid off, including about $14.5 billion worth of oil last year, according to the Reuters analysis. Venezuela and China agreed to change the terms of the debt repayment starting in the fourth quarter of last year, implying fewer barrels were being sent to pay off the debt to Beijing. However, the renegotiated deal with China late last year and adjustments to the barter and/or relaxed credit agreements with Cuba and other Caribbean nations create uncertainty as to how much money Venezuela has actually been collecting in recent months. The government said in 2013 it received $9.6 billion back from the China Development Bank, which was added to its coffers. This money represented the difference between the negotiated price of the oil and the real market price paid by China. PDVSA has not yet released its audited 2014 financial results that contain this figure, making a definitive calculation impossible. In the case of Petrocaribe and other bilateral agreements, the balance of crude oil and products sent in 2014 assumes a 19 percent drop in volume, as indicated in preliminary figures delivered by PDVSA and the Petroleum Ministry to the Venezuelan Congress in January.
PDVSA did not respond to requests from Reuters for comment on the analysis, nor to specific questions on the Chinese loans and adjustments to the Petrocaribe program.
Cowboys, Indians, Oil & Cattle: Texas History (Graham, TX): How Texas destinations associate themselves with the state narrative
Several studies in the destination marketing literature have shown that engaging celebrities, or associating a place with a celebrity, can be a successful strategy for marketing a place. Yet despite the proven effectiveness of this association strategy, there is not enough research into associating a destination with other kinds of familiar and admired symbols, such as famous brands, heritage, and narratives. The aim of the current study is to expand the theoretical discussion around the association strategy beyond celebrities and to analyze which techniques marketers have used to associate their destinations with a state narrative; this includes examining narrative components such as local brands, symbols, values, events, and sites. This topic has not yet been addressed in either the academic or the professional tourism marketing literature. Because the state of Texas has one of the most familiar narratives, it makes for a good case study in which to examine how marketers use the state narrative to market their destinations. As a methodology, we used quantitative and mainly qualitative content analysis of 666 tourism ads for Texan cities and towns published in Texas Travel Guides (2008-2018). The findings show seven techniques that marketers used to associate their destinations with the narrative. Using the state of Texas as an example may provide a test case for exploring how marketers associate their place with other US state narratives in promotional tourism.
Trauma in tourist towns. Communities with major recreational attractions tend to be rural and to have high injury rates, both because of environmental hazards unique to rural areas and because they attract urban visitors who may be unfamiliar with the area and its roads, engaging in activities for which they are poorly prepared, and using equipment that is rented, in poor condition, or even of inherently hazardous design or construction. Alcohol abuse by visitors is often a problem. Once injured, visitors may have difficulty gaining access to emergency care, and when received, such care may be limited in both quantity and quality. While the influx of visitors to rural areas brings jobs and income, it also may bring crowding, violence, unintentional injury and other stresses. Most rural recreational communities do not actively plan as a community to deal with these problems, but rather approach them piecemeal and with limited communication among all necessary participants. Such planning is essential if progress is to be made.
Hot compression deformation behavior and constitutive model of 2050 Al-Li alloy
Isothermal compression experiments on 2050 Al-Li alloy were carried out on a Gleeble-3800 thermal-mechanical simulator. The hot compression deformation behavior of the alloy at deformation temperatures of 300-500 °C and strain rates of 0.01-10 s⁻¹ was studied, and a constitutive model of the thermal deformation was established. The results indicate that there is an obvious steady-state flow stage in the isothermal compression of 2050 aluminum alloy. The microstructure shows that recrystallization occurs in the alloy. 2050 aluminum alloy is a positive strain-rate-sensitive material: at constant temperature, the steady-state flow stress increases with increasing strain rate. The Arrhenius constitutive model can accurately predict the flow behavior of 2050 aluminum alloy, with an average absolute relative error (AARE) between experimental and predicted values of 8.59%.

Introduction
2XXX series aluminum alloys are widely used in the transportation and aerospace industries because of their high strength and low density. As one of the third-generation Al-Li alloys, 2050 aluminum alloy has excellent static and fatigue properties. Through forging, rolling and other processes, 2050 aluminum alloy can be formed into components for aircraft wings, fuselages and other parts. Because microstructure, deformation temperature, strain rate and the stress-strain state of the deformation zone all influence the hot working of Al-Li alloys, an improper process may lead to defects such as part cracking. The constitutive model reflects the coupling among stress, strain, strain rate and temperature, so it is an important basis for formulating and optimizing the hot working process. Many studies have examined the thermal deformation behavior of aluminum alloys. For example, Ning Liu et al. discussed the flow stress characteristics of 2055 aluminum alloy at 480-540 ℃ and established a constitutive model of the alloy. Zhenyang Chen et al. established a thermal deformation constitutive model of 2219 aluminum alloy at 330-450 ℃. However, research on the hot deformation behavior of 2050 aluminum alloy is limited, and the constitutive models in previous studies were established over narrow temperature ranges. In this paper, 2050 aluminum alloy was compressed at high temperature, its high-temperature deformation behavior was studied, and a constitutive model of the alloy was established.

Experimental materials and methods
The experimental material was 2050 aluminum alloy rolled plate; its main chemical composition (balance Al) is shown in Table 1. The alloy was machined into standard hot-compression specimens of 8 mm × 12 mm. The hot compression tests were carried out on a Gleeble-3800 thermal-mechanical simulator. According to the deformation temperatures and strain rates of 2050 aluminum alloy in actual production, isothermal compression tests were carried out at different temperatures (300, 350, 400, 450 and 500 ℃) and strain rates (0.01, 0.1, 1 and 10 s⁻¹). The specific process route is shown in Fig. 1. After isothermal compression, the specimens were sectioned along the compression axis to observe the microstructure. After grinding, polishing and etching with Keller's reagent, the microstructure was characterized by optical microscopy.
Fig. 2 shows the true stress-true strain curves of 2050 Al-Li alloy compressed at different deformation temperatures and strain rates. As shown in Figs. 2a, b, c and d, with increasing true strain the true stress-strain curves at different deformation temperatures follow the same trend: the true stress rises to a peak, then decreases slowly, and finally tends to a steady state. This can be attributed to the interaction between work hardening and dynamic softening. At a constant strain rate, the true stress decreases with increasing temperature, because higher temperature increases the atomic kinetic energy and eases diffusion, which favors softening behaviors such as dynamic recovery and dynamic recrystallization. At a constant temperature, the steady-state flow stress increases with increasing strain rate, indicating that 2050 Al-Li alloy is a positive strain-rate-sensitive material.

Fig. 3 shows the microstructure of 2050 aluminum alloy before and after isothermal compression. An obvious fibrous structure can be seen in Fig. 3a, which is typical of aluminum alloy after rolling. Two softening mechanisms operate in Al-Li alloys: dynamic recovery and dynamic recrystallization. At a deformation temperature of 450 ℃ and a strain rate of 1 s⁻¹, small equiaxed dynamically recrystallized grains appear along the grain boundaries (circled area in Fig. 3b), proving that dynamic recrystallization occurs during the deformation of 2050 aluminum alloy under this process condition. The dynamically recrystallized region is concentrated near the centerline perpendicular to the compression direction, which may be related to the friction acting on the specimen.

Constitutive model
The flow stress of 2050 aluminum alloy depends on deformation temperature and strain rate. In this paper, the Arrhenius constitutive model is used to describe the relationship among the three parameters:

$$\dot{\varepsilon} = A\,f(\sigma)\exp\!\left(-\frac{Q}{RT}\right)$$

where $f(\sigma) = \sigma^{n_1}$ at low stress levels ($\alpha\sigma < 0.8$) and $f(\sigma) = \exp(\beta\sigma)$ at high stress levels ($\alpha\sigma > 1.2$). Under all stress states, the model can be expressed with the hyperbolic sine law:

$$\dot{\varepsilon} = A\left[\sinh(\alpha\sigma)\right]^{n}\exp\!\left(-\frac{Q}{RT}\right)$$

where $\dot{\varepsilon}$ is the strain rate (s⁻¹), σ is the flow stress (MPa), T is the absolute temperature (K), Q is the thermal deformation activation energy (kJ mol⁻¹), and R is the gas constant; A, n₁, n, α and β are material constants, where A is the structural influence factor, n is the stress index, and α = β/n₁. The relationship between flow stress and thermal deformation conditions can be expressed by the Zener-Hollomon parameter Z:

$$Z = \dot{\varepsilon}\exp\!\left(\frac{Q}{RT}\right) = A\left[\sinh(\alpha\sigma)\right]^{n}$$

Substituting the low- and high-stress forms of f(σ) into the first equation and taking logarithms of both sides gives

$$\ln\dot{\varepsilon} = \ln A_1 + n_1\ln\sigma - \frac{Q}{RT},\qquad \ln\dot{\varepsilon} = \ln A_2 + \beta\sigma - \frac{Q}{RT}$$

The true stress data obtained from the experiment at a true strain of 0.3 are substituted into these two equations, and the $\ln\dot{\varepsilon}$-$\ln\sigma$ and $\ln\dot{\varepsilon}$-$\sigma$ relationship curves at different deformation temperatures are drawn with Origin software, as shown in Fig. 4. Both show a linear relationship, and the slopes of the straight lines are obtained by least-squares linear fitting. β is calculated under the higher-stress conditions, i.e., as the average slope of the two lines at deformation temperatures of 300 ℃ and 350 ℃; n₁ is the average slope of the three lines at lower peak stresses, i.e., at deformation temperatures of 400 ℃, 450 ℃ and 500 ℃. This gives n₁ = 7.249 and α = β/n₁ = 0.011. Fig. 4 Relationship curves between $\ln\dot{\varepsilon}$ and $\ln\sigma$ and between $\ln\dot{\varepsilon}$ and $\sigma$.
For all stress states, assuming that the activation energy Q is independent of temperature, taking the natural logarithm of both sides of the hyperbolic sine law gives

$$\ln\dot{\varepsilon} = \ln A + n\ln\left[\sinh(\alpha\sigma)\right] - \frac{Q}{RT}$$

so that the activation energy can be obtained from

$$Q = Rn\left.\frac{\partial\ln\left[\sinh(\alpha\sigma)\right]}{\partial(1/T)}\right|_{\dot{\varepsilon}}$$

The peak stresses and the calculated α are substituted into this expression. By performing linear regression on the fitting curves in Fig. 4 and averaging the slopes of the different straight lines, Q = 286.370 kJ mol⁻¹ and n = 6.362 are obtained.

A strain value is selected every 0.05 within the strain range of 0.05-0.6, and the calculation steps above are repeated; substituting the experimental data yields the material constants of the constitutive model of 2050 aluminum alloy at each strain. The relationship between strain and each material constant is then established by polynomial fitting of the form

$$Y(\varepsilon) = C_0 + C_1\varepsilon + C_2\varepsilon^2 + C_3\varepsilon^3 + C_4\varepsilon^4 + C_5\varepsilon^5 + C_6\varepsilon^6$$

where Y stands for a material constant (α, n, Q or ln A) and C₀-C₆ are fitting parameters. Table 2 shows the material constants of 2050 aluminum alloy under different strain conditions. Substituting the fitted coefficients and experimental data into the hyperbolic sine law yields the strain-compensated constitutive model of 2050 aluminum alloy. Figs. 7a, b, c and d compare the experimental true stress curves with the values predicted by the constitutive model at strain rates of 0.01, 0.1, 1 and 10 s⁻¹, respectively; the predicted values lie close to the measured curves. The predictive accuracy is quantified by the correlation coefficient R and the average absolute relative error (AARE):

$$R = \frac{\sum_{i=1}^{N}(E_i-\bar{E})(P_i-\bar{P})}{\sqrt{\sum_{i=1}^{N}(E_i-\bar{E})^2\sum_{i=1}^{N}(P_i-\bar{P})^2}},\qquad \mathrm{AARE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{E_i-P_i}{E_i}\right|\times 100\%$$

where E is the experimental value of flow stress, P is the value predicted by the model, and N is the number of data points. Fig. 8 shows the relationship between the experimental and predicted values: the correlation coefficient R reaches 0.986 and the AARE is only 8.59%, further verifying that the model can accurately predict the true stress of 2050 aluminum alloy under thermal deformation.

Conclusion
The true stress-strain behavior of 2050 aluminum alloy at 300-500 ℃ and strain rates of 0.01-10 s⁻¹ was studied. The flow stress enters a steady-state flow stage after the strain-hardening stage, which is related to the recovery and recrystallization softening behavior of the alloy. 2050 aluminum alloy is a positive strain-rate-sensitive material: an increase in strain rate leads to an increase in flow stress. The Arrhenius constitutive model of 2050 aluminum alloy was established, and the simulation results obtained with the established constitutive equation are in good agreement with the experimental results: the correlation coefficient R reaches 0.986 and the average absolute relative error (AARE) is 8.59%.
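To make the fitting procedure concrete, the following is a minimal Python sketch of the regressions described above. It is not the authors' code: the array layout, function names and synthetic usage are assumptions. It extracts n₁, β, α = β/n₁, n, Q and ln A from peak-stress data on a strain-rate × temperature grid, and computes the R and AARE metrics defined above.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(rates, temps, stress):
    """Fit the hyperbolic-sine Arrhenius law
    eps_dot = A * sinh(alpha*sigma)^n * exp(-Q/(R*T)).
    rates: strain rates (s^-1); temps: temperatures (K);
    stress[i, j]: peak flow stress (MPa) at rates[i], temps[j]."""
    rates = np.asarray(rates, float)
    temps = np.asarray(temps, float)
    stress = np.asarray(stress, float)
    ln_rate = np.log(rates)

    # n1: mean slope of ln(eps_dot) vs ln(sigma) at each T (low-stress law)
    n1 = np.mean([np.polyfit(np.log(stress[:, j]), ln_rate, 1)[0]
                  for j in range(len(temps))])
    # beta: mean slope of ln(eps_dot) vs sigma at each T (high-stress law)
    beta = np.mean([np.polyfit(stress[:, j], ln_rate, 1)[0]
                    for j in range(len(temps))])
    alpha = beta / n1

    ln_sinh = np.log(np.sinh(alpha * stress))
    # n: mean slope of ln(eps_dot) vs ln(sinh(alpha*sigma)) at each T
    n = np.mean([np.polyfit(ln_sinh[:, j], ln_rate, 1)[0]
                 for j in range(len(temps))])
    # s: mean slope of ln(sinh(alpha*sigma)) vs 1/T at each strain rate
    s = np.mean([np.polyfit(1.0 / temps, ln_sinh[i, :], 1)[0]
                 for i in range(len(rates))])
    Q = R_GAS * n * s  # activation energy, J/mol

    # ln A from ln Z = ln A + n * ln(sinh(alpha*sigma))
    ln_Z = ln_rate[:, None] + Q / (R_GAS * temps[None, :])
    ln_A = np.mean(ln_Z - n * ln_sinh)
    return n1, beta, alpha, n, Q, ln_A

def r_and_aare(exp_vals, pred_vals):
    """Correlation coefficient R and average absolute relative error (%)."""
    e = np.asarray(exp_vals, float)
    p = np.asarray(pred_vals, float)
    r = np.sum((e - e.mean()) * (p - p.mean())) / np.sqrt(
        np.sum((e - e.mean()) ** 2) * np.sum((p - p.mean()) ** 2))
    aare = np.mean(np.abs((e - p) / e)) * 100.0
    return r, aare
```

In practice the same regressions are rerun at each strain increment (0.05 to 0.6 in this paper) and the resulting constants are polynomial-fitted against strain, exactly as described above.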
The company was actually founded a year ago (by Lewine, Berry, and President/CTO Shaun Zacharia, who all previously worked at online ad company AppNexus), and it has already run campaigns with some major advertisers. However, the company has only received a little bit of attention from the press — now it's ready to do more of a publicity push, and it's announcing that it raised a $2.1 million seed round at the end of last year. The round was led by True Ventures and iNovia Capital, with participation from NextView Ventures, Laconia Ventures, MESA+, the Social Internet Fund, Pinterest investor William Lohse's Social Starts, former DoubleClick executive Paul Olliver, and Liberty City Ventures. Lewine and Berry told me that TripleLift monitors brand content across major publications and social media sites like Pinterest, Facebook, Tumblr and Instagram — it collects more than 10 million engagement points per day, they said. Then it can automatically add that content to standard banner ads. So for example, if a celebrity is seen showing off a company's new handbag and everyone is getting excited about it online, TripleLift will automatically detect that activity and make sure an image of the handbag gets featured in the company's ads. And it uses technology like face detection and color analysis to crop the pictures, so images of different sizes and proportions will look good together in an ad without any extra work from the advertiser (a rough sketch of the idea follows below). TripleLift says it has already run campaigns for Martha Stewart, Gucci, H&M and Puma, among others. Right now, the data side of the business is being used primarily to power TripleLift ads ("We don't really sell analytics, and we don't send them into our dashboard a lot," Lewine said). But over time that could change as the company expands its analytics features, helping its customers understand what content is working and what isn't. Oh, and if you're wondering about the name, Berry said the company is trying to "tie together owned, earned, and paid media" — in other words, brand content, editorial content and advertising. Plus, there are three founders, and three main products. "We can come up with 10 other reasons," Lewine added.
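The face-detection-plus-cropping idea is easy to prototype. Below is a hedged sketch, not TripleLift's actual system, using OpenCV's stock Haar cascade to center a fixed-ratio ad crop on the most prominent detected face, falling back to a plain center crop; the ad-unit dimensions and function name are illustrative.

```python
import cv2

def smart_crop(image_path, out_w=300, out_h=250):
    """Crop an image to ad-unit proportions, centered on the most
    prominent detected face (falls back to a plain center crop)."""
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Default anchor: image center; otherwise the largest face found
    cx, cy = w // 2, h // 2
    if len(faces):
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + fw // 2, y + fh // 2

    # Largest out_w:out_h window that fits, centered on the anchor
    scale = min(w / out_w, h / out_h)
    cw, ch = int(out_w * scale), int(out_h * scale)
    x0 = min(max(cx - cw // 2, 0), w - cw)
    y0 = min(max(cy - ch // 2, 0), h - ch)
    crop = img[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (out_w, out_h), interpolation=cv2.INTER_AREA)
```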
Kareem Khubchandani Education and work Khubchandani holds a BA in sociology and anthropology from Colgate University, an MA and PhD in performance studies from Northwestern University, and was the inaugural Embrey Foundation Postdoctoral Fellow at the Center for Women's and Gender Studies at the University of Texas at Austin. As an author, Khubchandani has been published in Scholar and Feminist Online, Transgender Studies Quarterly, Journal of Asian American Studies, The Velvet Light Trap, Theater Topics, Theatre Journal, and The Wiley Blackwell Encyclopedia of Gender and Sexuality Studies. Upcoming essays will be featured in Other Pop Empires, The Global Encyclopedia of GLBTQ History, and the Oxford Encyclopedia of Gender and Sexuality Studies. Khubchandani is also working on a monograph project titled Ishtyle: Accenting Gay Indian Nightlife, a performance ethnography examining queer social spaces in Chicago and Bangalore. Alongside Elora Chowdhury and Faith Smith, he co-curates the Feminisms Unbound roundtable series for the greater Boston area's Graduate Consortium of Women's Studies. Khubchandani has also worked for the Association for Theater in Higher Education, serving as LGBTQ Focus Group Conference Planner and as Vice President for Advocacy. As an activist, he has worked with Trikone Chicago, a South Asian LGBTQ organization, and with the Bangalore Campaign for Sexuality Minorities Rights. Khubchandani serves as the Mellon Bridge Assistant Professor in the Department of Theatre, Dance, and Performance Studies at Tufts University. Performance Khubchandani performs in drag as LaWhore Vagistan, often combining performance with lecturing about queer nightlife, gender discipline, and South Asian diasporic culture. In Khubchandani's teaching, Vagistan is often featured as a guest lecturer, allowing for an alternative, performing pedagogy presented by a South Asian drag queen. In one of Vagistan's performances, Lessons in Drag, she raises issues of gender discipline and explores South Asian popular culture through anecdotes, monologues, research interviews, stand-up comedy, and dance. According to Khubchandani, Vagistan's name is rooted in the image of the Indian subcontinent, Pakistan, and Afghanistan as a vagina; it also stems from an investment in the subcontinent as being unified, undivided by colonialism or national lines. Vagistan has performed at the Austin International Drag Festival, Mustard Seed South Asian Film Festival, The Asia Society, AS220, Queens Museum, Jack Theater, Bronx Academy of Arts and Dance, Not Festival, and A.R.T. Oberon. Apart from his performances, Khubchandani's videos have been screened at the Mississauga South Asian Film Festival, Austin OUTsider Multi-Arts festival, Hyderabad Queer Film Festival, and the San Francisco 3rd i film festival.
Low Illumination Video Image Enhancement Weather, lighting conditions, capture equipment and other factors can leave surveillance video unclear or even badly degraded, which hinders monitoring and cannot meet application needs. Based on actual night video surveillance data, this paper proposes a new low-illumination video image enhancement algorithm that overcomes these problems. We analyze the characteristics of low-illumination video images and use the HSV color space instead of the traditional RGB space to improve robustness against contrast loss and color distortion. At the same time, we use wavelet image fusion to highlight the details of the video image, so the enhanced video has higher clarity and better visual quality. Compared with four other algorithms, the proposed algorithm outperforms them in both subjective and objective evaluation, and it processes each frame faster. Experiments show that the algorithm can effectively improve the overall brightness and contrast of video images while avoiding over-enhancement of bright areas near light sources, meeting the practical requirements of video surveillance.
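The abstract does not give the exact pipeline, but its two stated ingredients (enhancing the V channel in HSV space so hue and saturation are preserved, then wavelet fusion to retain detail) can be sketched as follows. The gamma value, CLAHE settings and choice of Haar wavelet are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np
import pywt

def enhance_frame(bgr):
    """Brighten a low-light frame in HSV space, then recover detail
    with a simple wavelet fusion of two enhanced V-channel versions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Gamma-style brightness lift on V only; hue/saturation untouched
    v_norm = v.astype(np.float32) / 255.0
    v_gamma = np.power(v_norm, 0.5)

    # CLAHE adds local contrast and limits over-enhancement near lights
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v_clahe = clahe.apply((v_gamma * 255).astype(np.uint8)).astype(np.float32) / 255.0

    # Wavelet fusion: average the approximation bands, take the
    # max-magnitude detail coefficients from either version
    cA1, d1 = pywt.dwt2(v_gamma, "haar")
    cA2, d2 = pywt.dwt2(v_clahe, "haar")
    fused = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                  for a, b in zip(d1, d2))
    v_fused = pywt.idwt2((0.5 * (cA1 + cA2), fused), "haar")

    # idwt2 may pad odd-sized inputs by one pixel; crop back
    v_out = np.clip(v_fused[: v.shape[0], : v.shape[1]] * 255, 0, 255)
    return cv2.cvtColor(cv2.merge([h, s, v_out.astype(np.uint8)]),
                        cv2.COLOR_HSV2BGR)
```

Applied frame by frame, a scheme like this brightens dark regions while the fused detail coefficients keep edges from washing out.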
1. Field of the Invention The present invention relates to a device for coiling and uncoiling elongated goods, such as wire, cable or the like, onto a drum, comprising a support with driving means for a spreader arm, which is rotatably mounted on the support for coiling and uncoiling the elongated goods onto the stationary drum. 2. Discussion of the Background Coiling and uncoiling wire, cable and similar elongated goods by means of a coiling device is previously known per se. During coiling by way of such known devices, the coiled goods are torsionally twisted, wherein one turn of coiling causes one turn of twisting. Counter-clockwise coiling causes clockwise twisting, and clockwise coiling causes counter-clockwise twisting. Uncoiling usually takes place in the opposite direction to the coiling. When all the coiled goods have been uncoiled as stated above there is no remaining twist; however, mechanical stress in the goods has resulted. Owing to frictional resistance in the spreader arm and in the goods, twisting is not linear along the length of the goods: the portion last coiled will be twisted to a much greater extent than a previously coiled portion of the goods. The non-linearity of the twisting thus arising during coiling, which causes mechanical stress in the elongated goods, is not acceptable in some situations. To solve the problems pointed out above, the device according to the invention has a spreader arm which, at its side facing the goods, presents a gripping arm arranged to guide the elongated goods substantially in a tangential direction relative to the periphery of the drum. In a preferred embodiment of the device according to the invention, the gripping arm has a running track in the form of a curved V-profile. Advantageously, the surface of the running track contacting the goods is defined by feeding rollers, which are spherical in shape and which present a rolling resistance near zero to eliminate twisting of the elongated goods during coiling and uncoiling. In order to provide a uniform layer of the elongated goods coiled onto the drum, the spreader arm is provided with a device for moving the gripping arm, so that the gripping arm moves along the width of the drum in accordance with the coiling or uncoiling of the goods. For access to the end of the elongated goods for uncoiling, the device according to the invention is provided with a lifting and rotating table so as to make possible 180° rotation of the drum coiled with goods before uncoiling is initiated. The lifting and rotating table is preferably disposed on a carriage, the wheels of which run on rails starting out from the support of the device. The present invention further relates to a winding machine and an intermediate storage device having a coiling device of the above-mentioned kind and intended for feeding a cable when mounting the cable in the slots of a stator for an electric machine. The device, the winding machine and the intermediate storage device are especially but not exclusively intended to be applied when mounting high-voltage cable on a generator where high-voltage cable is used in the windings of the stator, which cable lacks the outer protective covering normally surrounding such cable. When mounting such a cable in a stator for an electric machine, the cable is required to be completely free or nearly free from twist, i.e. twisting of the cable should be linear relative to the cable length.
This implies that one turn of uncoiling from the drum, arbitrarily along the length of the cable, should be free or nearly free from twist. In particular, the invented device is advantageous when used for a rotating electric machine of the type disclosed in WO-97/45919. The cable is preferably of the kind having an inner core with a plurality of wires, an inner semiconductive layer surrounding the core, an insulating layer surrounding the inner semiconductive layer and an outer semiconductive layer surrounding the insulating layer, preferably with a diameter of about 20 to 200 mm and a conductor area ranging from 80 to 3000 mm².
Artificial intelligence, machine learning, and analytics are still in incubation, but these future-forward, still-maturing areas of innovation are impacting the contact center and the customer experience in a big way. The face of engagement is changing in the digital world, and the cloud contact center is reinventing workforce optimization as well as the customer and agent experience along the way. Genesys announced that the Genesys PureEngage, PureConnect and PureCloud platforms can now integrate with Google Cloud Contact Center AI. The integration makes for a much easier path toward bot deployment or the implementation of automation across contact center efforts. "The launch of Google Cloud Contact Center AI was a game-changer for the industry," said Paul Lasserre, vice president of product management for artificial intelligence (AI) at Genesys. "Businesses leveraging Genesys solutions have already identified hundreds of use cases for this powerful technology to provide holistic value across marketing, sales and services contexts." In addition to improved access to automation and bot deployment, the integration delivers AI-powered virtual assistants to address chat and call queries, while escalating these interactions to a live agent when required. "Contact Center AI empowers enterprises to use AI to complement and enhance their contact centers," said Rajen Sheth, director of Product Management for Google. "Google Cloud's goal is to make it as easy as possible for our customers to use AI for contact centers through our relationships with key partners like Genesys." The contact center is the frontline for customer service. Bots and humans stand side by side to support the customer journey and ensure delivery on customer service expectations.
Near-infrared spectroscopy assessment of cerebral oxygen metabolism in the developing premature brain
Little is known about cerebral blood flow, cerebral blood volume (CBV), oxygenation, and oxygen consumption in the premature newborn brain. We combined quantitative frequency-domain near-infrared spectroscopy measures of cerebral hemoglobin oxygenation (SO2) and CBV with diffusion correlation spectroscopy measures of cerebral blood flow index (BFix) to determine the relationship between these measures, gestational age at birth (GA), and chronological age. We followed 56 neonates of various GA once a week during their hospital stay. We provide absolute values of SO2 and CBV, relative values of BFix, and relative cerebral metabolic rate of oxygen (rCMRO2) as a function of postmenstrual age (PMA) and chronological age for four GA groups. SO2 correlates with chronological age (r = −0.54, P value ⩽ 0.001) but not with PMA (r = −0.07), whereas BFix and rCMRO2 correlate better with PMA (r = 0.37 and 0.43, respectively, P value ⩽ 0.001). Relative CMRO2 during the first month of life is lower when GA is lower. Blood flow index and rCMRO2 are more accurate biomarkers of brain development than SO2 in premature newborns.

Introduction
Premature birth interferes with normal brain maturation, and clinical events and interventions may have additional deleterious effects. Compared with normal term newborns, premature newborns at term-equivalent postmenstrual age (PMA) have structural abnormalities on magnetic resonance imaging (Huppi et al, 1998) and magnetic resonance diffusion abnormalities that have been associated with functional impairment (Bassi et al, 2008), and their resting-state functional connectivity networks are abnormal (Smyser et al, 2010). Using near-infrared spectroscopy (NIRS), several studies have described alterations in cerebral blood volume (CBV) and oxygenation in preterm newborns (see review in Wolf and Greisen, 2009). However, little is known about baseline cerebral blood flow, oxygenation, and oxygen consumption in the premature newborn's brain. Such information, especially if available at the bedside, would provide valuable insight into early brain development and the impact of premature birth. Near-infrared spectroscopy is a portable and noninvasive method for interrogating cerebral physiology that uses low-intensity nonionizing radiation. It is thus suitable for use in neonates, whose thin scalps and skulls facilitate light transmission. In contrast to continuous-wave NIRS measures of changes in oxy- and deoxy-hemoglobin concentrations (HbO, oxygenated hemoglobin concentration, and HbR, reduced hemoglobin concentration, respectively) (Wolf and Greisen, 2009; Wyatt et al, 1986), frequency-domain near-infrared spectroscopy (FDNIRS) provides absolute values of HbO and HbR, from which absolute values of CBV and hemoglobin oxygen saturation (SO2) can be calculated (Fantini et al, 1995; Zhao et al, 2005). Frequency-domain near-infrared spectroscopy has been successful in measuring the evolution of CBV, SO2, and relative cerebral metabolic rate of oxygen (rCMRO2) over the first year of normal brain development (Franceschini et al, 2007), establishing baseline values of CBV, SO2, and rCMRO2 during the first 6 weeks of life in premature neonates (Roche-Labarbe et al, 2010), and determining the effect of acute brain injury on these parameters (Grant et al, 2009).
Diffusion correlation spectroscopy (DCS) provides a measure of tissue perfusion based on the movement of scatterers (i.e., blood cells) inside the tissue (Boas and Yodh, 1997; Cheung et al, 2001). Diffusion correlation spectroscopy is a valid assessment of cerebral blood flow changes in the adult brain (Durduran et al, 2004; Li et al, 2005) and in infants (Buckley et al, 2009; Durduran et al, 2010; Roche-Labarbe et al, 2010), and a safe and reliable alternative to the oxygen bolus (Edwards et al, 1988) or indocyanine green (Patel et al, 1998) methods. Combining FDNIRS measures of CBV and SO2 with DCS measures of blood flow index (BFix) allows reliable calculation of local rCMRO2 in newborns (Roche-Labarbe et al, 2010). Quantification will improve estimation of normal values and detection of abnormalities in at-risk neonates (Nicklin et al, 2003). Such markers of brain development and detection of deviations from normal can be obtained before the age at which accurate behavioral and neurologic assessments can be performed, potentially providing early biomarkers for adverse outcomes. Here, we studied premature neonates with no known brain injury to determine the relationship between CBV, SO2, rCMRO2, and BFix, gestational age (GA), and chronological age.

Subjects
We studied 56 neonates (24 to 37 weeks GA at birth, 25 females) enrolled from the neonatal intensive care units and Well Baby Nurseries at the Massachusetts General Hospital, Brigham and Women's Hospital, and Children's Hospital Boston between 2008 and 2010. Subjects were included if they had no diagnosis of brain injury or neurologic issue during or after their hospital stay. They were sorted into four groups: 24 to 27 weeks GA (9 subjects, 55 measurements, 6 ± 3 measurements per infant, APGAR score at 5 minutes = 8 ± 0.5, weight at birth = 930 ± 180 g), 28 to 30 weeks GA (10 subjects, 65 measurements, 7 ± 1 measurements per infant, APGAR score at 5 minutes = 7.4 ± 0.7, weight at birth = 1,200 ± 350 g), 31 to 33 weeks GA (18 subjects, 64 measurements, 4 ± 1 measurements per infant, APGAR score at 5 minutes = 8.5 ± 0.6, weight at birth = 1,730 ± 300 g), and 34 to 37 weeks GA (19 subjects, 44 measurements, 2 ± 1 measurements per infant, APGAR score at 5 minutes = 8.4 ± 1, weight at birth = 2,180 ± 250 g). Subjects included had a variety of cardiovascular and respiratory conditions representative of the neonatal intensive care unit population (Supplementary Table 1). Each infant was measured once a week from 1 to 15 weeks of age (ages in days were rounded off to the nearest week) while in the hospital. Our Institutional Review Board, the Partners Human Research Committee, approved all aspects of this study, and all parents provided informed consent.

Acquisition
We used a customized FDNIRS instrument from ISS Inc., Champaign, IL, USA (http://www.iss.com/products/oxiplex/), and built a DCS instrument similar to the system developed by Drs Arjun Yodh and Turgut Durduran at the University of Pennsylvania (Cheung et al, 2001; Durduran et al, 2004). Both instruments are described in detail in Roche-Labarbe et al. The FDNIRS source and detector fiber bundles (each 2.5 mm in diameter) were arranged in a row in a black rubber probe (5 × 2 × 0.5 cm³) with source-detector distances of 1, 1.5, 2, and 2.5 cm (Figure 1C), adequate for a penetration depth of ~1 cm, which includes the cerebral cortex in neonates (Dehaes et al, 2011; Franceschini et al, 1998).
The DCS laser (50 mW power) was coupled to a 62.5-μm diameter multimode optical fiber and diffused at the fiber tip to comply with the American National Standards Institute exposure standards. The detectors were coupled to 5.6-μm single-mode optical fibers. The DCS fibers were arranged in a second row parallel to the NIRS bundles, with source-detector distances of 1.5 cm (one fiber) and 2 cm (three fibers) (Figure 1C). For each measurement, the probe and fibers were placed in a single-use polypropylene sleeve for hygiene reasons (Figure 1A). Frequency-domain near-infrared spectroscopy and DCS measurements were obtained in sequence from seven areas of the head (Figure 1B). The optical probe was held at each location for up to three 10-second data acquisitions. Repositioning the probe compensated for local inhomogeneities such as hair and superficial large vessels, to ensure that the measurement was representative of the underlying brain region. The total number of positions and repetitions depended on the cooperation of the subject and the presence of other medical devices on the head. Total examination time was ~45 minutes. Hemoglobin counts were extracted from clinical reports, and arterial oxygenation (SaO2) was obtained from routine monitors at the time of the measurement session.

Near-Infrared Spectroscopy Data Processing
Amplitude and phase data collected at each wavelength allow the calculation of average absorption and scattering coefficients using the multidistance frequency-domain method (Fantini et al, 1995). An automated data analysis routine includes data quality assessment and data rejection based on previously established statistical criteria (Roche-Labarbe et al, 2010). Oxygenated hemoglobin concentration and HbR were derived by fitting the absorption coefficients at our wavelengths with the hemoglobin spectra, using the extinction coefficients reported in the literature (Wray et al, 1988) and a 75% concentration of water (Wolthuis et al, 2001). Total hemoglobin concentration HbT = HbO + HbR (μmol/L) and SO2 = HbO/HbT (%). Cerebral blood volume in mL/100 g was calculated using standard equations (Franceschini et al, 2007; Takahashi et al, 1999) and the hemoglobin concentration in blood (HGB) from clinical charts. For 20% of measurements, HGB was not available, in which case standard normal HGB values for age were used (de Alarcón and Werner, 2005).

Diffusion Correlation Spectroscopy Data Processing
Diffusion correlation spectroscopy data comprise a set of intensity autocorrelation curves (over a delay time range of 200 nanoseconds to ~1 second in our case) acquired sequentially at 1 Hz. Following the diffusion correlation equations (Boas and Yodh, 1997; Cheung et al, 2001; Culver et al, 2003; Durduran et al, 2004), a BFix was derived by fitting the normalized intensity temporal autocorrelation profile of the diffusely reflected light to the measured temporal autocorrelation function (Boas et al, 1995; Boas and Yodh, 1997; Cheung et al, 2001). To maximize accuracy, we used the actual optical absorption and scattering coefficients at 785 nm interpolated from the FDNIRS measurements. We rejected measurements that did not meet objective criteria (Roche-Labarbe et al, 2010).
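As a concrete illustration of the derived quantities defined above, here is a small Python sketch (ours, not the authors' pipeline). The hemoglobin molecular weight and brain tissue density are standard assumed constants, and the CBV unit bookkeeping follows the conventional form of the "standard equations" cited above.

```python
def fdnirs_derived(hbo_umol_L, hbr_umol_L, hgb_g_dL,
                   mw_hb=64500.0, tissue_density=1.05):
    """Derive HbT, SO2 and CBV from FDNIRS hemoglobin concentrations.

    hbo_umol_L, hbr_umol_L: oxy-/deoxy-hemoglobin in tissue (umol/L)
    hgb_g_dL: blood hemoglobin from clinical charts (g/dL)
    mw_hb: molecular weight of hemoglobin (g/mol), assumed constant
    tissue_density: brain tissue density (g/mL), assumed constant
    """
    hbt_umol_L = hbo_umol_L + hbr_umol_L        # total hemoglobin
    so2_pct = 100.0 * hbo_umol_L / hbt_umol_L   # oxygen saturation (%)

    # grams of Hb per liter of tissue -> liters of blood per liter of
    # tissue -> milliliters of blood per 100 g of tissue
    g_hb_per_L_tissue = hbt_umol_L * 1e-6 * mw_hb
    L_blood_per_L_tissue = g_hb_per_L_tissue / (hgb_g_dL * 10.0)
    cbv_mL_100g = 100.0 * L_blood_per_L_tissue / tissue_density
    return hbt_umol_L, so2_pct, cbv_mL_100g

# Example: HbT = 60 umol/L at HGB = 15 g/dL gives SO2 ~ 67% and
# CBV ~ 2.5 mL/100 g, in the physiological range quoted in this field
print(fdnirs_derived(40.0, 20.0, 15.0))
```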
Relative CMRO2 from combined DCS and FDNIRS measures was calculated as the ratio between the subject's values and the average of all the first week's values (indicated by the subscript 0) of the 34 to 37 weeks GA group, using the following equation:

$$\mathrm{rCMRO_2} = \frac{\mathrm{BF}_{ix}\,(\mathrm{SaO_2} - \mathrm{SO_2})}{\mathrm{BF}_{ix,0}\,(\mathrm{SaO_{2,0}} - \mathrm{SO_{2,0}})}$$

Statistical analysis
In each infant, for each measurement session and for each measured parameter (HbO, HbR, HbT, SO2, CBV, BFix, and rCMRO2), results were averaged over all positions. We verified that results are consistent when only one location is considered. We averaged data sets to obtain one time point per week (ages in days were rounded off to the nearest week; for example, 3 days becomes week 0 and 12 days becomes week 2). Given our small sample size, confidence intervals were calculated using Student's tables. We calculated linear regressions, correlation coefficients (r), coefficients of determination (R2), and significance levels between each measured parameter and HGB, chronological age, or PMA for all time points. Finally, we calculated linear regressions between each measured parameter and chronological age for the first 31 days of life (4.4 weeks), then between each measured parameter and PMA for time points taken between 34 and 38.4 weeks PMA. These periods correspond to the overlapping measurements among all four groups. We performed t-tests for independent subjects with Bonferroni adjustment of significance levels on the slopes and intercepts (for each element in Table 2, i.e., 42 elements).

Figure 2 illustrates the weekly average of FDNIRS/DCS measured and derived parameters as a function of age for the four GA groups, with confidence intervals. Figure 3 illustrates the same parameters as a function of PMA. For the four GA groups, Figure 2 shows that HbO, HbT, and SO2 decrease with chronological age, HbR and CBV are constant with age, and BFix and rCMRO2 increase with age. Figure 3 shows that the decrease in HbO, HbT, and SO2 of the four groups is independent of PMA, while the increase in BFix and rCMRO2 is associated with PMA (scatterplots of individual measurements in Supplementary Figures 1 and 2). Table 1 presents the results of linear regressions of the measured and derived optical parameters with HGB, age, and PMA across groups at all time points. To specifically illustrate differences among groups, Table 2 presents the results of the t-test on slopes and intercepts of the measured and derived parameters during the first month of life. Differences in slope, as seen for SO2, reflect different rates of progression between GA groups, while differences in intercepts with similar slopes, as seen for rCMRO2, reflect different absolute values between GA groups. Because the differences in CBV intercepts are due to the large variance in the 24 to 27 weeks group in the first weeks of life (Figure 2), which is confirmed by the absence of correlation with any factor (Table 1), they do not reflect true intergroup differences. t-Tests on slopes and intercepts between groups at the same PMA showed no significant or near-significant differences. Figure 4 represents the trends (slopes and intercepts) of SO2 and rCMRO2 maturation during the first month of life as a function of GA at birth (other box plots in Supplementary Figure 3). The SO2 decrease is steeper and occurs at an earlier PMA in subjects born at a lower GA (slopes differ). Relative CMRO2 during the first month of life is proportional to GA at birth (slopes are comparable but intercepts are proportional to GA).
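A direct transcription of the rCMRO2 normalization defined at the top of this section is sketched below (our code; variable names are assumptions). Each subject's flow times oxygen-extraction product is divided by the first-week reference average of the 34 to 37 weeks GA group.

```python
def rcmro2(bfix, so2, sao2, bfix_ref, so2_ref, sao2_ref):
    """Relative CMRO2 from Fick's principle: blood flow index times the
    arterial-to-tissue saturation difference, normalized to the
    first-week reference values (the subscript-0 terms above).
    Saturations may be fractions or percentages, used consistently."""
    return (bfix * (sao2 - so2)) / (bfix_ref * (sao2_ref - so2_ref))

# Example: a 20% rise in BFix with unchanged oxygen extraction
# gives rCMRO2 = 1.2 relative to the reference group
print(rcmro2(1.2, 0.65, 0.98, 1.0, 0.65, 0.98))
```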
Discussion
We provided absolute values of HbO, HbR, HbT, SO2, and CBV and relative values of BFix and rCMRO2 as a function of gestational and chronological age; found that SO2 correlates with chronological age but not with PMA; that BFix and rCMRO2 correlate better with PMA than with chronological age; and that rCMRO2 during the first month of life is lower when GA at birth is lower. SO2 is not correlated with PMA but varies with chronological age and HGB, suggesting that it depends on systemic changes and does not reflect changes associated with brain development. This is consistent with the findings that SO2 correlates with heart and respiratory rate and with arterial SO2 in newborns (Tina et al, 2009). SO2 undergoes a dip around 6 to 8 weeks of life, probably due to the transition from fetal to adult hemoglobin (Franceschini et al, 2007; Roche-Labarbe et al, 2010). This decrease is steeper and occurs at an earlier PMA in subjects born at a lower GA. This is because SO2 immediately after birth is higher when GA is lower (Tina et al, 2009), and because SO2 is highly dependent on HGB, which starts decreasing at birth regardless of GA and decreases faster in more premature infants (de Alarcón and Werner, 2005). These findings question the relevance of SO2 as a measure of brain health and brain development in newborns, and are consistent with results showing that SO2 is not sensitive to evolving acute brain injury in newborns (Grant et al, 2009). The focus on SO2 may explain why many early NIRS studies yielded inconsistent results, which hampered the implementation of NIRS in clinical settings (Greisen, 2006; Nicklin et al, 2003). Those using NIRS to evaluate hemodynamics in infants have typically used SO2 rather than other parameters, and SO2 is sensitive to transient hemodynamic changes (Huang et al, 2004; Naulaers et al, 2004; Petrova and Mehta, 2006; Toet et al, 2005). However, SO2 is the parameter least sensitive to development or the evolution of injury.

Figure 2 Measured and derived parameters as a function of chronological age. Confidence intervals are displayed where two or more values were averaged. Filled markers were used when at least two values were averaged; empty markers represent individual values. BFix, blood flow index; CBV, cerebral blood volume; HbO, oxygenated hemoglobin concentration; HbR, reduced hemoglobin concentration; HbT, total hemoglobin concentration; rCMRO2, relative cerebral metabolic rate of oxygen; SO2, oxygen saturation.

Blood flow index and rCMRO2 correlate better with PMA and are less dependent on chronological age than SO2, suggesting that they are more sensitive to hemodynamic and metabolic changes associated with early brain development. Relative CMRO2 during the first month of life is proportional to GA at birth, which is consistent with the correlation between GA at birth and spontaneous neuronal activity transients measured with electrophysiology (André et al, 2010). This is also consistent with the findings that fractional tissue oxygen extraction during the first 6 hours of life is higher when GA at birth is higher (Tina et al, 2009). When subjects from various GA groups increase in chronological age, rCMRO2 shows an increase over time, which may reflect increasing oxygen requirements due to synaptic development.
We did not find rCMRO2 differences between GA groups at the same PMA, suggesting that synaptic production is appropriate for PMA regardless of GA at birth. This is consistent with primate studies showing that premature birth does not affect the rate of synaptic production in the visual cortex: synaptogenesis correlates with PMA but not chronological age despite increased sensory stimulation (Bourgeois et al, 1989). The consistency of rCMRO2 with PMA regardless of GA is also consistent with angiogenesis being driven by intrinsic mechanisms associated with synaptogenesis, with no direct effect of outside stimulation (Fonta and Imbert, 2002). However, it conflicts with reports of thinner cortex associated with premature birth (Nagy et al, 2010). Slopes of BFix were similar among GA groups, but so were intercepts, perhaps due to the competing influences of decreasing HGB (BFix was more influenced by HGB than rCMRO2) and of neuronal activity increasing BFix. Overall, these results, which agree with the findings in brain-injured neonates (Grant et al, 2009), suggest that rCMRO2 is a better indicator of brain health and developmental stage in infants than SO2. Oxygenated hemoglobin concentration, HbR, and HbT constitute the output of most NIRS systems and are provided for comparison purposes. Cerebral blood volume does not show any consistent behavior during the first weeks of life (Franceschini et al, 2007; Roche-Labarbe et al, 2010). Cerebral blood volume was higher in the 24 to 27 weeks group than in the other groups during the first 3 weeks of life (different intercepts), but the large variance in that group and the absence of correlation with any factor (HGB, age, or PMA) suggest that this may be an artifact of the small sample size in that group at that age. Because premature infants are often treated with medication or require respiratory assistance, we included them in the analyses. The variety of cardiac and respiratory conditions among subjects probably contributed to the intragroup variability, particularly in the lower GA group. Caffeine, commonly administered to premature infants with apnea, stimulates neurons and decreases cerebral blood flow, thereby uncoupling cerebral blood flow and rCMRO2, but its effects on baseline CBV, SO2, and rCMRO2 remain controversial (Chen and Parrish, 2009; Perthen et al, 2008). Ventilation modes are also suspected to affect brain hemodynamics, although results are still inconsistent (Milan et al, 2009).

Figure 3 Measured and derived parameters as a function of postmenstrual age (PMA). Postmenstrual age for each group is calculated starting from the group's median gestational age (GA). Confidence intervals are displayed where two or more values were averaged. Filled markers were used when at least two values were averaged; empty markers represent individual values. BFix, blood flow index; CBV, cerebral blood volume; HbO, oxygenated hemoglobin concentration; HbR, reduced hemoglobin concentration; HbT, total hemoglobin concentration; rCMRO2, relative cerebral metabolic rate of oxygen; SO2, oxygen saturation.

Figure 4 Boxplots of slope and intercept of oxygen saturation (SO2) and relative cerebral metabolic rate of oxygen (rCMRO2) during the first month of life. The dot is the mean, the box is the standard error, and the whiskers depict the 0.95 confidence interval. Legend: +P value < 0.055 (near significant); *P value < 0.05 (significant, Bonferroni corrected).
Acknowledgments
The authors thank David A Boas for his advice regarding data analysis. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources or the National Institutes of Health.

Disclosure/conflict of interest
The authors declare no conflict of interest.
(CBS/AP) COLUMBIA, S.C. - A South Carolina Islamic center was defaced when someone spelled out "PIG CHUMP" with bacon slices on a tiled walkway at the center. Florence Police Major Carlos Raines said Tuesday that someone placed the bacon in foot-high letters on the tiled walkway at the Florence Islamic Center between 7 a.m. and 2 p.m. on Sunday. Islamic dietary restrictions bar Muslims from eating pork. Mushtaq Hussain is one of the center's founders. He says he's sure whoever brought the bacon to the center was trying to anger those who worship there, but that he's not afraid. "Muslims don't eat the pork, so you just think that somebody did this purposefully to make people feel angry," said Hussain, 50, who also owns a rug business in Florence and moved to the U.S. from Pakistan about 25 years ago. "Definitely they are trying to offend people." The center has been under renovation for four years, and there is no sign or prayer tower indicating what is operating there, Raines said. "There's absolutely nothing that identifies it as a mosque," Raines said. "It's an insult, and I'm sure that's what it was intended for." The Washington-based Council on American-Islamic Relations on Tuesday called on the FBI to investigate. But Hussain says he's confident local police can handle the case. The bacon has been removed, but Raines said there are still greasy marks on the walkway at the facility in Florence, about 80 miles east of Columbia.
Randomised clinical trial of adjuvant postoperative RT vs. sequential postoperative RT plus 5-FU and levamisole in patients with stage II-III resectable rectal cancer: a final report A randomised clinical trial was performed in patients undergoing radical surgery for rectal cancer to compare the efficacy and toxicity of adjuvant postoperative radiation therapy (RT) with those of sequential RT and chemotherapy (CT) with 5-fluorouracil (5-FU) plus levamisole (LEV). The primary end point was overall survival (OS); secondary end points were disease-free survival (DFS), the rate of locoregional recurrence, and treatment-related toxicity. The final results of this trial are reported here.