Health Alloimmune, or isoimmune, disease is characterized by an immune response in one individual against blood from another individual of the same species. This condition can arise during pregnancy, when the mother becomes immunized against the blood of the fetus. In pregnancies complicated by serious alloimmune disease, such as maternal-fetal blood type incompatibility, fetal blood transfusion during pregnancy may be warranted. Currently, blood is usually transfused by introducing it into the abdominal cavity (intraperitoneal transfusion) of the fetus as it develops in the uterus. Although this procedure has been used for 25 years, newer methods that transfuse donor blood directly into the fetal circulation (intravascular transfusion) may be more advantageous. To determine which method offers the best results, two groups of similarly matched fetuses with severe alloimmune disease were compared. The site of the placenta, the severity of disease, and the age of the fetus were matched in 33 out of 44 pairs. The intravascular method required fewer procedural attempts, had fewer failures, resulted in better pregnancy outcomes, and reduced the number of traumatic deaths. Although the intravascular method required a greater number of transfusions, it allowed the fetus time to mature to an older gestational age. The mothers suffered fewer complications when the intravascular technique was used. The intravascular method of transfusing fetuses with alloimmune disease offers the best overall survival and should replace intraperitoneal transfusion in most cases. (Consumer Summary produced by Reliance Medical Information, Inc.) Fetal assessment based on fetal biophysical profile scoring: IV. An analysis of perinatal morbidity and mortality Article Abstract: Prenatal fetal assessment tools must predict fetal outcomes accurately for the test to be useful on a routine basis.
Biophysical profile scores are calculated using serial measurements of fetal body movements, anatomy, the amount of fluid surrounding the fetus, and the position of the fetal umbilical cord. A biophysical profile score (BPS) of 8 or above with a normal amount of amniotic fluid is considered normal, and fetuses with a BPS less than 6 are considered for immediate delivery. The relationship between the BPS and the actual fetal outcome was determined for 26,780 fetuses. The outcome of the 913 (3.41 percent) fetuses with scores under 6 was evaluated. There was an inverse correlation between the BPS and fetal distress, admission to a neonatal intensive care unit, fetal growth retardation, Apgar scores (an assessment of newborn well-being immediately after birth) of less than 7 (out of 10), and an umbilical cord pH of less than 7.2 (acidic). BPS was not correlated with meconium staining of the amniotic fluid (signaling the release of the first fetal stool, and a sign of fetal distress) or fetal malformations. There was also a strong relationship between the death of a fetus and the last profile. Therefore, the use of BPS to assess the well-being of a fetus accurately predicted fetal compromise. (Consumer Summary produced by Reliance Medical Information, Inc.) Maternal serum alpha-fetoprotein in twin pregnancy Article Abstract: An increased level of alpha-fetoprotein (AFP) in the blood of a pregnant woman is highly suggestive of a neural tube defect in the fetus. AFP can also be increased because of advanced pregnancy, more than one fetus, fetal death, or other fetal malformations. The level of AFP in a twin pregnancy can be twice as high as in a normal singleton pregnancy. The significance of an elevated AFP level occurring between 14 and 20 weeks of pregnancy was studied in 138 twin pregnancies. In 108 of the 138 women, AFP was at least 2.0 times the median. In 78 pregnancies the level was at least 2.5 multiples of the median.
Twins were detected in 56.5 percent of the pregnancies on the basis of a cutoff level of 2.5 times the median, which is customarily considered to indicate neural tube defects. Two fetuses were identified with open neural tube defects. When fetal ultrasonography (the use of high-frequency sound to visualize internal structures) was used in combination with AFP determinations, 86 percent of the twins were identified by the 20th week of pregnancy. When AFP levels were 4.0 multiples of the median or higher, the fetal outcome was poor. Maternal AFP levels were useful in detecting twins and predicting the outcome of twin pregnancies. (Consumer Summary produced by Reliance Medical Information, Inc.)
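The screening cutoffs above are expressed as multiples of the median (MoM), i.e. the measured AFP level divided by the median level for that gestational age. A minimal sketch of that calculation; the function names and the numeric values are illustrative, not taken from the article:

```typescript
// Express a serum AFP level as multiples of the median (MoM) for the
// gestational age; cutoffs such as 2.5 MoM are applied to this ratio.
// Illustrative helper, not from the article.
function multiplesOfMedian(afpLevel: number, medianForAge: number): number {
  return afpLevel / medianForAge;
}

// Flag a result against a screening cutoff (2.5 MoM in the study above).
function exceedsCutoff(
  afpLevel: number,
  medianForAge: number,
  cutoffMoM = 2.5,
): boolean {
  return multiplesOfMedian(afpLevel, medianForAge) >= cutoffMoM;
}

// Hypothetical values: 100 units against a median of 40 gives 2.5 MoM.
console.log(multiplesOfMedian(100, 40)); // 2.5
console.log(exceedsCutoff(100, 40)); // true
```

Because twin pregnancies roughly double the expected level, the same 2.5 MoM cutoff that flags neural tube defects in singletons also happens to flag many twin pregnancies, which is the effect the study exploits.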
This invention relates in general to an electroforming process using a reusable electroforming mandrel. The fabrication of hollow articles by an electroforming process is well known. Generally, the articles are fabricated by electrodepositing a metal onto an elongated mandrel which is suspended in an electrolytic bath. The materials from which the mandrel and the electroformed article are fabricated are selected to exhibit different coefficients of thermal expansion to permit removal of the belt from the mandrel upon cooling of the assembly. In one example of an electroforming arrangement, the mandrel comprises a core cylinder formed of aluminum which is overcoated with a thin layer of chromium and is supported and rotated in a bath of nickel sulfamate. In the process for forming large hollow articles having a large cross sectional area, it has been found that a diametric parting gap, i.e. a gap formed by the difference between the average inside electroformed article diameter and the average mandrel diameter at the parting temperature, must be at least 8 mils (0.2 mm), and preferably about 10-12 mils (0.25-0.30 mm, or 0.04-0.06 percent of the diameter of the mandrel), for reliable and rapid separation of the large article from the mandrel. For example, at a parting gap of about 6 mils (0.15 mm), a high incidence of both belt and mandrel damage is encountered due to the inability to effect separation of the belt from the mandrel. Various techniques have been developed for forming and removing tubes from electroforming mandrels depending upon the cross sectional area of the electroformed tube. A process for electroforming hollow nickel articles having a large cross sectional area onto a mandrel is described in U.S. Pat. No. 3,844,906 to R. E. Bailey et al.
More specifically, the process involves establishing an electroforming zone comprising a nickel anode and a cathode comprising a support mandrel, the anode and cathode being separated by a nickel sulfamate solution maintained at a temperature of from about 140°F (60°C) to 150°F (65.5°C) and having a current density therein ranging from about 20 to 50 amps/ft² (2.2 to 5.4 amps/dm²); imparting sufficient agitation to the solution to continuously expose the cathode to fresh solution; maintaining this solution within the zone at a stable equilibrium composition comprising:

  Total Nickel: 12.0 to 15.0 oz/gal (90-112.5 g/l)
  Halide as NiX₂·6H₂O: 0.11 to 0.23 mole/gal (0.44-0.92 mole/l)
  H₃BO₃: 4.5 to 6.0 oz/gal (33.7-45 g/l)

electrolytically removing metallic and organic impurities from the solution upon egress thereof from the electroforming zone; continuously charging to the solution about 1.0 to 2.0 × 10⁻⁴ moles of a stress reducing agent per mole of nickel electrolytically deposited from the solution; passing the solution through a filtering zone to remove any solid impurities therefrom; cooling the solution sufficiently to maintain the temperature within the electroforming zone upon recycle thereto at about 140°F (60°C) to 150°F (65.5°C) at the current density in the electroforming zone; and recycling the solution to the electroforming zone. The thin flexible endless nickel belt formed by this electrolytic process is recovered by cooling the nickel-coated mandrel to effect the parting of the nickel belt from the mandrel due to the different respective coefficients of thermal expansion. For metal articles fabricated by electroforming on mandrels having a small cross-sectional area, the process described in U.S. Pat. No. 4,501,646 to W. G.
Herbert overcomes difficulties in removing the electroformed article from the mandrel. For example, when the chromium-coated aluminum mandrel described in U.S. Pat. No. 3,844,906 is fabricated into electroforming mandrels having very small diameters of less than about 1 inch, metal articles electroformed on these very small diameter mandrels are extremely difficult or even impossible to remove from the mandrel. Attempts to remove the small diameter electroformed article formed by the process described in U.S. Pat. No. 3,844,906 can result in destruction of or damage to the mandrel or the electroformed article, e.g. due to bending, scratching or denting. The entire disclosures of U.S. Pat. No. 3,844,906 and U.S. Pat. No. 4,501,646 are incorporated herein by reference. Various materials may be utilized as temporary, removable coatings on a mandrel. These coatings can be meltable to facilitate removal of an electroformed article formed thereon. For example, a coating of wax may be utilized as a meltable coating. However, a wax coating must be rendered conductive in order to function as an electroforming surface. The preparation of a conductive wax as well as the reclaiming of the conductive wax for subsequent use can be expensive and require additional processing steps. Moreover, wax is relatively soft and easily damaged during handling. Japanese patent document No. J59-060702A, published Apr. 6, 1984, discloses a method of electroforming nickel stampers. The method comprises forming a thin reaction film having a low melting point on a base, embossing fine convex patterns on the film to make a master, forming a coating made of gold (alloy) on the master, forming a nickel layer by electrolytic deposition, separating a laminate comprising the gold and nickel from the master, and eliminating the residual thin reaction film from the surface of the gold coat to obtain a nickel stamper.
The reaction film is preferably made by reactive sputtering and preferably has a composition of TeCH. Residual film may be eliminated using ammonium persulfate. This process requires a reactive sputtering step, an embossing step, a gold deposition step, and a special residual film elimination step using ammonium persulfate. The use of reactive sputtering equipment, exotic materials, and multiple steps renders the process complex and expensive. Moreover, since it appears that a reaction is necessary during sputtering to deposit the TeCH composition, the deposited reaction film would appear not to be reusable. Japanese patent document No. J58-210187A relates to the formation of a parting film on the surface of a metal die, plating a layer on the parting film, and peeling off the metal die to form a metal body having a configuration that is the reverse of that of the metal die. The parting film is formed from an oxide film obtained by exposing the metal die to oxygen plasma. The oxygen plasma is applied to the surface of a substrate under an oxygen pressure of 0.1-1 torr, and a Ni film of about 0.2-0.3 mm in thickness is formed. The method is suitable for forming a pattern of highly integrated digital information, a deeply-grooved pattern, etc. in an electroforming process. Since the parting film is formed by the oxygen plasma, it is uniform and adheres well to the substrate. This process requires extremely high temperatures to form a parting layer and requires peeling off the metal die to form a metal body. Such an oxide film is difficult and expensive to make and does not appear to be readily reusable. In U.S. Pat. No. 4,300,959, a method of electroforming is disclosed in which, for example, nickel and then copper are electroplated onto a wax mandrel coated with a sprayed coating of silver paint. A third layer, of nickel, is plated onto the copper layer and the wax mandrel is removed.
The layered metallic object is heated to a point above the melting point of copper to yield a nickel electroformed object containing a copper-nickel alloy. Since the wax substrate is not normally conductive, it is coated with a conductive silver coating which apparently comprises a heterogeneous mixture of silver particles, a binder, and a solvent for the binder, so that the coating can be applied by spraying at room temperature to avoid melting the wax mandrel. The process disclosed in this U.S. patent requires the use of a mandrel that must be destroyed. Moreover, since the conductive coating on the wax would contaminate the wax, the coated mandrel constitutes a disposable, not a reusable, material. In addition, the use of an apparently heterogeneous conductive coating containing particulate material causes perturbations at the electroforming surface. In addition, a wax mandrel is highly vulnerable to surface damage prior to or during electroforming. In order to reuse electroforming mandrels meeting precise tolerance requirements, the mandrels must be carefully handled to avoid scratching, gouging, or otherwise damaging the smooth electroforming surface of the mandrel prior to, during, and after each electroforming operation. Deep gouges, for example, may render the mandrel useless because electroformed articles may not be readily removed from the mandrel. Repair of damaged electroforming surfaces is normally impractical. Moreover, deformities in the surface of an electroforming mandrel can carry over to electroformed articles, rendering them useless for cosmetic reasons or for failing to meet the tolerance requirements of precision applications.
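The parting-gap sizing rule quoted earlier (a diametric gap of roughly 0.04-0.06 percent of the mandrel diameter, with a practical floor of about 8 mils, i.e. 0.2 mm) is easy to sanity-check numerically. A minimal sketch; all names are my own rather than the patent's, which states the rule only in prose:

```typescript
// Sanity-check of the diametric parting-gap rule of thumb described above.
// Names and structure are illustrative; the patent does not define code.
const MM_PER_MIL = 0.0254; // 1 mil = 0.001 inch = 0.0254 mm

// Acceptable diametric parting-gap range, 0.04-0.06% of mandrel diameter.
function partingGapRangeMm(mandrelDiameterMm: number): [number, number] {
  return [mandrelDiameterMm * 0.0004, mandrelDiameterMm * 0.0006];
}

// Example: a 500 mm diameter mandrel.
const [minGapMm, maxGapMm] = partingGapRangeMm(500);
console.log(minGapMm.toFixed(2), maxGapMm.toFixed(2)); // "0.20" "0.30" (mm)
console.log((minGapMm / MM_PER_MIL).toFixed(1)); // "7.9" mils, near the 8 mil floor
```

At this scale the fractional rule and the absolute 8-12 mil minimum coincide, which is consistent with the figures quoted in the text.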
Talking to teens about sexting; Centerville High School taking proactive approach by Kelly May CENTERVILLE, Ohio (WKEF/WRGT) -- At Centerville High School, teachers and counselors are taking a proactive but aggressive approach when it comes to talking to teens about bullying and sexting. "I don't know who's sitting here and maybe going through this right now," said teacher Janet Place. "It's just a very delicate situation." Place teaches computer applications and says she uses the class as a way to educate teens about cyberbullying and how sexting plays a role. "Our focus is more about empowering them, positive apps and things they can do in a positive way," Place said. She also said one of the biggest topics is sending pictures electronically, and when doing so becomes bullying and can bring felony charges in Ohio. "It's a felony; it's illegal at your age," she said she tells students, hoping to give them the facts. "The unfortunate reality is if you're in love and you were 16, and you send a picture, you've committed a crime," Place continued. "Anyone that forwards it and comments on it, does anything in relation to it, that's bullying." "We have to watch as adults how we are speaking to other people," said intervention counselor Beth Miers in Centerville. Miers said she hopes to empower parents to stop cyberbullying by getting specifics from their kids, because bullying can happen on many different apps and social media platforms. "Thinking about what that little girl must've gone through daily when she would walk into a building, and into an environment that wasn't welcoming and wasn't safe to her," said Miers, reflecting on the recent suicide of Bethany Thompson. Thompson, an 11-year-old brain cancer survivor, reportedly took her own life after being bullied in Champaign County. "It's tough but I think you have to stay on top of it," said Place.
Teachers also said a good tip for parents is a "gut check": reminding kids to ask themselves, before they share or post, whether what they are doing would hurt someone more than help.
This invention relates to the curing of tobacco leaf. The temperature and humidity in any type of tobacco curer must be properly controlled if the tobacco leaf is to be cured without spoilage, in the minimum of time, and with the best possible weight of top-quality cured leaf. The curing process depends, inter alia, on the humidity and temperature inside the curer and on the nature of the leaf itself.
import { ModeController } from '../../../../src/graph/controller';
import Graph from '../../../../src/graph/graph';
import { GraphOptions, ModeOption } from '../../../../src/types';

const div = document.createElement('div');
div.id = 'graph-spec';
document.body.appendChild(div);

describe('Mode Controller', () => {
  it('single mode', () => {
    const cfg: GraphOptions = {
      container: 'graph-spec',
      width: 200,
      height: 100,
      modes: {
        default: ['drag'],
      },
    };
    const graph: Graph = new Graph(cfg);
    const modeController = new ModeController(graph);
    expect(Object.keys(modeController.modes).length).toBe(1);
    expect(modeController.modes.default[0]).toEqual({ type: 'drag' });
    expect(modeController.modes.default.length).toBe(1);
    expect(modeController.mode).toBe('default');
  });

  it('setMode', () => {
    const cfg: GraphOptions = {
      container: 'graph-spec',
      width: 200,
      height: 100,
      modes: {
        default: ['drag'],
        edit: ['canvas', 'zoom'],
      },
    };
    const graph: Graph = new Graph(cfg);
    const modeController = new ModeController(graph);
    modeController.setMode('edit');
    expect(modeController.modes.edit).not.toBe(undefined);
    expect(modeController.modes.edit.length).toBe(2);
    expect(modeController.mode).toBe('edit');
  });

  it('manipulateBehaviors', () => {
    const cfg: GraphOptions = {
      container: 'graph-spec',
      width: 200,
      height: 100,
      modes: {
        default: ['drag'],
        edit: ['canvas', 'zoom'],
      },
    };
    const graph: Graph = new Graph(cfg);
    const modeController = new ModeController(graph);
    modeController.manipulateBehaviors(['delete'], 'xxx', true);
    expect(Object.keys(modeController.modes).length).toBe(3);
    expect(modeController.modes.xxx).not.toBe(undefined);
    modeController.manipulateBehaviors('drag', 'dragx', true);
    expect(modeController.modes.dragx.length).toBe(1);
    expect(modeController.modes.dragx[0]).toEqual('drag');
    modeController.manipulateBehaviors(['drag', 'zoom'], ['out', 'xxx'], true);
    expect(Object.keys(modeController.modes).length).toBe(5);
    expect(modeController.modes.drag);
    // remove the behavior
    modeController.manipulateBehaviors('drag', 'dragx', false);
    expect(modeController.modes.dragx.length).toBe(0);
  });

  it('add & remove behavior to several modes', () => {
    const cfg: GraphOptions = {
      container: 'graph-spec',
      width: 500,
      height: 500,
      modes: {
        default: [],
        custom1: [],
        custom2: [],
      },
    };
    const graph = new Graph(cfg);
    const modeController = new ModeController(graph);
    expect(Object.keys(modeController.modes).length).toBe(3);
    modeController.manipulateBehaviors(['aa', 'bb'], ['custom1', 'custom2'], true);
    expect(modeController.modes.custom1.length).toBe(2);
    expect(modeController.modes.custom2.length).toBe(2);
    const custom1: ModeOption = modeController.modes.custom1[0] as ModeOption;
    const custom2: ModeOption = modeController.modes.custom1[1] as ModeOption;
    expect(custom1.type).toBe('aa');
    expect(custom2.type).toBe('bb');
    modeController.manipulateBehaviors(['aa'], ['custom1', 'custom2'], false);
    const customd1: ModeOption = modeController.modes.custom1[0] as ModeOption;
    const customd2: ModeOption = modeController.modes.custom2[0] as ModeOption;
    expect(modeController.modes.custom1.length).toBe(1);
    expect(modeController.modes.custom2.length).toBe(1);
    expect(customd1.type).toBe('bb');
    expect(customd2.type).toBe('bb');
  });
});
RUSSO-TURKISH DIVERGENCE (PART II): THE ENERGY DIMENSION Energy issues figure prominently in the Russo-Turkish relationship, but their impact is not nearly as clear-cut as that of the Iranian and Syrian issues. Turkey and Russia have a complex, evolving relationship characterized by mutual dependencies in the oil and gas spheres. As Richard Weitz stated, “Energy relations between Russia and Turkey have long been characterized by overt friendship and subtle competition.”[1] In the first part of this two-part series on Russo-Turkish relations, the authors contended that during 2009-2010, behind the surface rhetoric of convergence, there frequently lay strategic divergence between Russia and Turkey. In those years, Turkish officials and experts described their relations with Russia as the best ever and said that bilateral harmony featured prominently in the past decade’s international relations. However, Russia and Turkey had already begun to diverge during 2009-2010, and 2011 was a difficult year in Turkey’s relations with Russia in the security sphere. The year 2012 does not look much better. There is no clear consensus in the academic literature as to how convergent or divergent Russo-Turkish energy interests are; that question is the focus of this article. Is there coherence between Turkey’s energy policy and its general foreign policy? What is a “gas hub”? More academic work is needed in this area, including consideration of the implications for European and Central Asian energy security and political relations. The years 2009-2010 saw sudden reversals of fortune in the acute geopolitical rivalry between Nabucco and South Stream. While uncertainty surrounding future demand and financing raised the possibility that neither pipeline would ever become a reality, Moscow seemed to be gaining the upper hand. Nabucco was sinking. Turkey’s stance between Nabucco and South Stream was pivotal.
Turkey faced difficult choices between two competing and mutually exclusive energy supply routes. The Nabucco and post-Nabucco routes served Western Europe’s interest in reducing its dependency on Russian gas, while South Stream would help Russia strengthen its position as the main energy producer for European and Turkish markets. The natural disaster in Japan and revolutions in North Africa in the spring of 2011 ignited a new bout of the pipeline rivalry. By the end of 2011, Ankara had a pair of seemingly contradictory gas pipeline deals (both Nabucco and South Stream), which raised very interesting questions about the nature of Turkey’s energy policy. Although both deals may be part of Ankara’s plan to make itself a major energy transit hub, there was something that sounded dissonant about them coming so close together. With the Azeri agreement, Ankara appeared to be taking part in laying the groundwork for an energy corridor that would help Europe reduce its reliance on Russian gas and that would serve as a template for other pipeline projects–including, ideally, a modified version of the troubled Nabucco pipeline plan–that would bring non-Russian resources from the Caspian and the Middle East westward. On the other hand, with the Russian agreement, Ankara was now taking part in a project that many analysts see as designed to undermine Nabucco and the EU’s plans to dilute Gazprom’s influence in the European energy market. The Russo-Turkish energy relationship is the most intimate aspect of the bilateral connection. Yet the evidence for an erosion of ties seemingly appeared to be contradicted by the December 30, 2011, Russo-Turkish energy agreement. On its face, this accord signals a high degree of amity and cooperation between Moscow and Ankara. 
If, however, one probes more deeply into both sides’ energy goals and policies, it becomes clear that while this accord does signify positive aspects of the Russo-Turkish relationship, matters are much more complicated than a simple reading of the accord would indicate. RUSSO-TURKISH ENERGY RELATIONS AND THE “ENERGY HUB” STRATEGY Turkish elites take it for granted that geography makes Turkey a predestined energy hub for the transit of Eurasian energy through its territory whether concerning the Russian South Stream pipeline, the EU’s Nabucco pipeline, both of these systems, or other alternatives like interconnectors. Beyond that deeply-rooted assumption is the corollary belief that because of this hub status, the more pipelines that traverse Turkey the better–irrespective of their origin. For Turkey to become not just a transit country but a hub, it must compensate for its own lack of indigenous energy sources, which forces it into what one writer calls a “retroactive” strategy of depending on multiple international players and sources for energy, where Russia plays a major role. As domestic demand and consumption grow sharply, Turkey not only must maintain this large volume of imports traversing its territory, it will have to depend on increasing electric power and therefore most probably imported nuclear power to provide that electricity. Turkey, therefore, uses its geographical position to leverage pipeline rights through its territory in return for commitments to invest in its energy sector, including nuclear power plants, as in Russia’s case.[2] Yet even as it depends on Russia, it must balance that dependence with multiple other sources of energy to avoid excessive economic and/or geopolitical dependence on Moscow. In this context, one can better grasp the myriad complexities and repercussions of Russo-Turkish energy relations and explain the Russo-Turkish energy deal of December 30, 2011, as well as the earlier deal with Azerbaijan in October 2011. 
TURKEY AND NABUCCO For several years, Moscow’s primary economic and geopolitical objective in Southeastern Europe has been to lock down the conditions necessary for the start and then completion of the South Stream gas pipeline. This pipeline will bring Russian and Caspian natural gas to Southeast Europe and then Central Europe, bypassing Ukraine and tying Central Asian gas producers to Russia’s pipeline network for years to come. This project would then all but ensure Ukraine’s lasting dependence on Russia, compromise its independence and that of the key Central Asian gas producers too, and tie Balkan states to Moscow. Hitherto, Turkey had been distinctly cool toward South Stream. It got Russian gas through the bilateral Blue Stream pipeline and from multiple other suppliers–Azerbaijan, Iran, etc. Turkey opposed South Stream because its original course would have bypassed Turkey and gone through the seabed of the Black Sea to Bulgaria and then Europe. Yet Turkish officials had occasionally argued that Gazprom could join Nabucco, i.e. reinvigorate the Blue Stream pipeline and integrate it into the Nabucco network. While this solution might work for Turkey, it did not satisfy anybody else. In the meantime, Russo-Ukrainian energy relations have steadily deteriorated as Moscow’s power plays have become more overt and insistent, and Turkish demand for gas is growing while the Nabucco pipeline remains on the drawing boards.[3] At the same time, all of Azerbaijan’s natural gas was contracted out to its immediate neighbors: Turkey, Russia, Iran, and Georgia. For this reason, Azerbaijan’s Shah Deniz II gas field on the Caspian Sea was crucial to the Europeans’ energy plans: it was projected to increase Azerbaijan’s output considerably, from roughly 10 bcm currently to 25 bcm, once the field came online, with most of the natural gas from Shah Deniz II available for export. However, the natural gas produced by this field was not expected to come online for years.
In fact, it was pushed back to 2017-2018 due to price rows between Azerbaijan and Turkey. As a result, all projects are effectively competing with each other for limited supplies.[4] Turkey was already buying around 6 bcm of gas from the Shah Deniz I field, for a very good price. It sells half of that gas on to Greece at a much higher price. Eight bcm of gas was expected to come from Azerbaijan’s new Shah Deniz II gas development. Baku and Ankara, however, could not agree on how much Azerbaijani gas should go to Turkey, at what price, and under what conditions. Baku insisted that the old pricing formula needed to be revised. Turkey disagreed. Katinka Barysch, deputy director for the Centre for European Reform in London, contended that the problems with Nabucco were due to Turkey and Azerbaijan. In her analysis, “As long as this issue is not resolved, an agreement on the Shah Deniz II gas looks unlikely. Without that gas, it is hard to see how Nabucco could get under way.”[5] At the turn of 2010, when Nabucco’s fiasco was discussed far and wide, Azerbaijan began losing its interest in Nabucco. Early in September 2010, during President Medvedev’s visit to Baku, Gazprom and the State Oil Company of the Azerbaijan Republic (SOCAR) signed a contract under which Azerbaijan pledged to double its gas exports to Russia to reach 2 bcm in 2011. 
In 2010, Russia bought 1 bcm of gas from Azerbaijan, compared to 0.5 bcm in 2009.[6] In March 2011, as a means to pressure Turkey, Moscow suggested it might give up on South Stream and review alternative means of supplying Europe with gas, notably LNG (liquefied natural gas), because Turkey would not agree to laying South Stream along the bottom of the Black Sea in its territorial waters.[7] Turkey then decided to give Nabucco a boost by organizing a signing ceremony in Kayseri on June 8, 2011, for Project Support Agreements between the Nabucco companies and the ministries of the transit states, Austria, Hungary, Bulgaria, Romania, and Turkey.[8] The Kayseri deal signaled that after a long period of hesitation, the U.S.- and EU-backed project was at last gaining traction. Meanwhile, in early July 2011, the Bulgarian election brought to power former Sofia mayor Boyko Borisov, who was already on record as wanting to take Bulgaria out of the Russia-led South Stream consortium. Without Bulgaria on board, the South Stream project may well be dead. At the same time, Turkey was pressing Gazprom to cut its prices, warning that it would otherwise end a 25-year-old supply agreement with Gazprom. Turkey’s decision accords with the European and Chinese stance toward Russian gas, which demands lower prices and an end to rigid, multi-year take-or-pay contracts.[9] Energy Minister Taner Yildiz stated that Gazprom’s prices had risen by 39 percent since mid-2009, despite falling demand, as Turkey was tied to long-term take-or-pay contracts. Turkey therefore had to contain its losses. Ankara clearly felt that it had some leverage with Russia, since under present conditions it is actually oversupplied with gas.[10] On this basis, Turkey proceeded to show its leverage by making the deal with Azerbaijan in October 2011. On October 1, 2011, Turkey announced that it would not renew the purchase of Russian gas delivered through the Western Balkan pipeline route after 2012.
The official reason was the high price of Russian gas. Since Gazprom would not grant the discounted prices Turkey sought in a depressed market, Turkey annulled the agreement.[11] Russian media and business circles immediately reacted by claiming that this was part of a concerted anti-Russian attack by Europe and Turkey on Russian gas policy.[12] Yet the reality was different and casts a critical light on both states’ policies, as does the subsequent December 2011 agreement. First, Turkey already imports about 60 percent of its gas from Russia and therefore worries about strategic over-dependence. Second, Gazprom had rebuffed Turkey’s requests for easing the onerous take-or-pay clauses in their contract, which raise Turkish payments even as imports contract.[13] Meanwhile, Russia has done everything in its power to block any alternative to its domination of Caspian and Central Asian gas flows to Europe, thereby ranging itself squarely in the path of Turkey’s long-standing ambition to be an energy hub for the distribution of that gas to Europe.[14] Since Russia has also generally refused to accede to other customers’ requests for price cuts, Turkey signaled that it would no longer depend exclusively on Russian gas and that it had other options. Ankara also responded to its domestic critics’ complaints about the primacy of the state company BOTAS by allowing private importers to assume the contracts with Gazprom in search of better prices.[15] Turkey also hopes for contracts with Egypt and Iraq, for the ITGI interconnector from Azerbaijan, for routes from Turkmenistan to Romania, Hungary, and Italy, and for the projected Nabucco pipeline, and it possibly hopes to force its way into the newly discovered Eastern Mediterranean gas fields. Immediately after giving this signal to Russia, Turkey turned to Azerbaijan and signed a major gas deal with it on October 25, 2011.
Turkey will get 6 bcm of gas annually from Azerbaijan’s Shah Deniz-II field, recovering what it lost from Russia by its earlier termination of the contract from the Western Balkans pipeline. Turkey will also serve as a transit point for another 10 bcm annual supply of gas to Europe through spare capacities in its pipelines. These accords also envisage building the new Trans-Anatolian Dogalgas pipeline for Azeri gas through Turkey, while the existing line’s operation–which transports Azeri Gas from Shah Deniz-II–should go into effect by 2017 and send gas until 2043. These agreements ensure that Azeri gas can traverse a dedicated infrastructure to Turkey and then flow to Europe either through the Nabucco pipeline, the Trans-Anatolian pipeline, or through one of the many other alternative pipelines currently under consideration, e.g. the ITGI Interconnector.[16] Moreover, since the announced agreement refers to the new Trans-Anatolian pipeline as carrying an “initial” volume of 16 bcm, this suggests that Azerbaijan hopes to increase its volume to 24 bcm, especially as it projects an estimated annual production of 50 bcm by 2017.[17] Gazprom will thus lose significant revenue, and Russia considerable political leverage, as Azerbaijan is charging a significantly lower price to Turkey than Russia charges and received a side payment to make up the difference between its price and what Gazprom charged. These agreements also resolve all issues of gas transit between SOCAR–Azerbaijan’s company–and BOTAS–Turkey’s state-run energy firm–which have essentially replaced Gazprom with Azerbaijan as gas suppliers. Finally, and worse for Russia, these accords open the way for Moscow’s greatest fear, namely the southern corridor for gas that the EU is pursuing and by which Turkmen and Azeri gas will (if not also Kazakh gas) bypass Russia, flow directly to Europe, and strike a decisive blow to Gazprom and Moscow’s power over them and Europe. 
This deal also strikes at the plans for the Nabucco pipeline, since there will be no need for a Turkish sector and the builder of Nabucco need only connect gas from Turkey to Bulgaria to the distribution point of Baumgarten in Austria. In other words, whether or not Nabucco is actually built, Turkey will get Azeri gas, and what it cannot use will then go to Europe. Thus, Ankara is protecting itself against Nabucco’s continued dithering and inability to organize itself, even while ensuring a dedicated European gas supply route.[18] Since 2002, Nabucco has been the only pipeline project for transporting Caspian gas to EU territory. However, the EU’s southern gas corridor now includes a number of other potential projects for delivering gas from the Caspian region to Europe, including ITGI (Turkey-Greece-Italy), White Stream (across the Black Sea from Batumi to Ukraine), and TAP (Trans-Adriatic gas pipeline). Nabucco relied exclusively on Azerbaijani gas for the pipeline’s first stage. The hopes to add gas volumes from northern Iraq proved unrealistic in any usable time-frame. The European Commission’s efforts notwithstanding, Nabucco lost momentum and, ultimately, credibility in its existing form. The tipping point may be traced to November-December 2011, when several adverse developments converged. The European financial crisis deepened, indefinitely postponing any decisions by lending institutions to finance Nabucco. In addition, the project’s management did not come forth with a long-expected correction to its cost estimates. The Shah Deniz gas producers’ consortium in Azerbaijan could no longer postpone the investment decision in that project’s second phase, which necessitated determining the transportation solution in early 2012. With that clock ticking, Nabucco was seen to be far from ready with the solution. Conversely, the Nabucco project’s planning assumptions outran the slow development of a trans-Caspian solution for Turkmen gas. 
Ultimately, British Petroleum and Azerbaijan’s State Oil Company offered in October and December 2011, respectively, transportation solutions to replace Nabucco outright. Prime Minister Erdogan correctly stated in December that Turkey was ready for the construction of Nabucco, but the EU was not.[19]

TURKEY AND SOUTH STREAM

There is also a dimension of Balkan rivalry here. The immediate destination for Russia’s planned South Stream and other existing pipelines are mainly in the Balkans, and Moscow has consistently used its gas weapon to leverage its influence in the Balkans. In Turkey’s case, Moscow played to Ankara’s long-standing ambition to become an energy hub. Ankara also welcomed the opportunity to create a vibrant energy and economic relationship with Russia that accepts large-scale Russian investment in downstream and domestic distribution of energy networks inside Turkey. Consequently, Moscow eagerly invested in such projects.[20] However, Moscow typically employed the same tactics in Greece and Bulgaria, promising them both that they would become hubs for the distribution of Russian oil and/or gas if they allowed such investment in their domestic energy networks and gave Moscow majority shares in projected oil and gas pipelines through their countries.[21] For example, a Greek analysis observed that relations between Moscow and Athens had improved dramatically after the New Democracy Party won the Greek elections in 2004. Russian sources soon realized that Prime Minister Konstantinos Karamanlis intended to follow an individual policy bolstering mutual cooperation on issues like arms sales to Greece. Russia fostered a common understanding with Greece on issues in energy and defense, supported Greece on the Cyprus issue, incited it against the United States, and pushed it to overcome Bulgarian reservations about an oil pipeline, the Burgas-Alexandroupolis pipeline. 
At the same time, Karamanlis publicly embraced the idea that the Burgas-Alexandroupolis oil pipeline and South Stream gas pipeline would turn Greece into an energy hub.[22] Through these mechanisms Russia clearly loosened Greece’s support for unified European policies directed toward Russia.[23] Russia made the same pitch about becoming an energy hub to Bulgaria. Speaking about the three current Russian projects with Bulgaria, Energy Minister Sergei Shmatko said: “We do not doubt that the implementation of those three projects is exceptionally important for Bulgaria itself. The projects will allow Bulgaria to become a very important energy center in South Europe and a powerful energy transit junction in Europe. I think that Bulgaria’s current leadership, which has in mind the country’s long-range national interests, must excellently understand this.”[24] Of course, should Bulgaria opt for other non-Russian alternatives–and this seems quite unlikely given its near total current dependence on Russian energy–Russian Ambassador to Bulgaria Yuri Isakov reminded Sofia that “Russia has other ways of implementing its energy interests.”[25] Likewise, Moscow has told Greece that it has alternatives to it as well. Thus, Moscow has frequently raised the possibility of Romania joining South Stream, a possibility intended to, but which ultimately did not, deter Bulgaria from leaving the program. Russia has also incited Serbia to join it lest it be left out. Although it appears Romania has definitively rejected participation in South Stream, it is clear that Moscow’s strategy is not just to divide and conquer in Southeastern Europe, but also actively to incite rivalries among Balkan states to make them each feel they will all be energy hubs or simply left out of the game. 
The reality, however, is that they are merely cutting up the same existing pie and that will probably be insufficient in the future to meet European demands.[26] The political results of this divide and conquer strategy were quite evident in 2008 as Romanian journalist Cristian Campeanu observed: “Moscow would thus dominate Europe’s energy and political agenda by means of ‘divide and rule’ tactics, as we could see on the occasion of the war in Georgia, when the countries that had lucrative agreements with Gazprom–France, Germany, Italy, and Hungary–claimed to see nuances in something that was simply a brutal Russian aggression.”[27] In that context, Turkey’s deal with Azerbaijan not only told Moscow that Ankara had other alternatives to it but also showed Moscow that it could lose South Stream given Romania’s unwillingness to join, Bulgaria’s retreat from the program, and Turkey’s new-found alternative options to bring Azeri and possibly trans-Caspian gas to Europe. Furthermore, this Turkish-Azeri deal is seen in the West as complementing Nabucco, ensuring that Azeri gas reaches Europe and cementing an overall reconciliation between Ankara and Baku. Not surprisingly Washington also welcomed the deal.[28] Meanwhile BP is also proposing a new concept and system for transporting Azeri gas to Eastern Europe called the South-East Europe Pipeline (SEEP) that would use existing pipelines while leaving open for the future the option of Turkmen gas reaching Europe. This too would, if implemented, undermine Russia’s South Stream grand design.[29] Ultimately, the accords with Azerbaijan eliminate all legal and political barriers to transporting Azeri gas through Turkey to Europe through any of the potential pipeline alternatives. The consortium operating Azerbaijan’s Shah Deniz-II field can now proceed knowing it has a secure market and pipeline. In the meantime, Turkey has reduced its current account deficit by an estimated $2 billion annually. 
The Southern Corridor championed by the United States and the EU can now open without impediments and the way is open for Turkmenistan to supply gas to Europe directly, as it wants to do, rather than through Russia. Moreover, the EU is planning just such a pipeline that would link Turkmenistan to Azerbaijan along the Caspian Sea’s seabed.[30] Moscow quickly grasped what was at stake and resumed discussions with Turkey. Essentially Moscow retreated somewhat on its refusal to cut the price of gas sold to Turkey in return for Turkey granting permits to Russia to begin work on South Stream in Turkey’s Exclusive Economic Zone (EEZ) in the Black Sea, which is the only way the project can be realized. This gives Moscow the prospect of revitalizing South Stream, as it desperately wanted, and Turkey gets price relief, even though it has consistently failed to take all the gas available from Russia due to the over-subscription of its gas market. In exchange for the permits to begin work in Turkey’s EEZ, Turkey will buy 3 bcm of Russian gas originally slated for take-or-pay (at the earlier higher prices of several years ago). In addition, both sides have agreed to contracts for long-term delivery of gas to Turkey from two sources until 2021 and 2025 at discounted prices.[31] From Russia’s standpoint, South Stream can go ahead. Moreover, it has now gained considerably more leverage against Ukraine, which is undoubtedly the big loser here. Russia has long been pressing Ukraine for a takeover of its gas pipeline and domestic distribution network. The construction of South Stream has been the stick it uses to frighten Ukraine into believing that if it refuses Moscow, Russia will build South Stream and simply bypass it, leaving it without any gas at all. Therefore, Ukraine has long regarded South Stream as a direct threat to its number one national asset, its gas network. 
The deal with Turkey places enormous pressure upon the Ukrainian parliament to legislate a Russian takeover lest Ukraine be totally bypassed by South Stream.[32] In the past, Ukraine had shown interest in building an LNG terminal in Turkey to reduce Russian pressure. Ukrainian Foreign Minister Konstantin Hryschenko had visited Turkey on December 22, 2011, to establish a joint working group on energy cooperation in order to facilitate their common goal of diversification of energy sources. Yet it is not known if Ankara disclosed its negotiations with Moscow to Kiev. [33] Along with this agreement, Moscow has moved up the timetable to start construction of South Stream to 2012. While it is very unclear if the gas is there to supply South Stream or if its supporters can meet the continually escalating costs of the project, Moscow is clearly trying to preempt the entry into force of the EU’s Third Energy Package in March 2013. Time will tell if this deal helps Moscow with that goal. South Stream could turn out to be an enormous bluff to force Ukraine to submit to Moscow. Those, however, are certainly the costs to Ukraine–and possibly Europe–of this Turkish deal in December 2011.[34] The value of this deal to Russia should not be underestimated. Without South Stream, not only would it lose leverage on Ukraine, but also on trans-Caspian producers. This is especially relevant as the Azeri-Turkish deal opens up more possibilities for them. In that context, Moscow has done everything possible to intimidate Turkmenistan and Azerbaijan from shipping gas directly to Europe and destroying its grand strategic design of monopolizing gas flows to Europe and thus controlling the CIS.[35] To the extent that it loses the power to intimidate and corrupt Europe by exercising near monopoly power on European gas imports, Moscow will lose its most powerful instrument for influencing European politics on a day-to-day basis. 
Therefore, Moscow is prepared to coerce these states to join its deal to keep them from signing up to the European and Turkish networks. On October 19, 2011, Turkmenistan’s Foreign Ministry blasted Russia’s politicized objections to the former country’s participation in a Trans-Caspian pipeline (TCP). It stated that such a pipeline was an objective and vital economic interest of Turkmenistan. It further rebuked Moscow for “distorting the essence and gist of Turkmenistan’s energy policy” and announced that the talks with Europe would continue.[36] Moscow’s response soon followed. On November 15, 2011, Valery Yazev, vice-speaker of the Russian Duma and head of the Russian Gas Society, openly threatened Turkmenistan with the Russian incitement of an “Arab Spring” if it did not renounce its “neutrality” and independent sovereign foreign policy, including its desire to align with Nabucco. Yazev said: Given the instructive experience with UN resolutions on Libya and the political consequences of their being “shielded from the air” by NATO forces, Turkmenistan will soon understand that only the principled positions of Russia and China in the UN Security Council and its involvement in regional international organizations–such as the SCO (Shanghai Cooperation Organization), CSTO (Collective Security Treaty Organization), and Eurasian Economic Union–can protect it from similar resolutions.[37] In other words, Turkmenistan should surrender its neutrality and independent foreign policy and not ship gas to Europe, otherwise Moscow will incite a revolution there leading to chaos. Other Russian analysts and officials threatened that if Turkmenistan were to adhere to the EU’s planned Southern Corridor for energy transshipments to Europe that bypass Russia, Moscow would have no choice but to do to Turkmenistan what it did to Georgia in 2008.[38] This campaign represents another example of Russia’s turn toward coercive diplomacy. 
It not only resembles what is happening in Russian policies toward Ukraine and Belarus, it also complements what has been seen in regard to missile defenses and Syria and what is described below in the case of Cyprus. At the same time, Turkey’s moves against Gazprom and Moscow in October 2011 were apparently long-planned and therefore part of its larger strategy. The agenda for the meeting with Azerbaijan’s President Ilham Aliyev was kept secret. Turkey had continued to stall on South Stream while Moscow had disregarded Turkish demands for price cuts or for the Samsun-Ceyhan oil pipeline, in this case due to Turkish oil tariffs.[39] Russian sources saw Russia’s deals with European governments and firms over the projected South Stream pipeline and Turkey’s desire to join the EU as driving this “anti-Russian” campaign. In reality, though, the EU has been very divided over South Stream.[40] Nonetheless, it is clear that as Russian pressure on Ukraine to hand over its gas pipeline network to Moscow grows, Turkey’s dependence on Russian gas becomes more of a liability and increases its one-sided dependence on Russia. At the same time, Moscow thought it could disregard Turkish economic interests, as suggested above. Indeed, Gazprom’s reluctance or even refusal to reduce its prices unless compelled to do so is clearly triggering resistance throughout Europe, including Turkey. Aiding this resistance is the fact that European customers now rely on the appearance of Qatari and Algerian LNG or shale gas. Therefore, Moscow must hope to restore the cuts in deliveries by making deals with private Turkish importers who are ready to negotiate terms or make a deal with Turkey that would then open the way for both private and state-to-state deals. This is indeed what it chose.[41] Yet it is unlikely those Turkish companies will accept the onerous take-or-pay clauses and high prices that feature so prominently in Gazprom’s contracts. 
Therefore, the question arises, did Turkey gain or lose from its December 2011 deal with Russia? Ironically, that deal followed hard on the completion of the final accord to build the Trans-Anatolian pipeline with Azerbaijan, prompting analysts to speculate just what Turkey had gained (as it is clear what Moscow gained and Kiev lost here) from these two deals. Some observers feel that Moscow called Turkey’s bluff and insisted that South Stream be approved if Turkey wanted to get cheaper gas from Moscow, and that Turkey thus had no choice but to agree.[42] Vladimir Socor argues that Turkey derives no real benefit from the deal with Russia, as it failed to get Black Sea oil through the Samsun-Ceyhan pipeline, which was not mentioned in the deals with Moscow.[43] Attila Yesilada, an analyst at Global Source Partners, also criticized the deal with Moscow for surrendering the EU to Moscow while apparently giving up on Nabucco (he believes that the deal with Russia also entailed Turkey’s surrender of its position in Nabucco in return for Russian concessions on the take-or-pay clause in Russo-Turkish contracts). Yet he withheld final judgment, because it is not clear that South Stream is a real project or a means by which to take over Ukraine’s gas network. Since no source of new gas for South Stream has yet been announced, it is by no means certain that it would supplant the existing Ukrainian pipeline. In the latter case, Nabucco is still alive, and the Trans-Anatolian pipeline is feasible. Still, he worries that Turkey may be driven to seek revenge on the EU at the expense of its own and EU interests.[44] On the other hand, Alex Jackson argues that Turkey’s deal with Russia could actually bolster Nabucco, as the Trans-Anatolian pipeline is going to be built regardless of whether commitments to buy its gas come first. 
If this pipeline could undercut the demand for South Stream’s product in Europe, South Stream–given its high costs and many uncertainties–may never be built. Then Turkey would get transit rights for gas going to Europe while still holding a binding contract with Russia for discounted gas.[45] Jackson argues that Turkey has played both pipelines off against each other and that this too prompted Putin to move up the construction of South Stream to 2012 in order to preempt the Trans-Anatolian pipeline.[46] Indeed, Turkey and Azerbaijan can argue that they have established a route for Nabucco through Turkey, even if they have not formally signed onto the project. It should, therefore, be easier for Nabucco to scale its pipeline to Turkey’s existing pipeline networks, namely the Trans-Anatolian pipeline and the interconnector to Greece and Italy.[47]

CONCLUSION

The complex Russo-Turkish energy relationship has hitherto been based on friendship and solid commercial interests, whereby Russia sells Turkey huge amounts of gas and oil to the point where about two-thirds of Turkish energy (especially gas) is imported from Russia. Naturally, this situation offers Russia significant benefits beyond the revenue it gains from a steady customer. It also raises for some the specter of excessive Turkish dependence on Russia and its gas. Russia’s ambition to dominate the European–and particularly Eastern European (including Turkey)–gas market is quite visible. Yet that ambition conflicts with Turkey’s overwhelming elite consensus to be the gas hub for Eurasia, a consensus based on a belief that geography–if not other factors–foreordains Turkey to enjoy the status of this energy hub. As a result, elements of rivalry have entered into Russo-Turkish energy relations. 
Turkey’s desire to escape a one-sided dependence on Russian energy and to advance Azeri interests may have compromised the EU’s Nabucco program, but the Azeri-Turkish deals of late 2011 have put in its place a much more solidly based overland pipeline alternative to Russia’s South Stream proposal, about which there are still many unresolved questions. In other words, Turkey has agreed to be a major conduit and player in a pipeline that by its very nature contradicts and clashes with Russia’s grand design. Considering the importance of energy to both states’ overall geopolitical ambitions and perspectives, this likely means that there will be a growing disjunction between their policies–even if they strive to maintain the previous status quo. This can already be seen in Cyprus, where Turkish efforts to block Cyprus’ exploration of gas off its coast have led to the appearance of Russian warships in Cyprus and statements of support for it as well as Russian interest in exploring for gas there. Likewise, Moscow’s well-known ambition to dominate the Caucasus exclusively also encompasses Azerbaijan and its energy economy. These recent events are therefore most likely harbingers of a new trend that may emerge slowly but will nevertheless make its influence felt. Moreover, these tensions will reinforce those chronicled earlier over the fate of Syria’s revolution and concerning missile defenses integrated with the United States and NATO in Turkey. While both sides are probably loath to admit that the good old days of a halcyon relationship are over and will undoubtedly make efforts to retain that relationship, competition between Ankara and Moscow in the Eastern Mediterranean, Caucasus, and over energy is as likely as not to be the order of the day in the near future. *This work was supported by a National Research Foundation of Korea Grant funded by the Korean Government (NRF-2010-327-B00053). 
*Younkyoo Kim (Ph.D., Purdue University) is an associate professor at the Division of International Studies, Hanyang University, Seoul, Korea, where he has taught international relations since 2004. Previously, he taught at various colleges in the United States, including DePauw University, Butler University, and Saint Mary’s College. He is the author and co-author of over 30 scholarly articles and monographs, and author or editor of four books, including Russia and the Six-Party Process in Korea (2010), The Geopolitics of Caspian Oil: Rivalries of the U.S., Russia, and Turkey in the South Caucasus (2008), and The Resource Curse in a Post-Communist Regime (2003). His research interests have been focused on issues of energy security and international relations in East Asia and Eurasia.

*Stephen Blank (Ph.D., University of Chicago) is Professor of Russian National Security Studies at the Strategic Studies Institute of the U.S. Army War College in Pennsylvania. Dr. Blank has been Professor of National Security Affairs at the Strategic Studies Institute since 1989. In 1998-2001, he was Douglas MacArthur Professor of Research at the War College. He has published over 700 articles and monographs on Soviet/Russian, U.S., Asian, and European military and foreign policies, testified frequently before Congress on Russia, China, and Central Asia, consulted for the CIA, major think-tanks and foundations, and has chaired major international conferences in the United States and abroad. He has published or edited 15 books focusing on Russian foreign, energy, and military policies and on international security in Eurasia. His most recent book is Russo-Chinese Energy Relations: Politics in Command, London: Global Markets Briefing, 2006. He is also author of Natural Allies? Regional Security in Asia and Prospects for Indo-American Strategic Cooperation (Strategic Studies Institute, U.S. Army War College, 2005).
Green Sprouts Warming Plate

The Green Sprouts Warming Plate will keep baby's food heated without the use of electricity. Just add warm water through the spout to keep food warm during mealtime. As an added bonus, the Warming Plate has a non-slip suction on the bottom to keep it securely in place. Dishwasher safe.

Danielle, February 8th, 2011: I just received this plate, and it has made feeding so much easier for my little one! I highly recommend it. We just add a cup of hot water to the reservoir, and it keeps her food warm all through the meal, leading to a happier baby and easier feeding for us. When we are done, we just clean out the plate, empty the water, and turn it upside down on the drying rack with the lid open for airing out. We've not had a problem with mold, mildew or deposits. Can't say enough about this! 
Charlotte from Cobble Hill, BC, January 11th, 2010: This bowl is great! I have filled it with hot tap water and it stays hot for over half an hour. It's smooth and solid, and the suction base works very well. Cute design and pleasant colour. A: The Green Sprouts Warming Plate is made in China. A: It works by creating a pressure differential between the atmosphere and the area under the suction ring. The higher pressure in the room actually pushes the plate down onto the table surface, rather than it being 'sucked' down as is commonly believed.
Flames could be seen shooting up at the scene of a 5-alarm fire in Mount Eden just before dawn Thursday morning. No one was injured in the blaze. To see a short video of the blaze in Mount Eden, visit www.SentinelNews.com and click on the headline about the fire. The glow of flames from a predawn blaze in Mount Eden on Thursday split the darkness as dozens of firefighters from five departments struggled to control a fire that consumed a downtown building, damaged two others, and threatened even more. But after several hours, firefighters had the fire reduced to a long line of smoldering rubble that had once been prominent businesses in Mount Eden. Magistrate Tony Carriss, who lives just north of Mount Eden, said he doesn’t know how old the buildings were but that the one that was destroyed has been abandoned for many years. “I remember when I was just a little boy, there was a grocery store in one of them,” he said. Mount Eden Fire Chief Doug Herndon said a call came into 911 at 5:10 a.m., reporting a structure fire at the corner of Mount Eden and Van Buren roads. The building was located in Spencer County, literally on the Shelby County line. Herndon said he called in so many departments for mutual aid (Waddy, Shelbyville, Shelby County and Spencer County) because the fire was raging very close to numerous houses and businesses, making the potential for disaster a very real possibility. “We fought this fire with everything we had, and I have to say, I am proud of everybody who worked to get this under control, because it could have spread so easily,” he said. “Everybody did a great job.” Herndon said he does not know who owns the building that was destroyed, but he said he thinks it’s someone who lives in Jefferson County. According to the Spencer County PVA, the building’s address is 26 Van Buren Road and is owned by Richard Reid of Louisville. 
The dilapidated building is not part of a score of rundown and deserted buildings that some residents of Mount Eden have been working to get cleaned up to keep them from being such an eyesore in the small city’s downtown area. Two adjoining buildings received minor damage. Herndon said he does not know what caused the blaze, but he said it is suspicious because the buildings were not occupied. “I think one of them was being used for storage, but there was no electricity on in any of them,” he said. An investigation into the cause of the fire is scheduled to begin within a few days.
(1) Field of the Invention

This invention relates to the formation of high capacitance capacitors on integrated circuit wafers and more particularly to capacitor plates using etchback of polysilicon hemispherical grains.

(2) Description of the Related Art

Polysilicon hemispherical grains, HSG polysilicon, are used to increase the surface area of capacitor plates used to form integrated circuit capacitors, particularly for DRAM circuits. The HSG polysilicon is formed on a conductor, usually polysilicon, used to form capacitor plates. Etchback of the HSG polysilicon using vertical anisotropic etching forms an irregular top surface of the capacitor plates. HSG polysilicon is also used on the sidewalls of the capacitor plates; however, adhesion of the HSG polysilicon to the sidewalls can be a problem. U.S. Pat. No. 5,256,587 to Jun et al. describes methods of forming capacitor plates using a hemisphere particle layer having hills and valleys on a layer to be etched. The hemispherical particle layer is used on the top surface of the capacitor plates. U.S. Pat. No. 5,254,503 to Kenny describes the use of sub-lithographic relief images to increase the surface area of the top surface of capacitor plates. Polysilicon and porous silicon can be used to form the sub-micron relief pattern. U.S. Pat. No. 5,082,797 to Chan et al. describes the use of a texturized polysilicon structure to increase the area of capacitor plates. A polysilicon structure is subjected to a wet oxidation followed by a wet oxide etch to form the texturized polysilicon structure. U.S. Pat. No. 5,447,878 to Park et al. describes the use of an HSG polysilicon layer to form extended surface area on both the top and the sidewalls of capacitor plates; however, an anneal step after the formation of the HSG polysilicon layer and a timed oxide back etch are not described. U.S. Pat. No. 5,492,848 to Lur et al. describes the use of silicon nodules formed on the top surface of capacitor plates to increase surface area. 
U.S. Pat. No. 5,134,086 to Ahn describes exposing a first polysilicon layer, an oxide layer, and a second polysilicon layer consisting of grains to an oxide etchant. The oxide etchant penetrates the grain boundaries of the second polysilicon layer and etches the oxide layer at the grain boundaries. The etching forms an irregular surface which increases surface area. The irregular surface area is on the top surface of the capacitor plates. U.S. Pat. No. 5,358,888 to Ahn et al. describes the use of polysilicon hemispherical grains to form an irregular surface on the top surface of capacitor plates. A paper entitled "A CAPACITOR-OVER-BIT-LINE (COB) CELL WITH A HEMISPHERICAL-GRAIN STORAGE NODE FOR 64 Mb DRAMs", by Sakao et al., IEDM, 1990, pages 27.3.1-27.3.4 describes using etchback of HSG polysilicon to increase the surface area of capacitor plates. The use of an anneal step or a timed oxide etchback step is not described. This invention describes the use of HSG polysilicon along with an anneal step and a timed oxide etchback step to form an irregular surface on the top and sidewalls of capacitor plates thereby increasing surface area and capacitance. The method of this invention prevents individual grains from breaking away thereby resulting in improved chip yield.
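All of the techniques surveyed above pull on the same lever: for a parallel-plate structure, capacitance scales with plate area. As a rough sketch of the background physics (this relation is standard and is not itself stated in the cited patents):

```latex
C \;=\; \varepsilon_0 \,\varepsilon_r \,\frac{A}{d}
```

For a fixed dielectric permittivity $\varepsilon_r$ and thickness $d$, roughening the plate with HSG polysilicon raises the effective area $A$, and hence the capacitance $C$, without enlarging the cell's footprint. Extending the roughened surface from the top of the plate to its sidewalls, as this invention does, increases $A$ further.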
Q: Set JSlider's spacing to multiples of ten I want a JSlider which has five different values: 1000 (thousand), 10000 (ten thousand), 100000 (a hundred thousand), 1000000 (one million) and 10000000 (ten million). As you can see, every value is ten times as much as its predecessor. Since the spacing between values isn't constant, I can't simply set the minorTickSpacing, so I am asking how to handle the spacing. A: You're going to have to pretend. Set the slider minimum to 3 and set the slider maximum to 7. Get your value with the following line of Java code. double value = Math.pow(10.0D, (double) slider.getValue());
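The trick is to let the slider store the base-10 exponent and recover the real value as a power of ten. The mapping itself is independent of Swing; here is a minimal sketch in Python for brevity (the Java equivalent is the Math.pow one-liner above, with a hypothetical helper name chosen for illustration):

```python
def slider_to_value(slider_position: int) -> int:
    """Map a slider position in [3, 7] to 1000 .. 10000000.

    The slider stores the base-10 exponent; the real value is 10**exponent.
    """
    return 10 ** slider_position

# The five slider positions 3..7 recover exactly the five desired values.
values = [slider_to_value(p) for p in range(3, 8)]
# values == [1000, 10000, 100000, 1000000, 10000000]
```

With this mapping, setMajorTickSpacing(1) on the exponent scale gives evenly spaced ticks, one per power of ten.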
Lahiff joins the Rebels London Irish prop Max Lahiff will make the move to Australia next season after signing for the Melbourne Rebels. Lahiff qualifies to play for Australia through his father, and has made 59 first team appearances since joining the London Irish Academy in 2009. Lahiff, who has played a prominent role for Irish in the Aviva Premiership this season, said he was looking forward to the opportunity of playing Super Rugby in Melbourne, also paying tribute to London Irish for giving him the opportunity to develop. "Moving to the Rebels is an exciting opportunity for me and I am looking forward to playing in the Super Rugby competition. There are still two games left for London Irish this season and I will be playing my heart out to ensure we finish the season well," said Lahiff. "I have thoroughly enjoyed playing for London Irish. The club signed me from school and have played an imperative part in my development as a player. I would like to thank all the supporters; they have been great to me since I have been here." Rebels head coach, Damien Hill, said Lahiff would be suited to Super Rugby and will complement the style of rugby the Rebels are building towards. "At only 23 years of age, he has played a considerable amount of rugby for London Irish at the highest level. He is a very athletic and powerful prop, which will suit the speed of Super Rugby," said Hill. "He has the ability to play both sides of the scrum, which is a valuable skill as a modern prop forward." Lahiff will join the Rebels at the beginning of the club's off-season in October.
For decades, Silicon Valley has been a magnet for tech talent, pulling the world’s best and brightest into California’s Bay Area. It’s easy to understand why: Silicon Valley hosts many of the world’s most successful tech firms, including Apple, eBay, Facebook, Google, Netflix and PayPal. These firms offer employees prestige, high salaries and opportunities to work on cutting-edge projects. In addition, there are thousands of startups in Silicon Valley to attract those who want to build something from the ground up. However, recent analysis of Indeed job search data reveals a striking trend: Growing numbers of Silicon Valley tech workers between the ages of 31 and 40 are searching for work elsewhere. In fact, the share of clicks to jobs outside of Silicon Valley by tech workers aged 31-40 increased 12 percentage points over the course of 2015—compared to a rise of just eight percentage points for tech workers overall. A closer look at the other age groups reveals just how concentrated this outward-looking trend is among those in their thirties. Outward search from workers aged 21-30 and those aged 41-50 notched a bump of just 3 percentage points each in 2015. The tech sector remains robust, however. So what lies behind this trend? Silicon Valley’s sky-high cost of living is pushing people out The cost of living is likely a major contributing factor. To take just one measure: According to real estate brokerage Redfin, the median sale price of a home in Silicon Valley is $1,050,000. Sky-high housing costs are driving what Redfin describes as a “digital exodus.” Whether or not the situation is quite that dramatic, Indeed data not only confirms that tech workers are searching for work in other hubs but also shows us precisely where they are looking. Although the median sale price of a home in New York, the number one city, can be high (more than $1 million in Manhattan), the picture changes drastically when you consider boroughs. 
For instance, prices in Brooklyn are less than half of those in Manhattan or Silicon Valley. And the other cities on our list feature house prices that are much, much lower than in the Bay Area. In San Diego, for instance, the median sale price of a home in late 2015 was $460,000. In the alternative tech hub of Austin, however, it was $270,000—a quarter of what it costs to buy a home in Silicon Valley. Even allowing for lower salaries, an experienced tech worker’s take-home pay goes much further outside the Bay Area. Unstable startups are less suited to experienced workers Another explanation for why tech workers are searching outside Silicon Valley could be the instability of startups. After all, according to Fortune Magazine, 90% of startups fail, and of those that survive only a handful go on to become a multi-billion dollar “unicorn” like peer-to-peer rental powerhouse Airbnb. Some tech workers aged 31-40 may be priced out of Silicon Valley’s startup-oriented labor market as they advance in their career and gain experience—which also leads them to require greater compensation. A typical startup cannot afford to pay high salaries, and instead offers the promise of a significant payout once the company is successful. More experienced workers may be less inclined to gamble on a startup and are increasingly looking for higher salaries outside of the Bay Area. Other hubs will benefit from the tech migration Silicon Valley will continue to thrive as a worldwide leader in launching innovative companies. However, the growing pool of experienced talent searching in other cities represents a clear opportunity for employers in alternative and emerging tech hubs. After all, there are many more jobs calling for software skills than there are job seekers to fill them. Employers in these smaller hubs should be taking advantage of every shift in the market that makes them more attractive to their target candidates. 
Those keen to attract these more experienced workers can craft creative recruiting strategies, which we explore in more detail in the Indeed study Beyond the Tech Talent Shortage. Silicon Valley’s loss may be these other cities’ gain in other ways. A 2005 study by the National Bureau of Economic Research looked at Nobel Prize winners in physics, chemistry, medicine, and economics over the last 100 years, as well as inventors of innovative technologies. The findings? People in their twenties were responsible for 14% of the innovations, the same proportion as the over-fifties. A further 30% came from people in their forties. Meanwhile, individuals in their thirties were responsible for 40% of the innovations—and that, of course, is precisely the demographic we find increasingly on the search for work outside Silicon Valley.
Q: Wrong proof: If $0$ is the only eigenvalue of a linear operator, is the operator nilpotent? I am making the following claim: If $0$ is the only eigenvalue of a linear operator $T$, then $T$ is nilpotent. Proof: Since $0$ is the only eigenvalue, the characteristic polynomial is of the form $p(x)=x^n$. Then $T^n=0$. Doubt: But this answer suggests that we need to assume that the underlying field is algebraically closed. I cannot understand what I am doing wrong. Any idea? A: It's hard to give an answer without more or less saying what was said in the other answer. The matrix $\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}$ has no real eigenvalues. To see this, note that its characteristic polynomial is $x^2+1$, which has no real roots. Thus every real eigenvalue is vacuously $0$, but the matrix is not nilpotent. In this case there are complex eigenvalues which are nonzero, but there are no real eigenvectors corresponding to these. However, if we were in an algebraically closed field such as $\mathbb{C}$, and $0$ were the only eigenvalue of $T$, then since the characteristic polynomial splits into linear factors, i.e. is of the form $\Pi(x-\alpha_i)$, each of the $\alpha_i$ is an eigenvalue, so they are all $0$, and the characteristic polynomial, as you say, must be $x^n$; by Cayley-Hamilton, $T^n=0$.
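The counterexample is easy to check numerically. Below is a small sketch in Python (pure standard library, with the 2x2 product written out by hand) confirming that the rotation matrix above has no zero power: its square is $-I$, so its powers cycle through $\pm M$ and $\pm I$ and never vanish.

```python
def matmul2(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

# The rotation matrix from the answer: no real eigenvalues, yet not nilpotent.
M = [[0, 1], [-1, 0]]
M2 = matmul2(M, M)    # equals -I
M4 = matmul2(M2, M2)  # equals I, so all higher powers repeat and are nonzero
```

Since $M^4 = I$, every power of $M$ is one of $M, -I, -M, I$, so no power is the zero matrix.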
<?php

/**
 * @file
 * Don't change anything here, it's magic!
 */

$aliasUrl = "https://drush-alias.lagoon.amazeeio.cloud/aliases.drushrc.php.stub";
$aliasCheckTimeout = 5;

// Do a HEAD check against the alias stub file, report on failure.
$chead = curl_init();
curl_setopt($chead, CURLOPT_URL, $aliasUrl);
curl_setopt($chead, CURLOPT_NOBODY, true);
curl_setopt($chead, CURLOPT_CONNECTTIMEOUT, $aliasCheckTimeout);
curl_setopt($chead, CURLOPT_TIMEOUT, $aliasCheckTimeout); // curl give-up timeout
if (TRUE === curl_exec($chead)) {
  $retcode = curl_getinfo($chead, CURLINFO_HTTP_CODE);
  if ($retcode === 200) {
    global $aliases_stub;
    if (empty($aliases_stub)) {
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_AUTOREFERER, TRUE);
      curl_setopt($ch, CURLOPT_HEADER, 0);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_URL, $aliasUrl);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
      $aliases_stub = curl_exec($ch);
      curl_close($ch);
    }
    eval($aliases_stub);
  }
  else {
    echo "Unable to get remote aliases stub, you may be unable to access the requested resource" . PHP_EOL;
  }
}
else {
  echo "Unable to get remote aliases stub, you may be unable to access the requested resource" . PHP_EOL;
}
curl_close($chead);
&FORCE_EVAL
  METHOD Quickstep
  &DFT
    BASIS_SET_FILE_NAME GTH_BASIS_SETS
    POTENTIAL_FILE_NAME POTENTIAL
    &MGRID
      CUTOFF 200
    &END MGRID
    &QS
    &END QS
    &SCF
      SCF_GUESS atomic
    &END SCF
    &XC
      DENSITY_CUTOFF 1.e-9
      DENSITY_SMOOTH_CUTOFF_RANGE 0
      FUNCTIONAL_ROUTINE NEW
      &XC_FUNCTIONAL tpss
      &END XC_FUNCTIONAL
      &XC_GRID
        XC_DERIV NN50_SMOOTH
        XC_SMOOTH_RHO NONE
      &END XC_GRID
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 5.0 5.0 5.0
    &END CELL
    &COORD
      O 0.000000 2.000000 1.934413
      H 0.000000 1.242864 2.520545
      H 0.000000 2.757136 2.520545
      O 2.500000 2.000000 1.934413
      H 2.500000 1.242864 2.520545
      H 2.500000 2.757136 2.520545
    &END COORD
    &KIND H
      BASIS_SET DZVP-GTH
      POTENTIAL GTH-PADE-q1
    &END KIND
    &KIND O
      BASIS_SET DZVP-GTH
      POTENTIAL GTH-PADE-q6
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
&GLOBAL
  PROJECT H2O-tpss
  PRINT_LEVEL MEDIUM
&END GLOBAL
Q: Why were applets deprecated in JDK 9? I have recently read in an article posted by Oracle that they are going to mark the Applet class as deprecated in JDK 9. I have little experience with applets; I have only written some to understand the basics. Why are they unpopular, and what is the main reason for their deprecation? A: Applets were very popular a couple of years ago, but the browser world has changed and security has become a major focus for all major browser vendors. The Java team gave its complete set of reasons, alternatives, etc., in the document Migrating from Java Applets to plugin-free Java technologies. On page 4, there is the following Executive Overview: With modern browser vendors working to restrict or reduce the support of plugins like Flash, Silverlight and Java in their products, developers of applications that rely on the Java browser plugin need to consider alternative options. Java developers currently relying on browser plugins should consider migrating from Java Applets to the plugin-free Java Web Start technology. Supporting Java in browsers is only possible for as long as browser vendors are committed to supporting standards based plugins. By late 2015, many browser vendors had either removed or announced timelines for the removal of standards based plugin support, while some are introducing proprietary browser-specific extension APIs. Consequently, Oracle is planning to deprecate the Java browser plugin in JDK 9. The deprecated plugin technology will be completely removed from the Oracle Java Development Kit (JDK) and Java Runtime Environment (JRE) in a future Java release TBD. Java Web Start applications do not rely on a browser plugin and will not be affected by these changes.
Q: How to add a listener in a particular cell in a grid in GXT I would like to add a listener that fires when the cell in the categories column is clicked. This is the declaration of my column configs:

ColumnConfig<UserRights, Boolean> unlockConfig = new ColumnConfig<UserRights, Boolean>(properties.hasUnlock(), 50);
unlockConfig.setHeader("Unlock");
cfgs.add(unlockConfig);

ColumnConfig<UserRights, String> catConfig = new ColumnConfig<UserRights, String>(properties.categories(), 150);
catConfig.setHeader("Categories");
cfgs.add(catConfig);

cm = new ColumnModel<UserRights>(cfgs);
grid = new Grid<UserRights>(store, cm);
grid.getView().setAutoFill(true);
grid.addStyleName("margin-10");
grid.setLayoutData(new VerticalLayoutContainer.VerticalLayoutData(1, 1));
grid.addRowClickHandler(new RowClickEvent.RowClickHandler() {
    @Override
    public void onRowClick(RowClickEvent event) {
        index = event.getRowIndex();
    }
});

rowEditing = new GridRowEditing<UserRights>(grid);
rowEditing.addEditor(unlockConfig, new CheckBox());

How could I add a listener in the category column? Thanks in advance.

A: There is nothing in your categories cell that prompts a user to click; it just contains text. What you should be doing is using the columnConfig.setCell(Cell cell) method to specify a cell that contains an interactive component. You could still try something along these lines:

columnConfig.setCell(new SimpleSafeHtmlCell<String>(SimpleSafeHtmlRenderer.getInstance(), "click") {
    @Override
    public void onBrowserEvent(Context context, Element parent, String value, NativeEvent event, ValueUpdater<String> valueUpdater) {
        super.onBrowserEvent(context, parent, value, event, valueUpdater);
        if (event.getType().equals("click")) {
            // React to the click on this cell here.
        }
    }
});
Undercover Intel Cop Arrested for SUV driver beating MANHATTAN — The undercover detective arrested for being part of the road-rage motorcycle mob that assaulted a Manhattan dad spent weeks living among Occupy Wall Street protesters in their downtown encampment before the raid on Zuccotti Park, sources told DNAinfo New York. Detective Wojciech Braszczok, 32, of Queens, was undercover in 2011 providing an insider’s view of the Occupy protesters and their tent village in Zuccotti Park. Braszczok’s information, along with other intelligence gleaned by the NYPD, helped the department craft its shock-and-awe plan to clear the Occupy protesters out of Zuccotti Park. Sources said Braszczok presently works undercover inside the NYPD’s Organized Crime Intelligence Division and may have been riding with the bikers partly to find another long-term undercover project for the NYPD. Braszczok's alleged role in the motorcycle clash surfaced several days after the Sept. 29 incident, when he finally told the NYPD he had been among the riders in the mass motorcycle ride but took no action to help Alexian Lien, who was beaten by riders. He claimed he did not want to compromise his undercover status and had arrived at the beat-down just as it was ending. But that was untrue, according to prosecutors from the Manhattan District Attorney's office and the NYPD's Internal Affairs Bureau. According to officials, video of the brutal assault on Lien not only captured Braszczok at the crime scene, it also showed him slamming his fist against the rear window of Lien’s Range Rover and kicking the side of the vehicle. On Tuesday, Braszczok became the sixth biker charged in connection with the case. He was charged with riot and criminal mischief. "Sometimes an undercover spends too much time in a role, becomes the character and forgets what side of the law he is on," said a source familiar with the case. 
Prosecutors claim Braszczok joined the motorcycle chase after Lien ran over a biker while trying to get away from a group of motorcyclists that surrounded his car on the Henry Hudson Parkway. In an Internet video that has since gone viral, the motorcyclists can be seen chasing Lien from about West 125th Street to West 178th Street, smashing his windows, pulling him from the Range Rover, then beating him as his wife and 18-month-old daughter sat terrified in the car. Lien suffered cuts above both eyes, on his cheeks and scrapes to his hands, back and shoulders, his wife said. Braszczok, who had been placed on modified assignment, has now been formally suspended from the force. Also arrested was 32-year-old Clint Caldwell, of Flatbush, on Tuesday for assaulting Lien, cops said. He was awaiting charges Wednesday of gang assault, assault, and criminal mischief. The Manhattan District Attorney's office has charged four other men for their involvement in the attack, including Craig Wright, 29, who was arraigned earlier Tuesday on charges that he stomped on Lien's head and body after dragging him out of the Range Rover. He is being held on $150,000 bond. Christopher Cruz, Reginald Chance and Robert Sims were also charged in the incident. Still others remain at large. Police released photos Tuesday of several other men they would like to speak to about the attack. Sources say that as many as six off-duty officers were at the rally, including another detective who was recently arrested for fighting with his Queens assistant district attorney girlfriend. City correction officers were also among the bikers.
The dust of the 2019 NFL free agency is starting to settle and some mustaches have even returned to their original color. Football free agency is an entertaining frenzy that baseball can only dream of. Now that the bulk of the free agency signings are complete, the work begins. It’s time to analyze some of the moves for fantasy football purposes. Here, we are going to look at the San Francisco 49ers backfield and determine whose fantasy star shines brightest. Fantasy Football: Tevin Coleman or Jerick McKinnon Jerick McKinnon Disappointment followed Jerick McKinnon from Minnesota to San Francisco, albeit disappointment of a different variety. On two separate occasions, “Jet” took a backseat while another running back (Matt Asiata, then Latavius Murray) got the starting nod. When the 49ers came calling with a good contract, it seemed as if McKinnon was finally getting the opportunity that both he and his fantasy owners were craving: a chance to be the feature back. Disappointment struck again, this time in the form of a torn ACL during the preseason. McKinnon subsequently sat out the entirety of the first year of his contract. McKinnon served as the third-down/passing-down specialist for the Minnesota Vikings from 2014 to 2017. During that span, he only missed six games. In 58 games, he rushed for 1,918 yards (4.0 yards per carry) and seven touchdowns, with only three fumbles. He also had 142 receptions on 191 targets (74.3 percent catch rate) for 984 yards and five touchdowns. McKinnon excels at taking short receptions into open space for chunk yards. Where he struggles is in yards per carry, especially at a higher volume. The two seasons where his YPC dipped below 4.0 were both seasons where he saw more than 150 rushing attempts. McKinnon is great in space, but not as effective between the tackles. He’ll get what’s blocked for him, but not too much more. Tevin Coleman Reunited and it feels so good! 
It does not take an advanced degree in drawing straight lines to see the connection between head coach Kyle Shanahan and Tevin Coleman. Shanahan actively pursued grabbing Coleman in free agency. The two have some history together. Shanahan served as the offensive coordinator for the Atlanta Falcons during the 2015 and 2016 seasons. That period saw the one-two punch of Devonta Freeman and Coleman and culminated in a trip to the Super Bowl. Both were viable fantasy assets that season. While Freeman had a higher ceiling, Coleman came at a serviceable discount yet still produced. He finished the 2016 season with 11 touchdowns despite only rushing for 520 yards. Coleman saw an increase in workload last season due to an injury to Freeman and finished with a healthy 4.8 yards per carry, 800 rushing yards, and nine total touchdowns. Coleman represents a similar style back to McKinnon but does better between the tackles. Matt Breida Matt Breida ended last season with more times mentioned on the injury report than touchdowns. Even when Breida carried the questionable tag, he should have been in starting lineups anyway. His 5.3 yards-per-carry average was ridiculous, and despite seeing fewer targets, he finished with more receiving yards than in his previous season. His intended designation was to be the third-down back, but with an injury to McKinnon and later to Raheem Mostert, he carried more of the workload than anyone had intended or was expecting. Breida is a talented dual-threat back but should maintain a more limited role due to his smaller stature. He’s not built to carry a work-horse role, which was evident last season despite his Robocop style of success. Mining Through the 49ers Backfield Having four capable running backs is terrible for fantasy football purposes. While it adds depth for real football, it leaves fantasy folks suffering varying degrees of frustration trying to guess whose hot hand will have the stat-filled week. 
While the 49ers have four capable backs on the roster right now, it’s doubtful that it stays that way through the offseason. It is entirely possible that McKinnon gets either cut or traded, and if so, it would be very soon. The guaranteed money portion of his four-year contract is already paid. The 49ers only owe him his annual salary if he’s on the active roster on April 1st. That means they very well might be taking offers and making plans to part ways with him. That, coupled with the statistical difference in production between him and Coleman, certainly lends weight to the idea. Despite playing in two fewer games, Coleman has put up better career numbers in most statistical categories than McKinnon. Coleman has 2,340 rushing yards on 528 attempts to McKinnon’s 474 attempts yielding 1,918 rushing yards. McKinnon has more targets with 191 to Coleman’s 134. However, Coleman has 1,010 receiving yards to McKinnon’s 984 despite the 57 fewer targets. Coleman and Mostert both just got paid. That leaves Breida, who’s on the last year of his undrafted rookie deal, which would pay out a base salary of $645,000. He’d be cheap to cut, though that is unlikely, as he’d be more valuable in a trade. Teams may be inquiring about acquiring McKinnon; they may be sniffing around Breida as well. Last Word on the 49ers Backfield Shanahan has always preferred a split backfield, which is workable for fantasy, though not ideal, as long as it isn’t a three-way split among backs of a similar style. His offense has always been productive from the running back perspective. It’s expected that as this unfolds, there are one or two running backs on this offense worth drafting for fantasy. However, if things remain as they currently are, this backfield is murky. The statistical difference between McKinnon and Coleman is close, yet Coleman remains the more efficient and productive back. Early ADP has McKinnon at RB23, Coleman at RB29, and Breida at RB31. 
However, it would not shock me if the 49ers part ways with McKinnon in favor of Coleman. Either way, when the dust settles, Coleman should rise to the top and operate as the lead back in this offense and fall within RB2 range. Assuming it plays out that way, Breida remains a solid flex option for PPR formats.
Survey for antibodies against maedi-visna in sheep in Poland. Eighteen flocks of sheep were serologically tested for antibody to maedi-visna virus. A total of 4284 serum samples were tested by agar gel diffusion test and 1015 (24%) were found positive. Prevalence of infection increased with the age of sheep. Within age groups, the lowest prevalence was in the group at 1 year or less (7%) and gradually increased to the highest prevalence at 5 years or older (52%). A difference in prevalence of infection among breeds and sexes was seen but could not be confirmed because of inadequate numbers. All flocks tested were infected, with a range of serological prevalence from 1.2% to 45.9%.
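The headline seroprevalence figure follows directly from the counts reported; a quick arithmetic check (sketched in Python, with the numbers taken from the abstract):

```python
# Counts reported in the survey: 1015 seropositive samples out of 4284 tested.
positive = 1015
tested = 4284
prevalence_pct = 100 * positive / tested  # about 23.69
# This rounds to the 24% quoted in the abstract.
```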
[Preliminary Analysis of the Genetic Loci of SAG1 and SAG3, the Surface Antigens of Toxoplasma gondii, in HIV-positive People in Dali of Yunnan Province]. To identify the genotypes of Toxoplasma gondii that infects HIV-positive people in Dali of Yunnan Province through analyzing the genetic loci of the surface antigens SAG1 and SAG3. A total of 291 blood samples from HIV-positive cases were collected from the HIV/AIDS Prevention and Control Institution in Yunnan. Nested PCR was used to amplify the SAG1 and SAG3 genes in the blood samples. The products were digested with restriction enzymes Sau96 I, Hae II and Nci I, and sequenced. Of the 291 HIV-positive blood samples, 64 showed successful amplification of the SAG1 gene, and 42 of the SAG3 gene, with product sizes of 390 bp and 225 bp, respectively. Enzymatic digestion of the PCR products resulted in fragments of 350 bp and 50 bp for SAG1, and a ~200 bp band for SAG3, consistent with RH, a particular type I strain of T. gondii. Sequencing of the SAG1 and SAG3 PCR products showed that their sequence identities with SAG1 (Accession No. GQ253073) and SAG3 (Accession No. JX218225.1) of the type I strain of T. gondii were 99.98%-100% and 99.96%-99.98%, respectively. The Toxoplasma gondii in HIV-positive cases in Dali of Yunnan Province is the type I strain of T. gondii.
5484 1.6452 2103 * -5.9 -12407.7 What is the product of 0.2 and -1.32189? -0.264378 What is the product of 42764 and -0.3? -12829.2 -8314 * -2.2 18290.8 4.6*60 276 Product of -0.3 and 0.7753. -0.23259 2*-0.67996 -1.35992 Product of 1.9297 and 3.8. 7.33286 64 times 14.84 949.76 36052 times 0.5 18026 Calculate 30*-0.4728. -14.184 What is the product of -0.277 and 0.0613? -0.0169801 Multiply 3.276 and 1.4. 4.5864 Product of 0.21 and 4995. 1048.95 Calculate 0.7*70.71. 49.497 What is the product of -547 and 0.085? -46.495 -0.08 * -4.184 0.33472 Work out 5 * -2004. -10020 Multiply 397.387 and 0.4. 158.9548 Work out -17 * -8468. 143956 -387 times 45 -17415 Multiply 1 and 8440. 8440 Product of 1.3694 and -1. -1.3694 What is -9254 times -0.3? 2776.2 32410*-6 -194460 -200.775 * 0.01 -2.00775 72*-223 -16056 Multiply 585604 and -5. -2928020 Calculate 0.1*10.9692. 1.09692 What is 20 times -12.877? -257.54 119001 * 0.1 11900.1 Multiply -158 and 3380. -534040 Calculate -13.3*4455. -59251.5 Product of -0.5 and -0.37255. 0.186275 -0.08 * 43725 -3498 What is the product of -51 and -14.63? 746.13 Product of -1.629 and -68. 110.772 What is -0.4 times 0.490906? -0.1963624 Product of 446872 and 0.3. 134061.6 What is 168409 times 1? 168409 -30.3 times 0.48 -14.544 Multiply 10 and 0.0057. 0.057 Product of 31.6687 and 0.2. 6.33374 What is -9494 times 10? -94940 What is -0.1 times 44539? -4453.9 -0.08*2.344 -0.18752 11789 * -2 -23578 What is the product of 0.04 and 0.3343? 0.013372 Multiply -9 and 26.956. -242.604 Multiply 0.06 and 55763. 3345.78 3790*-551 -2088290 Multiply -0.3 and 148.311. -44.4933 0.8 times 1602 1281.6 Work out -283395 * -0.2. 56679 1*176337 176337 What is the product of -19258 and 106? -2041348 4 times -534.09 -2136.36 What is the product of -0.5 and -54.63? 27.315 Product of 8 and 116.967. 935.736 Multiply -2265 and -138. 312570 Multiply 11005 and -2.7. -29713.5 6 * -0.191 -1.146 -35385*8 -283080 Calculate -43*0.843. -36.249 Multiply -1312 and 0.66. 
-865.92 What is 4 times -15236? -60944 What is 2032.351 times -0.3? -609.7053 Work out -474758 * 3. -1424274 Calculate 18168*-17. -308856 0.135*-20.483 -2.765205 -88.63 * -0.09 7.9767 Product of -10173 and 122. -1241106 -0.1 * -362.8 36.28 What is the product of -8 and 2511.4? -20091.2 Multiply 67 and -224. -15008 Work out -0.4 * -0.1042. 0.04168 Work out 0.09 * -0.2434. -0.021906 11724*0.7 8206.8 0.016881 times 0.4 0.0067524 1.228 times -162.6 -199.6728 0.07 * -38309 -2681.63 What is the product of 427 and -0.499? -213.073 Product of -0.3 and 0.109726. -0.0329178 Calculate 1.6*-290009. -464014.4 Multiply -0.11 and 0.1106. -0.012166 0.0013*1.02 0.001326 -9838 times -2.74 26956.12 Multiply 0.87 and -25. -21.75 -5489592*-0.5 2744796 What is 42 times -1942? -81564 Multiply 0.05 and 15769.6. 788.48 Multiply -0.2 and -0.3488. 0.06976 Calculate -1*-67169. 67169 What is 0.5 times -5.8935? -2.94675 Calculate -2*12749. -25498 Multiply 5.82 and 0.07. 0.4074 Work out -0.16 * -31027. 4964.32 Product of -5 and -414157. 2070785 2.88 times 43 123.84 0.06*-0.211468 -0.01268808 What is -9.539 times -40.4? 385.3756 3 * -53279 -159837 Work out 3464159 * 0.1. 346415.9 What is -5 times -7293? 36465 What is -1010.23 times -0.3? 303.069 Calculate -71428*-0.1. 7142.8 What is 21 times 493? 10353 What is -0.02 times 1725? -34.5 Product of 3 and 56. 168 Product of -4028 and 758. -3053224 0.4 * 251856 100742.4 Work out 87 * -18. -1566 What is the product of 42732 and 0.2? 8546.4 What is the product of -0.19 and 8341? -1584.79 What is 0.4 times 1.0195? 0.4078 What is the product of 0.624 and 93? 58.032 -942 * -4 3768 What is the product of 1990.9 and -32? -63708.8 What is -8830 times 24? -211920 Calculate -10064.44*-4. 40257.76 0.00685 * 0.3 0.002055 Product of 0.242178 and -0.5. -0.121089 -2*44437 -88874 Product of -25 and 4666. -116650 Product of 2.073 and 1540. 3192.42 28022*0 0 What is -6 times 4.0967? -24.5802 Product of 653 and -0.37. -241.61 What is the product of -0.008 and 41? 
The trouble with Checked Exceptions (C# architect) - davedx
http://www.artima.com/intv/handcuffs.html

====== gioele

The crux of the question:

> Bill Venners: But aren't you breaking their code in that case anyway, even
> in a language without checked exceptions? If the new version of foo is going
> to throw a new exception that clients should think about handling, isn't
> their code broken just by the fact that they didn't expect that exception
> when they wrote the code?

> Anders Hejlsberg: No, because in a lot of cases, people don't care. They're
> not going to handle any of these exceptions. There's a bottom level
> exception handler around their message loop. That handler is just going to
> bring up a dialog that says what went wrong and continue. The programmers
> protect their code by writing try finally's everywhere, so they'll back out
> correctly if an exception occurs, but they're not actually interested in
> handling the exceptions.

> The throws clause, at least the way it's implemented in Java, doesn't
> necessarily force you to handle the exceptions, but if you don't handle
> them, it forces you to acknowledge precisely which exceptions might pass
> through. It requires you to either catch declared exceptions or put them in
> your own throws clause. To work around this requirement, people do
> ridiculous things. For example, they decorate every method with, "throws
> Exception." That just completely defeats the feature, and you just made the
> programmer write more gobbledy gunk. That doesn't help anybody.

~~~ paol

Exactly. I think the 2 key failings of the checked exception idea (there are others, but not as important) are:

1) Precise exception specifications leak implementation details. This will bite you badly, e.g., when you want to change the implementation from under a published API.

2) The times when we only care that some exception occurred _vastly outnumber_ the times when we care about the exact types of exceptions that may occur.
Checked exceptions don't add any value to the common case; in fact they make it worse, because generic exception handlers tend to live many levels up the call stack from where exceptions are thrown(1), and therefore they "aggregate" large amounts of code that can collectively throw a huge range of exceptions. If you were to carry around detailed "throws" clauses, they would be a mile long.

The funny thing is, all solutions that solve the problems created by checked exceptions do so by subverting the mechanism one way or another. So you're better off without checked exceptions in the first place.

I've always been interested in this discussion because it's one of those cases where an idea seems very good on paper, only to turn out quite bad in practice.

(1) This observation is what makes exceptions such a useful mechanism in the first place.

~~~ bunderbunder

There is one part of the idea that I'd like to salvage, though, and that's having some clear documentation right there in the code about what kinds of exceptions I can expect to see a procedure throw. At the very least it'd be useful for when I'm wearing my "DLL author" hat, since it'd give me an easy way to look through the public interface and make sure exceptions won't be causing unrighteous leakage of implementation details for the end-user. I'd rather not put users in a situation where they have to look at the framework source in order to interpret an exception. It's kind of irritating when that happens.

I think it's really not something that should require special language statements, though. A static analyzer should be able to automatically figure it out.

------ philbarr

As a long-time Java programmer who has been writing C# for the last year and a half, I think leaving out checked exceptions from C# may have been a bad idea. What they've done is to identify that checked exceptions are a pain in many cases and simply remove them without providing a proper replacement.
This particular quote demonstrates what I mean: "To work around [the need to declare what you intend to do with each exception] people do ridiculous things. For example, they decorate every method with, "throws Exception." That just completely defeats the feature, and you just made the programmer write more gobbledy gunk. That doesn't help anybody."

Ok, so what you did instead was to effectively put "throws Exception" on every method by default. You actually implemented the workaround you've just criticised into the language?

And what's the effect of this - whenever you call a method in an API, you have no idea what could go wrong with it, so you can't write code to handle it. It wouldn't be so bad if a method's potential exceptions were documented, but they _never_ are. So you end up in a situation where you just code for the simple "happy case", run it, see what breaks, put some exception handling in (for just Exception) where you can, run it again, etc. It's a dreadful way to write code.

I'm not saying Java's checked exceptions were great - far from it - but C#'s "solution" is terrible. The number of times you end up with a NullException because some method failed somewhere deep in your code and something else didn't get initialised is unreal. If you don't believe me, try working with Sharepoint for a few minutes, and that's Microsoft code.

[edit - spelling]

~~~ pohl

_Ok, so what you did instead was to effectively put "throws Exception" on every method by default._

That implicit throws clause was already there in any language that contains unchecked exceptions for things like dereferencing a null pointer, or assertions, etc. So, no, that's not what removing checked exceptions does.

Personally, I'm a big fan of Guava's Throwables class. The java I write these days transmutes all checked exceptions into unchecked wrappers at any API boundary that tries to force them on me. And, no, this doesn't mean I'm only coding for the happy case.
Rather, I'm choosing where and how to handle the sadness.

~~~ lsd5you

It depends what standards you hold the library to. Null pointer exceptions and other runtime exceptions can be treated as errors - something which needs fixing in the code, rather than something that can be handled. A world of difference.

~~~ pohl

And that is still the case with the Guava Throwables strategy, which only affects how checked exceptions are propagated.

------ ZitchDog

I have thought quite a bit about checked vs unchecked exceptions and come to the realization that there is a fundamental philosophical problem that checked exceptions can never overcome. The idea with checked exceptions is that they should be used for errors which the caller should be forced to handle. But this property is entirely dependent on the context the function is being called from! For example, if the user is specifying a filename for a file, I should be expected to catch the error and display feedback to the user. However, if I just wrote the file to disk 5 seconds ago, I shouldn't be forced to trap the error!

The crux of the problem is this: the severity and recoverability of an error is known only by the caller of the function, not the function itself. Checked exceptions attempt to foist this information into the function itself, where it is unknowable and generally inaccurate. Checked exceptions are a misfeature if I ever saw one.

~~~ scott_s

I agree with the C# designers' reason that checked exceptions are not a good idea, but I don't agree with yours. In other words, your argument is different than theirs, and I don't agree with it. (Although I still agree with your conclusion.)

_The crux of the problem is this: the severity and recoverability of an error is known only by the caller of the function, not the function itself._

I agree with that statement, but not that it is the crux of the problem with checked exceptions.
That statement is an argument for general error reporting - it need not even be exceptions. Simply, the function that encountered the error does not know what to do about it, so it must report it on up the stack. We could just as easily accomplish that with error return values - I'm not saying we _should_, just that I think your argument is too generic to support your conclusion.

_Checked exceptions attempt to foist this information into the function itself, where it is unknowable and generally inaccurate._

And that I actually disagree with. Checked exceptions require that the calling function _acknowledge_ that an error has occurred; it does not require the calling function to _handle_ the error. The calling function could easily catch the exception and rethrow it, or just add the exception to its own throws clause. The C# designers' argument is that what I described is generally what people want to do, so why not just make it the default? That default is unchecked exceptions.

~~~ ZitchDog

_Checked exceptions require that the calling function acknowledge that an error has occurred_

But why do they require the client to acknowledge the error? Better stated, why would one use a checked exception versus an unchecked? The general wisdom (and perhaps this is where I'm mistaken) is that checked exceptions generally indicate a more recoverable error. This is why checkedness doesn't belong in the function itself: it's an indication to the caller of the error's recoverability, which is completely unknowable.

~~~ scott_s

Note that I was not arguing _for_ checked exceptions. Rather, I was arguing that your argument does not support your conclusion. With that said, I think it may be reasonable to say "You may be able to recover from a missing file, but no one can help you if this pointer is null." It's the difference between a logic error in the program (null pointer exception, array out of bounds exception) and configuration errors (missing file, lost connection).
Checked exceptions are typically explicitly thrown by the called function; unchecked exceptions are typically encountered because the called function itself had an error. In other words, checked exceptions are when the called function recognizes an error, and throws it up to its caller. So, I am able to distinguish them myself. But I'd still rather not have checked exceptions.

The mentality of "acknowledge all errors" makes more sense in the error-code model, as seen in C programs. In that case, if you don't at least check for and report all possible errors that can arise from calling a function, then they will _never_ get reported, and your program will silently be wrong.

~~~ ZitchDog

_Note that I was not arguing for checked exceptions._

Right, I understand that. And I was merely pointing out a flaw in your argument against my argument.

_You may be able to recover from a missing file, but no one can help you if this pointer is null_

My argument (which you seem to be doing a great job of ignoring) is that this distinction is completely unreasonable. As the calling function, I am passing in the pointer, therefore I am the only one who can know whether the null pointer is a recoverable issue! Consider the ArrayOutOfBoundsException - perhaps the array bounds are passed in from the user interface, and the error should be trapped and reported to the user. This checked / unchecked distinction is simply nonsensical from the called function's point of view.

_Checked exceptions are typically explicitly thrown by the called function; unchecked exceptions are typically encountered because the called function itself had an error_

This is not typical, is not how you are using them in your examples, and also makes no sense. What is the difference between encountering a particular error within a function or a subfunction? What if the logic in the function is later extracted into a subfunction?

~~~ scott_s

I'm sorry if you feel that I'm ignoring your distinction.
I'm not trying to ignore it, but argue that I think we _can_ distinguish between "likely recoverable" and "likely not recoverable" for _most_ cases. Nothing prevents you from catching unchecked exceptions. So if you know that you're passing in a may-be-null-pointer-and-it's-okay, then you can do an unusual thing and catch that exception.

I agree that we can't classify all exceptions as recoverable or not recoverable from inside the called function, with full accuracy. But I disagree that we can't make reasonable guesses that will be true in most cases. The function may encounter an error but it is not the caller's fault - that's what I mean by "the called function itself has an error."

~~~ ZitchDog

Sure, we _could_ guess at all sorts of things from within a function. We _could_ guess that the function is running from a terminal, and simply print the result of the function to stdout. Hell, we could guess what the caller is going to do with the result of our function and just do that instead! Why even have a caller at that point?

Of course I'm using hyperbole to point out the absurdity of making this distinction, which on the surface may seem reasonable. The fact is, however, the fewer assumptions a function makes about its calling context, the better. The whole point of functions is that they are to be reused in ways the author may not have intended. The checkedness of an exception is an assumption about the calling context of a function which should not exist.

------ AndrewDucker

I am _very_ glad that C# does not have checked exceptions. Anders is completely correct in that the vast majority of the time individual methods do not handle specific exceptions, they just roll things back and pass the exception up the way. Adding exception attributes in all of these places would be a massive pain.

~~~ thebluesky

In my experience many C# programmers simply fail to add much exception handling code at all because the compiler doesn't force them to.
The net result is code which breaks in spectacular fashion when the first unexpected condition is met. Checked exceptions are a nuisance (I'm actually glad Scala doesn't enforce them vs Java), but their absence often leads to inexperienced C# devs writing very fragile apps. With C# you have to examine docs and source to identify exceptions which could be thrown. Many programmers don't bother, leading to things breaking. In Java the compiler forces you to think about it.

I'm not in favour of checked exceptions, but the arguments against them tend to be a bit simplistic.

~~~ jsolson

> In my experience many C# programmers simply fail to add much exception
> handling code at all because the compiler doesn't force them to.

Still better than:

    try {
        // Do broken stuff
    } catch (Exception e) {
        throw new RuntimeException(e); // TODO: Add a domain-specific runtime exception so we can actually catch this
    }

I find this or something like it[0] spread across every Java codebase I find myself mired in, and I read a _lot_ of Java these days (much to my dismay, but so it goes).

[0]: Actually, that's not fair. Most people don't bother to include the TODO.

~~~ Roboprog

You meant to say "catch (Throwable e)", right? But, yeah, I've had to write that block too many times.

Throwable will catch Errors as well as Exceptions, such as when a constructor called from a dependency injection framework fails. It's kind of irked me for a while that I had to know that. I would have rather had "catch" work on _anything_, with the option to check the class and rethrow once in a great while. That is, in the few places where the error checking/logging actually belongs (as stated in the article, with which I obviously agree).

There is indeed great irony in Java code littered with catches, which then falls out and kills the main loop without logging anything.
(I recently had to diagnose such a case in a system at work over the phone -- sure enough, that's exactly what was coded)

~~~ sixcorners

You are not supposed to, and don't need to, catch Errors. The only exceptions you need to worry about as far as checked exceptions go are classes that inherit from Exception but not RuntimeException.

------ philf

> You end up having to declare 40 exceptions that you might throw. And once
> you aggregate that with another subsystem you've got 80 exceptions in your
> throws clause. It just balloons out of control.

What he completely ignores is the possibility to wrap exceptions to either aggregate them or to convert them into unchecked exceptions.

~~~ paol

But that's exactly one of the main points of the detractors of checked exceptions (of which I'm one): the only sane way of working with it is to subvert it. Whether you convert the throws clause to a common supertype (which very soon converges to the base type Exception), or wrap everything in a RuntimeException, you are effectively emulating a language without checked exceptions. So what was the point in the first place?

~~~ specialist

"the only sane way of working with it is to subvert it."

Perhaps. In my projects, I use wrapped exceptions to convert "what blew up" exceptions into "what to do about it" exceptions. A poor man's event routing. For example, I wrote an ETL workflow thingie that would retry, reset, restart, backoff (throttle) depending on what was failing.

I'd be totally game for pre-declaring what exceptions your code will catch, harkening back to the days of "on message do this" type programming. Then the compiler can verify that all the thrown exceptions get caught by somebody.

Meanwhile, I've never had a problem with checked exceptions. Methinks the real cause of angst over checked exceptions is all these mindless frameworks, APIs, libraries, strategies, design patterns, aspects, inversion of common sense, injection of nonsense, etc.
The complexity borne of the manner in which Spring, JPA, Hibernate, connection pooling, aspects, MVC, blah blah blah attempt to hide, obfuscate, decouple, mystify, pseudo architect "solutions" ... well, it's just insane.

Someone upthread off-handedly noted the irony of declaring all these exceptions only to have the main event loop eat an exception without any logging. Story of my life. Debugging silent death failures really, really sucks. Which is why I work as "close to the metal" as possible and worry not about unchecked exceptions.

------ kbd

For old articles, please put a date in the title. In this case, (2003).

~~~ AndrewDucker

If it was something that was time-dependent, I'd agree, but in the case of an interview talking about the design of two languages in current use, it's timeless.

~~~ sirclueless

That's all the more reason to put the date there. It signals an article that is relevant today despite being almost ten years old. The argument is considerably more impressive when you realize that it happened ten years ago and is still relevant, but I didn't know it was a timeless classic until I saw this comment thread.

------ Uchikoma

Checked exceptions are fine. The problem is that they are not composable and most languages have no syntactic sugar for dealing with them as Monads. Other checked error mechanisms like Either are much easier to manage. I do hope e.g. Java sometime in the future gets syntactic sugar to deal with checked exceptions as Monads.

------ lenkite

The Java Programming Language's checked exception feature is fundamentally broken. All newer Java specifications have acknowledged this and utilize unchecked exceptions.

~~~ tomjen3

All but one (and arguably the most important): Android. Android is full of brand new checked exceptions.
And I do what I always do with checked exceptions:

    catch (e) { throw new RuntimeException(e); }

(you can't keep adding throws clauses because the compiler won't let you add them if the interface you implement can't handle them).

------ rvkennedy

I have long wondered about the syntax of exceptions - if it was:

    void function() {
    try:
        function_body...
    catch(Exception e):
    finally:
    }

By leaving the function body in the same indentation as it would be without exception handling, this might help to make the code a little more readable than:

    void function() {
        try {
            function_body...
        } catch(Exception e) {
        } finally {
        }
    }

~~~ ajitk

My mind parsed the first code block as a mix of Python and C. IMO, the latter would be more readable since it involves parsing only one rule that the eye is already used to.

~~~ udp

It reminds me more of C, where labels and goto are often used for error handling (labels using the same `:` syntax).

~~~ ajitk

Now that you mention labels, I see the similarity to C. I might have been influenced by OP's reference to using _indentation_ to make code more readable instead of using braces.

------ bokchoi

I tend to agree that Java's checked exceptions are annoying (hello SQLException). However, when I used C# I was annoyed that the VS intellisense and the C# generated documentation didn't tell me what exceptions might actually be thrown! In practice this meant just not handling anything at all.

~~~ darrenkopp

That's because the developer didn't put the exception into the documentation. But at the same time you really shouldn't rely on that, because something down the line could throw an exception that isn't covered in the documented exceptions.

------ viraptor

The thing that really annoys me about checked exceptions is that map/reduce/fold-like functions throw anything by definition. Basically, as soon as you allow any general type function as a parameter, you need to mark yourself as throwing everything. That is really limiting in some scenarios.
------ dscrd

Exceptions are a bad idea. Even in the saner languages such as python, they tend to obfuscate needlessly.

~~~ je42

So you prefer to decentralize your exception handling code? How do you scale that?

~~~ dscrd

The problem is that when exceptions are available, people will start using them for non-exceptional things and (the worst case) basic control flow.

Case in point from a popular framework, Django: the standard way of getting an object from the DB via the ORM is Class.objects.get. This method either returns a single object or raises an exception if there are zero rows in the db, or another exception if there are two or more rows. It may also raise other kinds of exceptions.

Now, it's clear that having zero rows is not very exceptional, and even >1 is somewhat debatable. Note especially that this is not just some weekend project by a nobody, it is a framework that is widely used and respected.

~~~ Ingaz

>> and (the worst case) basic control flow.

I call it "java-way goto".

I can't agree about Class.objects.get - the method must return exactly one row by known pk value. So it is perfectly sane to raise an exception for zero rows. Class.objects.filter works the way you want.

~~~ dscrd

Except that get accepts as parameters non-unique non-primary fields too. I have every reason to believe that the Django devs are capable people who thought about this and landed on this specific usage with a clear rationale... but it still is a misuse of the error handling capabilities of a language. And a sort of misuse that everyone else does sometimes as well.
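The wrap-and-rethrow idiom that recurs throughout this thread - converting a low-level exception into a domain-specific one at an API boundary - can be sketched in Python, which, like C#, has only unchecked exceptions. The `AppError` and `load_config` names are illustrative, not from any library discussed above:

```python
class AppError(Exception):
    """Domain-specific wrapper so callers can catch a single exception type."""

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Wrap the low-level error instead of leaking it through the API;
        # 'from e' chains the original exception so the top-level handler
        # still sees the full traceback via e.__cause__.
        raise AppError(f"could not load config: {path}") from e
```

The caller catches `AppError` only, and the original `OSError` stays reachable through `__cause__` for logging at the bottom-level handler.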
PINE BLUFF PUBLIC SCHOOLS

Inclement Weather Policy

In the event of inclement weather in the Pine Bluff/Jefferson County area, the following is a listing of places you can check to see whether Pine Bluff School District schools will be closed. PBSD works with other area districts in Jefferson County, Arkansas, to determine the safety of travel when bad weather arrives.
Background
==========

Analyzing the effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and the computational perspectives. Calle et al. \[[@B1],[@B2]\] recently developed the model-based multifactor dimensionality reduction (MB-MDR) technique, which tackles association and interaction analysis by assigning genotype cells to different risk categories. This method is also applicable to one-dimensional screening. Currently, the MB-MDR approach uses permutation testing to assess significance \[[@B3]\], thereby also correcting for multiple testing. An additional major problem arises when associations between a trait of interest and rare variants are targeted. In this context, it is unclear which of the family-based or population-based designs will be more advantageous. Also, traditional regression methods break down because parametric assumptions are hardly fulfilled for rare variants \[[@B4]\]. In this paper, we explore the utility of several methods, both parametric and nonparametric, to test for or model genetic associations using population-based and family-based data from Genetic Analysis Workshop 17 (GAW17).

Methods
=======

Data set and quantitative trait association analysis
----------------------------------------------------

The data provided by GAW17 include a subset of genes grouped according to pathways that had sequence data available in the 1000 Genomes Project. Effect sizes for coding variants within these genes were assigned using PolyPhen and SIFT predictions of the likelihood that the variant would be deleterious. Two hundred replicates were generated. Our analyses involve the quantitative trait Q1, which was simulated as a normally distributed phenotype. Furthermore, we restrict attention to the available single-nucleotide polymorphisms (SNPs) on chromosome 4 (944 in total).
All simulated singular SNP effects (SNPs C4S1861, C4S1873, C4S1874, C4S1877, C4S1878, C4S1879, C4S1884, C4S1887, C4S1889, and C4S1890 in the *KDR* gene and C4S4935 in the *VEGFC* gene) are assumed to be additive on the quantitative trait scale, such that each copy of the minor allele increases or decreases the mean trait value by an equal amount. In addition, values of Q1 were simulated to be higher in smokers, and the listed variants in the *KDR* gene were involved in *KDR*-smoking interaction effects on the trait.

There are 944 markers in 81 genes on chromosome 4. The sample size for both population- and family-based data is 697, with family data comprising 8 families with 202 founders and 3 offspring generations. The founders were randomly sampled from the unrelated individuals data set, and genotypes of offspring were sampled using Mendelian inheritance. It should be noted that genetic information is the same for all replicates; only phenotype and smoking status differ.

For the family data, we compare the performance of the MB-MDR approach (family-based, FAM-MDR) to the association test (PBAT) screening \[[@B5]-[@B7]\] (version 3.61), whereas for unrelated individuals we compare the MB-MDR approach to penalized regression (the penalized package in R, v. 2.9). Power is estimated on the basis of rejection of the null hypothesis for the SNP under investigation, whereas the family-wise error rate (FWER) is estimated on the basis of rejection of the null hypothesis for any of the SNPs with no effect. In addition, we reevaluated power and FWER by collapsing rare variants in the genes of chromosome 4. In the following subsections, we briefly describe the main characteristics of the approaches we consider in this comparative study.

MB-MDR modeling
---------------

The MB-MDR technique for one dimension involves three steps.
First, each marker's genotype cells are assigned to one of three categories---high risk (H), low risk (L), or no evidence (O)---on the basis of the result of association tests (*t* tests) on each of the individual cells versus all other cells with the response variable, using a liberal *p*-value threshold of 0.1 \[[@B3]\]. If this threshold is not attained for whatever reason, the cell is labeled O. Next, an association test is performed with the new predictor variable *X* in {H, L, O} on the outcome variable. Association with the trait is investigated by testing H versus L \[[@B3]\] using a *t* test. In the last step, permutation-based step-down max *T* adjusted *p*-values \[[@B8]\] with 999 replicates are computed to assess significance over all considered marker sets, theoretically ensuring control of FWER at 5%. We also implement the step-down min *P* procedure \[[@B8]\], based on 999,999 replicates.

The MB-MDR approach has been adapted to accommodate family-based study designs and uses principles of genome-wide rapid association using mixed model and regression \[[@B9]\]. In particular, the MB-MDR approach for families (FAM-MDR \[[@B10]\]) first involves performing a polygenic analysis using the complete pedigree structure. Then MB-MDR (for unrelated individuals) is applied to familial correlation-free residuals obtained from the polygenic modeling.

Family-based association testing for family-based designs
---------------------------------------------------------

The PBAT screening approach of Van Steen et al. \[[@B7]\] is adopted to identify the top 10 most powerful genotype-phenotype combinations and to independently test these using the family-based association test (FBAT) statistic \[[@B6]\]. To be more in line with our MB-MDR analyses, we report results of the dominant genetic model rather than the additive genetic model. Family 7 was split into nuclear families for better handling by PBAT. Type I errors are Bonferroni controlled.
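The cell-labeling and H-versus-L testing steps described under "MB-MDR modeling" can be sketched as follows. This is a minimal illustration, not the MB-MDR software: genotypes are assumed to be coded 0/1/2, the permutation step is omitted, and a normal approximation replaces the exact *t* reference distribution for simplicity.

```python
import math
import numpy as np

def welch_test(a, b):
    """Welch two-sample t statistic with a normal approximation to its
    null distribution (adequate for the sample sizes sketched here)."""
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    t = (a.mean() - b.mean()) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided
    return t, p

def mbmdr_one_locus(genotypes, trait, threshold=0.1):
    """Label each genotype cell H/L/O (step 1), then test H vs. L (step 2)."""
    labels = {}
    for g in (0, 1, 2):
        cell, rest = trait[genotypes == g], trait[genotypes != g]
        if len(cell) < 2 or len(rest) < 2:
            labels[g] = "O"            # too little data: no evidence
            continue
        t, p = welch_test(cell, rest)
        labels[g] = "O" if p >= threshold else ("H" if t > 0 else "L")
    hi = trait[np.isin(genotypes, [g for g, s in labels.items() if s == "H"])]
    lo = trait[np.isin(genotypes, [g for g, s in labels.items() if s == "L"])]
    if len(hi) < 2 or len(lo) < 2:
        return labels, None            # no H-vs-L contrast to test
    return labels, welch_test(hi, lo)
```

In the real procedure, the resulting H-versus-L statistic would then enter the step-down max *T* permutation scheme described in the last step.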
Penalized regression for population-based designs
-------------------------------------------------

To select the 10 most interesting predictors for Q1, we also apply a least absolute shrinkage and selection operator (LASSO) penalized regression \[[@B11]\]. We decrease the penalizing parameter *λ* with a precision of 0.001 to obtain at least 10 (nonzero) markers in the model, to be in line with PBAT screening. However, sometimes a few more markers were selected (maximum of 12). The covariates Sex, Age, and Smoke are fixed and unpenalized in the regression model. We repeated this analysis for each replicate to obtain a screening technique for the main effects. After this screening procedure, the selected markers were put in a linear regression model to test for association with Q1, again fixing Age, Sex, and Smoke in the model. *P*-values are Bonferroni corrected for the number of markers in the data.

Gene-based collapsing method
----------------------------

Following Li and Leal \[[@B12]\] for discrete traits and Morris and Zeggini \[[@B13]\] for quantitative traits, we collapsed variants with a minor allele frequency (MAF) less than 0.01 within each gene into a single variable coded 0 for absence and 1 for presence of at least one variant allele in an individual. The reported MAFs were evaluated using all individuals separately within each considered study design.

Results
=======

Additional file [1](#S1){ref-type="supplementary-material"} presents estimated power levels for association of important main effects with Q1 using the aforementioned methods. We observe that the MB-MDR approach for unrelated individuals has some power (0.14 for max *T* and 0.34 for min *P*) to find C4S1878, the marker with the largest MAF (0.16), but also elevated FWER estimates (0.13 and 0.50, respectively).
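The gene-based collapsing rule described in the Methods can be sketched as follows, for one gene at a time; the (individuals × variants) minor-allele-count layout is an assumption for illustration:

```python
import numpy as np

def collapse_rare(genotypes, maf_threshold=0.01):
    """Collapse rare variants within one gene into a single 0/1 indicator.

    genotypes: (n_individuals, n_variants) array of minor-allele counts (0/1/2).
    Variants with MAF < maf_threshold are replaced by one indicator that is 1
    if the individual carries at least one rare minor allele, 0 otherwise.
    Returns (common_variant_columns, collapsed_indicator_or_None).
    """
    maf = genotypes.mean(axis=0) / 2.0       # per-variant minor allele frequency
    rare = maf < maf_threshold
    common = genotypes[:, ~rare]
    if not rare.any():
        return common, None                  # nothing to collapse in this gene
    collapsed = (genotypes[:, rare].sum(axis=1) > 0).astype(int)
    return common, collapsed
```

The collapsed indicator is then analyzed in place of the rare variants, exactly as any other marker.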
With penalized regression, the highest power is achieved for markers C4S1884 (MAF = 0.02) and C4S1877 (MAF \< 0.001), irrespective of whether a gene-collapsing method was adopted or not. However, these results are downplayed by the inflation of the corresponding FWER (\>0.3). PBAT exhibits extremely high power (0.94) to detect C4S4935 (MAF \< 0.001) but also has an extreme FWER of 0.895. When a correction is made for the presence of linkage, interestingly, PBAT's power drops to 0 and its FWER drops to 0.015. On the other hand, the FAM-MDR approach has only limited power (0.18 for max *T* and 0.17 for min *P*) to detect C4S4935 but keeps the FWER under control. A graphical representation of the relation between error rates for nonfunctional markers and the corresponding markers' MAFs is given in Figure [1](#F1){ref-type="fig"}.

![**Marker-specific error rates**. Marker-specific error rates as a function of minor allele frequency (MAF) for all nonfunctional markers. (a) MB-MDR using max *T* on unrelated individuals and (b) PBAT with default options.](1753-6561-5-S9-S32-1){#F1}

Finally, collapsing rare variants increases the estimated power of the MB-MDR approach on unrelated individuals, both for the common variants C4S1878 (0.375 for max *T* and 0.38 for min *P*) and C4S1884 (0.205 for max *T* and 0.155 for min *P*) on the *KDR* gene and for the collapsed variable obtained from the rare variants on the *KDR* gene (0.355 for max *T* and 0.47 for min *P*). For the FAM-MDR analysis for families, collapsing increases the power to detect the variant C4S4935 on the *VEGFC* gene (0.275 for max *T* and 0.345 for min *P*). FWER for unrelated individuals remains high, whereas for family data FWER is under control.

Discussion
==========

Using the considered methods, we observed that different markers were highlighted in unrelated individuals versus families.
Given the number of monomorphic and nearly monomorphic causal variants for Q1 on chromosome 4, it is not surprising that none of the adopted methods performs satisfactorily in identifying genetic effects in the presence of rare variants. In particular, marker C4S4935 (Additional file [1](#S1){ref-type="supplementary-material"}) has only one heterozygous individual in the unrelated individuals data, and hence no method will be powerful enough to highlight this marker. However, this heterozygous individual was selected as a founder and propagated in one of the eight families, leading to an increased number of copies of the variant allele and consequently increased power to identify C4S4935 in the family data. As a side remark, we also investigated whether the MB-MDR approach was able to identify the gene-smoking interaction effect present in the data. Unsurprisingly, there was virtually no power to detect it. Six out of 10 SNPs showing gene-environment interaction with smoking have such extremely low MAFs that no individuals homozygous for the rare allele and only one heterozygous individual were observed. Hence, for these SNPs, information about their potential to change the effects of smoking on Q1 is basically nonexistent, because the one heterozygous individual is either a smoker or a nonsmoker. For unrelated individuals, none of the 944 markers are monomorphic, whereas 403 of the markers are monomorphic in the family data, leaving only 541 markers of interest in 77 genes. This can be explained by existing founder effects. The beauty of the permutation-based corrective method for multiple testing used in the MB-MDR approach is that it tackles the issue of testing a large number of marker sets for evidence of gene-gene interactions with the trait by controlling the FWER at 5% \[[@B3]\].
We argue that the uncontrolled FWER levels might be a direct consequence of the distributional properties of association test statistics involving rare variants and their effect on the validity of both the adopted testing procedure and the applied multiple testing corrective methods. For instance, max *T* and min *P* adjusted *p*-values are known to be similar when the test statistics are identically distributed. When this is not the case, max *T* adjustments may be unbalanced such that not all tests contribute equally to the adjustment, leading to suboptimal power. The drawback of the min *P* implementation is that it is less computationally tractable than the max *T* approach and that a large number of permutations are needed to detect possible improvements over max *T* implementations. A promising alternative approach may be the max *T* scaled method of Nacu et al. \[[@B14]\]. This method adjusts each test statistic by subtracting its null mean and dividing by its null standard deviation, leading to comparable null distributions. The max *T* scaled method can be considered a parametric and fast version of the min *P* method and requires a number of permutations comparable to the max *T* approach. The total contribution to the FWER of markers in linkage disequilibrium with functional markers (*r*^2^ \> 0.9) is only 0.01; hence linkage disequilibrium can be ruled out as an explanation of the increased FWER. In contrast, rare effects seem to be the major cause of the observed elevated FWER estimates. This is further supported by the observation that 1 out of 200 simulated replicates gives an erroneous result among the markers with MAF \> 0.1 (Figure [1a](#F1){ref-type="fig"}). Under the assumption that the markers with MAF \< 0.1 (90% of the data) and the markers with MAF \> 0.1 (10% of the data) behave similarly, we expect that 1 + 9 = 10 out of 200 replicates will give rise to an erroneous result (all markers considered).
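To make the max *T* versus max *T* scaled distinction concrete, a small self-contained sketch follows (illustrative code, not the MB-MDR implementation; the statistics and variable names are invented for the toy example). The `scale=True` branch implements the standardization idea attributed to Nacu et al.: each statistic is centered and scaled by its own permutation null before the maximum is taken, so markers with different null distributions contribute comparably.

```python
import numpy as np

def maxT_adjusted_pvalues(obs, perm, scale=False):
    """Westfall-Young style max-T adjusted p-values.

    obs:  (m,) observed nonnegative statistics, one per marker
    perm: (B, m) the same statistics recomputed under B permutations of the trait
    scale=True first standardizes every marker's statistic by its permutation
    null mean and standard deviation (the "max T scaled" idea).
    """
    obs, perm = np.asarray(obs, float), np.asarray(perm, float)
    if scale:
        mu, sd = perm.mean(axis=0), perm.std(axis=0, ddof=1)
        sd[sd == 0] = 1.0                 # guard against degenerate markers
        obs, perm = (obs - mu) / sd, (perm - mu) / sd
    max_null = perm.max(axis=1)           # max over markers, per permutation
    # Adjusted p-value: share of permutations whose maximum reaches the observed value.
    return (1 + (max_null[:, None] >= obs[None, :]).sum(axis=0)) / (1 + len(perm))

# Toy example: 5 markers, one with a clearly elevated statistic.
rng = np.random.default_rng(2)
perm = np.abs(rng.normal(size=(999, 5)))
obs = np.array([5.0, 0.5, 0.5, 0.5, 0.5])
p_plain = maxT_adjusted_pvalues(obs, perm)
p_scaled = maxT_adjusted_pvalues(obs, perm, scale=True)
```

With identically distributed statistics, as here, the two variants agree closely; the scaled version matters when some markers (e.g., rare variants) have much wider null distributions than others.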
Hence the FWER would indeed be controlled at 5%. The same reasoning can be adopted to explain the conservativeness of the PBAT approach in the presence of rare variants, especially when the empirical variance option (test of no association in the presence of linkage) is used. In effect, the apparently liberal results observed for PBAT with default screening parameters seem to be caused by a limited number of problematic markers (Figure [1b](#F1){ref-type="fig"}). Omitting marker C4S4694 (with MAF = 0.08) from the analysis indeed decreases the FWER from 0.895 to 0.445. When we remove three additional markers with moderate MAFs showing errors in multiple replicates, the FWER drops to 0.04. Joint application of the MB-MDR approach and gene collapsing leads to increased power, which can be explained by both the reduced multiple testing burden (note the increase in power for the common variants) and the creation of variables that exhibit larger amounts of information. For unrelated individuals, of the 944 markers, 199 have MAF ≥ 0.01 and 745 have MAF \< 0.01, and they are collapsed into 72 gene-specific variables. For the families, of the 541 nonmonomorphic markers, 227 have MAF ≥ 0.01 and 314 have MAF \< 0.01, and they are collapsed into 60 gene-specific variables. Surprisingly, the FWER increased when the MB-MDR approach was applied to unrelated individuals. Notably, one of the drawbacks of adopting collapsing methods is that singular effects for rare variants cannot be distinguished from global gene effects. Conclusions =========== We compared several genetic association strategies to detect main effects, including the MB-MDR approach, PBAT screening, and penalized regression. Although none of the methods exhibited sufficient power to detect rare variants, remarkable differences were observed between these methods within and between study designs.
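The collapsing bookkeeping tallied above can be sketched in a few lines (illustrative Python, not the pipeline used for the analyses; the gene labels and the loose threshold in the toy example are hypothetical):

```python
import numpy as np

def collapse_rare(genotypes, genes, maf_threshold=0.01):
    """Collapse rare variants (MAF < maf_threshold) within each gene into a single
    0/1 variable: 1 if an individual carries at least one rare variant allele."""
    n, p = genotypes.shape
    freq = genotypes.mean(axis=0) / 2.0          # allele frequency from 0/1/2 codes
    maf = np.minimum(freq, 1.0 - freq)           # fold to the minor allele
    rare = maf < maf_threshold
    common = {j: genotypes[:, j] for j in np.flatnonzero(~rare)}
    collapsed = {}
    for g in sorted(set(genes)):
        idx = [j for j in range(p) if genes[j] == g and rare[j]]
        if idx:
            collapsed[g] = (genotypes[:, idx].sum(axis=1) > 0).astype(int)
    return common, collapsed

# Tiny example: 4 individuals, 3 markers on 2 genes. A 0.2 threshold is used
# only because the sample is tiny; the analyses above use MAF < 0.01.
geno = np.array([[0, 2, 1],
                 [0, 0, 0],
                 [1, 0, 0],
                 [0, 0, 0]])
genes = ["VEGFC", "KDR", "VEGFC"]
common, collapsed = collapse_rare(geno, genes, maf_threshold=0.2)
```

Common markers are kept as-is, while each gene with at least one rare variant contributes one carrier indicator; as noted above, this is also why individual rare-variant effects can no longer be distinguished from a global gene effect.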
At this point it is not clear whether these differences are due to the particular way the genetic effects were simulated in the family-based or population-based data or whether they are actually due to the methods themselves. Most importantly, however, we postulate that the rarity of certain marker alleles hampers the validity of model assumptions and distributional properties of test statistics, as well as assumptions underlying some commonly used measures to correct for multiple testing or to control false-positive rates. Competing interests =================== The authors declare that there are no competing interests. Authors' contributions ====================== JMMJ, TC and KVS participated in the statistical analyses related to the MB-MDR technique. LDL and KVS participated in the analyses related to PBAT and penalized regression. FVL participated in optimizing the MB-MDR software for this project. AE participated in data handling and manipulation. All authors read and approved the final manuscript. Supplementary Material ====================== ###### Additional file 1 **Table 1 - Power to detect functional markers and FWER** Power and FWER results are shown for the MB-MDR approach and penalized regression on unrelated individuals, and FAM-MDR and FBAT results are shown for family data, both on original and collapsed chromosome 4 data. Power values greater than 0.1 are indicated in bold, and FWER values greater than 0.1 are indicated in italic. ^a^ At least 10 markers are selected using penalized regression, with Sex, Age, and Smoke as the fixed covariates. ^b^*P*-values are Bonferroni-corrected according to the total number of markers. Sex, Age, and Smoke are fixed covariates in the final model. ^c^ PBAT screening uses the FBAT statistic to test the null hypothesis of no association in the presence of linkage.
###### Click here for file Acknowledgments =============== JMMJ, TC, FVL, and KVS acknowledge research opportunities offered by the Belgian Network BioMAGNet (Bioinformatics and Modeling: From Genomes to Networks), funded by the Interuniversity Attraction Poles Program (Phase VI/4), initiated by the Belgian State Science Policy Office. Their work was also supported in part by the Information Society Technologies (IST) Program of the European Community, under the PASCAL2 Network of Excellence (Pattern Analysis, Statistical Modeling, and Computational Learning) grant IST-2007-216886. In addition, TC is a postdoctoral researcher at the Fonds de la Recherche Scientifique (FNRS), and FVL acknowledges support from Alma in Silico, which is funded by the European Commission and Walloon Region through the Interreg IV Program. This article has been published as part of *BMC Proceedings* Volume 5 Supplement 9, 2011: Genetic Analysis Workshop 17. The full contents of the supplement are available online at <http://www.biomedcentral.com/1753-6561/5?issue=S9>.
5 Tips for Evaluating & Improving City Parks: Trust for Public Land

Public parks are an essential component of a healthy and thriving city. Parks offer opportunities for fitness, relaxation and community building, giving children a place to play and adults a place to stay active. They incorporate green spaces, playgrounds, multi-use trails, and open areas for sports and recreation to give city residents the chance to enjoy their natural surroundings.

Why Parks Are Important in City Planning

If you are involved in urban design and planning – as a public official, landscape architect or civil engineer – you know the importance of having green space in a city. When city parks are successful, they offer a clear return on investment, including:

Promoting public health
Boosting tourism and property value
Rejuvenating local economies
Creating more energy-efficient spaces
Connecting neighbors to each other
Exposing residents to nature

Unfortunately, not every neighborhood has safe, green spaces for residents to enjoy. The Trust for Public Land, a nonprofit organization that works to conserve land for parks, gardens and other natural areas, reports that there is just 1 park for every 14,000 Americans. In neighborhoods that lack parks, children either play in streets and empty lots or stay inside and engage in more sedentary activities, increasing their risk for health problems such as obesity and diabetes.

How to Evaluate City Parks

The Trust for Public Land developed ParkScore as a system to analyze a city's current access to parks and green space. ParkScore evaluates factors including acreage, playgrounds, recreational facilities, spending per resident and park accessibility within a 10-minute walk. Cities can earn a maximum score of 100. Of the 50 largest US cities, Minneapolis has the highest score at 81.0 and Fresno has the lowest at 27.5. The Trust for Public Land also outlines these seven factors of excellence for city parks: 1. A clear expression of purpose 2.
An ongoing planning and community involvement process
3. Sufficient assets in land, staffing, and equipment to meet the system's goals
4. Equitable access
5. User satisfaction
6. Safety from crime and physical hazards
7. Benefits for the city beyond the boundaries of the parks

How can you use this information to enhance the green spaces in your city?

5 Tips for Improving Your City Parks

1. Use ParkScore and the seven factors of excellence as metrics. Make it a priority to build sustainable and thriving neighborhood parks in your city. Examine your ParkScore and compare your city with others around the country. Learn from best practices and case studies, then set your own city planning goals.

2. Identify the needs of your community. What do the people in your city – both children and adults – want in a neighborhood park? Do kids want a wide field for soccer, football and Frisbee? Would adults like long running and walking paths that are safe to use in all seasons? Survey stakeholders in your community to find out what they would use most, and explore what construction materials and other resources you would need to meet these goals.

3. Locate underserved areas to find where neighborhood parks are most needed. Which areas in your city aren't within a 10-minute walk from a park? Consider how strategic city planning could resolve this problem.

4. Explore creative ideas to use your city's available space. Even highly developed cities can use innovative urban design to develop more green spaces. Look at vacant factories, shipyards, empty lots, rail depots and other spaces not in use to see if they could be converted into city parks.

5. Maintenance costs add up over time, so it's important to build parks with a long-term strategic plan in mind. What construction materials will be a smart investment over the years? Which materials will stand the test of time and still work well decades from now?
Elevated greenways, multi-use trails, pedestrian bridges and other boardwalks built with PermaTrak's precast concrete require no maintenance and are easy to install, durable, and design-flexible, making them a perfect fit for city parks. With more timber boardwalk alternatives available, the debate around concrete boardwalks vs. timber boardwalks has become more prevalent. Engineers are faced with 6 key design differences between concrete and timber boardwalks. About This Blog The PermaTrak boardwalk blog articles are written for landscape architects, engineers, and agency or municipality professionals. We aim to provide educational resources for designing and building boardwalks, pedestrian bridges, trail and greenway systems.
Dmytro Shutkov Dmytro Shutkov (born 3 April 1972) is a Ukrainian former footballer who spent his entire career with Ukrainian Premier League club Shakhtar Donetsk. He is now part of Shakhtar's goalkeeping coaching staff. Career statistics Club External links Dmytro Shutkov profile at FC Shakhtar Donetsk website. Category:1972 births Category:Living people Category:Sportspeople from Donetsk Category:Soviet footballers Category:Ukrainian footballers Category:Ukraine international footballers Category:FC Shakhtar Donetsk players Category:Soviet Top League players Category:Ukrainian Premier League players Category:Association football goalkeepers Category:FC Shakhtar Donetsk non-playing staff
It won’t be long until the season is over. I know, y’all are like, “but, Lucas, it’s only the second preseason game.” You’re not wrong, but the season will come and go in a blink of an eye, so enjoy every bit of it. Minnesota kicks off against Jacksonville on Saturday at noon. One of the biggest things to watch, which I’m not including in my three things because it’s just obvious, is how these two defenses stack up against each other. Here are the three things to keep your eye on throughout the game: 3. Tight end competition Wednesday in practice, tight end Josiah Price suffered a season-ending knee injury, leaving the tight end depth chart one player short. It’s debatable if Price would have even made the roster, but his injury solidifies a position battle between Blake Bell and rookie Tyler Conklin for the third tight end spot and ixnayed a spot for Price on the active roster. Conklin only caught one pass last week for two yards, but Bell caught zero passes. I got a thing for rookies beating out players who’ve been in the league for a few years. Conklin only appeared in eight games last season with Central Michigan after missing the first five because of an injury. He still managed to scrape up 28 catches for 406 yards and five touchdowns, averaging about 50 yards a game. Bell didn’t see much action last season with the Vikings. He only managed three catches for 19 yards in 13 games. He was placed on injured reserve after a shoulder injury he sustained on December 15. The year before that he was in San Francisco, where he played in 13 games and only recorded four catches for 85 yards before going on injured reserve, again. 2. Look for Forbath to strike back Yes, Daniel Carlson had a fantastic first preseason game. Now, this week it will probably be Kai Forbath hitting field goals and extra points, while Carlson handles kickoffs. Forbath isn’t going to let Carlson just take his job.
Last week Carlson hit what seemed to be a perfect 57-yard field goal and a 39-yard field goal earlier in the game. He managed to make all four of his extra-point attempts. He had a solid debut, but can Forbath match his performance? Why do I make a big deal about kickers? Well, just look back at the track records of former kickers who played for the Vikings and blew big games (cough, cough, Gary Anderson and Blair Walsh, cough, cough). 1. Return of Dalvin Cook Finally, we get to see number 33 hit the field and take some live-game reps. Well, hopefully, since it’s not a for-sure thing. I feel like he might get a decent amount of touches, but this is also where we will get a hint of whether he’ll be the same player he was on pace to be last year before he got hurt. Before getting hurt last season, Cook was on pace to exceed 1,000 yards rushing, and some fantasy experts expect him to rush for over 1,100 this season. I’d say pump the brakes a little bit, considering how well Latavius Murray played last season. I wouldn’t mind if offensive coordinator John DeFilippo decided to split the workload between the two or just run with whoever is feeling it that day. There is no way Murray will take a backseat to Cook.
Alberto Abengózar Alberto Abengózar Martínez (; born 5 July 1989 in Alcázar de San Juan, Province of Ciudad Real, Castile-La Mancha) is a Spanish footballer who plays as a striker. External links Villarrobledo official profile Category:1989 births Category:Living people Category:People from the Province of Ciudad Real Category:Spanish footballers Category:Castilian-Manchegan footballers Category:Association football forwards Category:Segunda División players Category:Segunda División B players Category:Tercera División players Category:CF Gimnástico Alcázar players Category:Getafe CF B players Category:Atlético Albacete players Category:Albacete Balompié players Category:La Roda CF players Category:Ontinyent CF players Category:CD Olímpic de Xàtiva footballers Category:Internacional de Madrid players
♪ STING OF THE NEEDLE DROPPING ON A VINYL ♪
♪ NEON SINGER WITH A JUKEBOX TITLE ♪
♪ FULL OF HEARTBREAK ♪
♪ 33-45-78 ♪
♪ WHEN IT HURTS THIS GOOD, YOU GOTTA PLAY IT TWICE ♪
♪ ANOTHER VICE ♪
♪ ALL DRESSED UP IN A PRETTY BLACK LABEL ♪
♪ SWEET SALVATION ON THE DINING ROOM TABLE ♪
♪ WAITING ON ME ♪
♪ WHERE THE NUMB MEETS THE LONELY ♪
♪ IT'S GONE BEFORE IT EVER MELTS THE ICE ♪
♪ ANOTHER VICE, ANOTHER CALL ♪
♪ ANOTHER BED I SHOULDN'T CRAWL OUT OF ♪
♪ AT 7AM WITH SHOES IN MY HAND ♪
♪ SAID I WOULDN'T DO IT BUT I DID IT AGAIN ♪
♪ AND I KNOW I'LL BE BACK TOMORROW NIGHT ♪
♪ I WEAR A TOWN JUST LIKE A LEATHER JACKET ♪
♪ WHEN THE NEW WEARS OFF I DON'T EVEN PACK IT ♪
♪ IF YOU NEED ME ♪
♪ I'LL BE WHERE MY REPUTATION DON'T PRECEDE ME ♪
♪ MAYBE I'M ADDICTED TO GOODBYES ♪
♪ ANOTHER VICE, ANOTHER TOWN ♪
♪ WHERE MY PAST CAN'T RUN ME DOWN ♪
♪ ANOTHER LIFE, ANOTHER CALL ♪
♪ ANOTHER BED I SHOULDN'T CRAWL OUT OF ♪
♪ AT 7AM WITH SHOES IN MY HAND ♪
♪ SAID I WOULDN'T DO IT BUT I DID IT AGAIN ♪
♪ AND I KNOW I'LL BE GONE TOMORROW NIGHT ♪
♪ ANOTHER VICE ♪
♪ STANDING AT THE SINK NOT LOOKIN' IN THE MIRROR ♪
♪ DON'T KNOW WHERE I AM OR HOW I GOT HERE ♪
♪ WELL THE ONLY THING THAT I KNOW HOW TO FIND ♪
♪ IS ANOTHER VICE ♪
♪ ANOTHER VICE ♪
♪ ANOTHER VICE ♪
♪ ANOTHER VICE ♪
♪ ANOTHER VICE ♪
♪ ANOTHER VICE ♪
♪ ANOTHER VICE ♪
PEF Member Call to Action: Cost-Benefit Analysis bill needs to be law PEF members, We need to get the Cost-Benefit Analysis bill into law. Sign and send a personalized letter to the Governor now to support the bill! To further protect state jobs and save taxpayers billions of dollars, PEF is working to get Gov. Andrew Cuomo to sign legislation that would regulate the process for awarding contracts for certain types of consultant services by state agencies. “This is not the first time PEF has proposed this bill, so we made some revisions to it because it was vetoed in the past. We worked with the bill’s sponsors, state Assemblyman Harry Bronson and state Sen. Joseph Robach, and put in some new language,” Amorosi said. The bill regulates the process for awarding outside contracts. If signed, it would amend the state finance law. It would not allow a state agency to enter into a contract for consultant services anticipated to cost more than $750,000 in a twelve-month period unless the agency has conducted a review to determine whether state employees could do the same job at the same or a lower cost. The state comptroller would also review the completed business plan. Amorosi pointed out the bill provides reasonable exceptions, such as in cases that require specialized services, urgent or short-term projects, or where the agency can demonstrate a quantifiable improvement in services that cannot be reasonably duplicated by state employees. New York state spends more than $2 billion per year on consultants. In this era of transparency and fiscal responsibility, PEF is moving forward to get this bill signed by the governor by the end of this year.
Polarization-mode-dispersion (PMD) is a common phenomenon that occurs when light waves travel in optical media such as optical fiber and optical amplifiers. PMD occurs in an optical fiber as a result of small birefringence induced by deviations of the fiber's core from a perfectly cylindrical shape, asymmetric stresses or strains, and/or random external forces acting upon the fiber. PMD causes the two orthogonal polarization components of an optical signal corresponding to two principal states of polarization (PSP) of a transmission link to travel at different speeds and arrive at a receiver with a differential group delay (DGD). As a result, the waveform of optical signals may be significantly distorted, resulting in more frequent errors at the receiver. PMD is wavelength-dependent in that the amount or level of PMD imparted by an optical component (e.g., optical fiber) at a given time will generally vary for different wavelength-division-multiplexing (WDM) channels corresponding to different signal wavelengths or frequencies. Polarization-dependent loss (PDL) is another common phenomenon in optical fiber transmission. Optical components such as optical add/drop modules (OADMs) tend to have PDL, which attenuates optical signals depending on their relative polarization state with respect to the PSPs of the PDL component. Polarization-dependent gain (PDG) is also a common phenomenon in optical fiber transmission. Optical components such as Erbium-doped fiber amplifiers (EDFAs) tend to have PDG, which amplifies optical signals depending on their relative polarization state with respect to the PSPs of the PDG component. PDL and PDG cause signals to have different amplitudes at the receiver, which makes the optimal decision threshold different for different bits (depending on their polarization), and thus degrades receiver performance when the decision threshold can only be fixed at a single level for all bits.
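As a back-of-the-envelope numerical illustration of the DGD described above (this sketch is not part of the patent; the function names and units are arbitrary), a link can be modeled as a concatenation of randomly oriented birefringent segments, each represented by a 2x2 Jones matrix, with the DGD read off from the frequency dependence of the total matrix:

```python
import numpy as np

def segment(tau, theta, w):
    """Jones matrix of one birefringent segment with DGD tau (arbitrary time
    units) and birefringence axes rotated by theta, at angular frequency w."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    delay = np.diag([np.exp(1j * w * tau / 2), np.exp(-1j * w * tau / 2)])
    return rot @ delay @ rot.T

def total_dgd(taus, thetas, w=0.0, dw=1e-4):
    """DGD of the concatenated link at frequency w, from the eigenvalue phases
    of T(w + dw) T(w)^{-1} (a finite-difference version of Heffner's method)."""
    def jones(x):
        m = np.eye(2, dtype=complex)
        for tau, th in zip(taus, thetas):
            m = segment(tau, th, x) @ m
        return m
    evals = np.linalg.eigvals(jones(w + dw) @ np.linalg.inv(jones(w)))
    ph = np.angle(evals)
    return abs(ph[0] - ph[1]) / dw

rng = np.random.default_rng(1)
taus = np.full(20, 1.0)                    # 20 segments, 1 unit of DGD each
aligned = total_dgd(taus, np.zeros(20))    # axes aligned: segment DGDs add to 20
random_axes = total_dgd(taus, rng.uniform(0, np.pi, size=20))
```

With aligned axes the segment delays simply add, while randomly oriented axes yield a smaller total DGD, reflecting the stochastic, fiber-dependent character of PMD noted above.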
PDL may also cause varying optical signal-to-noise-ratio (OSNR) for bits with different polarization, and further degrade the system performance. PDL or PDG induced OSNR degradation cannot be compensated for since the process of adding random amplified spontaneous emission (ASE) noise cannot be undone. It is known that PMD, PDL, and PDG are significant penalty sources in high-speed (e.g., 10 Gb/s and 40 Gb/s) transmissions. PMD compensation (PMDC) is normally desirable to increase system tolerance to PMD. However, due to the stochastic nature of PMD and its wavelength dependence, PMDC is normally required to be implemented for each wavelength channel individually, and is thus generally not cost-effective. Various prior art methods have been proposed to achieve PMDC simultaneously for multiple WDM channels. Channel switching is one technique that has been proposed to mitigate the overall PMD penalty in a WDM system. However, such systems sacrifice system capacity due to the use of extra channels for PMD protection. Multi-channel PMDC before wavelength de-multiplexing has also been proposed to mitigate the PMD degradation in the WDM channel having the most severe PMD. However, such a mitigation scheme may cause degradation of other channels. Another scheme for a multi-channel shared PMDC has been proposed in which the most degraded channel is switched, by optical or electrical means, to a path connected to the shared PMDC; however, the speed of PMDC is limited (by the speed of the optical or electrical switching). In current PMDC schemes, PMD induced system outages, during which the PMD penalty exceeds its pre-allocated system margin and system failure occurs, are present, though reduced. Forward-error-correction (FEC) is an effective technique for increasing system margin cost-effectively. 
It has been determined, however, that FEC cannot extend the tolerable PMD for a fixed PMD penalty at a given average bit-error-rate (BER), even though the additional margin provided by FEC can be used to increase the PMD tolerance. It has been suggested that sufficient interleaving in FEC may increase PMD tolerance. However, there is no known practical method to provide the deep interleaving needed to avoid a PMD outage which may last minutes or longer in practical systems.
Tracy Hines Tracy Lee Hines (born May 1, 1972) is an American professional auto racing driver. He was the 2000 USAC Silver Crown Champion and 2002 USAC National Sprint Car Champion. He currently does not have a full-time ride in NASCAR as he competes for Tony Stewart Racing in three USAC series. NASCAR Hines made his first attempt at a Busch race in 2000, when he attempted to qualify for the Cheez-It 200 in a car owned by Jimmy Spencer. He did not make the field. 2003 Hines broke into NASCAR in 2003, when he came to an agreement with NASCAR Craftsman Truck owner Jimb to run 5 truck races in the later portion of 2003. His career started at Indianapolis Raceway Park (IRP). Hines qualified 30th in the No. 27 Dodge Motorsports Dodge Ram and had just made it into the top-10 when he crashed into the wall, finishing 32nd. At the next race at Texas Motor Speedway, he qualified 4th, and ran in the top-15 all day, coming home with an eleventh-place finish. Hines ran his last two races that season at Martinsville Speedway and Phoenix International Raceway. At both races, Hines qualified the No. 7 in 22nd place, and finished 13th. 2004 In 2004, Tommy Baldwin signed Hines to drive three races for the Hungry Drivers program, a Busch Series competition to see who would drive his No. 6 Ragú Dodge Intrepid that season. In his debut at Texas, he started 14th and finished 20th despite a late spin. After a 25th at Talladega Superspeedway, Hines had his best finish of the year, a 17th at Michigan International Speedway. Hines continued to run in the Truck Series, replacing Matt Crafton in the No. 88 Menards Chevrolet Silverado for ThorSport Racing, competing for NASCAR Rookie of the Year. He finished 20th, 16th and 29th in the first three races, before posting a 5th-place finish at Mansfield Motorsports Speedway. Starting at Texas, Tracy Hines had a streak of 8 straight top-17 finishes, capped off by a 9th at IRP. He also led 2 laps at Gateway.
Hines finished off the 2004 season with a pair of 13ths and earned an 18th-place points finish. 2005 In 2005, the No. 88 had gone back to Crafton, and Paul Wolfe was in the No. 6 Hellmann's Dodge. Despite a lack of sponsorship, ThorSport fielded a second truck for Hines, the No. 13. In 23 races he finished in the top-20 only 7 times. He was released with two races to go in the season after two wildly disappointing years in good equipment. Hines drove one race in 2005 in the No. 43 Channellock Dodge for The Curb Agajanian Performance Group at California, where he started 26th and finished 36th after a late crash. After Wolfe was released from the No. 6, Evernham Motorsports, who now owned the car, hired Hines to drive at The Milwaukee Mile, where he started ninth and finished nineteenth. He also ran at IRP in the No. 6, starting fifth and finishing 24th. Later in the season at Texas, he attempted a Busch race for Glynn Motorsports; however, the No. 92 Ultra Comp Trailers Dodge crashed in practice and the team withdrew. 2006 Tracy Hines had been set to drive the No. 92 Glynn Motorsports Dodge in the Busch Series, but the team dissolved. Instead, he signed to drive the No. 14 Dodge Charger for FitzBradshaw Racing, with sponsorship from TakeMeOn Vacation, Bluegrass, and JaniKing. Hines was teamed with fellow Hoosier Joel Kauffman. After an aborted attempt at Rookie of the Year, Hines resigned from FitzBradshaw Racing. Hines planned to spend the rest of the season racing sprint cars. Return to open-wheel 2007 Hines raced USAC sprint, midget, and Silver Crown cars for Tony Stewart Racing. He sustained a fractured pelvis and left femur and a dislocated right knee in an off-road motorcycle wreck on April 30, 2007. 2008 Hines recorded the fastest ever midget car lap on an asphalt quarter mile at Slinger Super Speedway when he ran a 10.845 second qualifying lap on May 17, 2008.
2009–2013 Hines continued his career in the USAC ranks for several years before returning to NASCAR competition in 2013, driving in the Camping World Truck Series for ThorSport Racing in the inaugural Mudsummer Classic on the dirt at Eldora Speedway. Hines finished 13th in the event after starting in 16th. Motorsports career results NASCAR Busch Series Camping World Truck Series References External links Category:Living people Category:1972 births Category:People from New Castle, Indiana Category:Racing drivers from Indiana Category:NASCAR drivers Category:World of Outlaws drivers
Though many people would happily dig into a lovingly home-cooked meal from their parents, one vegan woman in Italy has been fined $1,170 for threatening to stab her mother for making traditional meat sauce in her presence. Italian newspaper Gazzetta di Modena reported this week that the 48-year-old woman has been ordered by local courts to pay a $520 court fine and $650 to her mother for physically threatening her with a kitchen knife, after the sexagenarian whipped up Bolognese sauce in their newly shared home. According to The Telegraph, the “newly unemployed” daughter had recently moved back into her mother’s small apartment, where the mother often cooked in the rezdore tradition of home cooks in the Emilia Romagna region. The vegan daughter told court officials that, prior to moving in with her mother, she had long been avoiding "sensory" and "olfactory contact” with animal products. Lawyers further told the Gazzetta di Modena that there had been “an escalation of aggressive episodes, always over food,” before things nearly took a turn for the fatal. Furious with the smell of meat sauce simmering on the stove one day in March 2016, the daughter reportedly grabbed a knife and made a grave threat. “If you won’t stop on your own then I’ll make you stop. Quit making ragú, or I’ll stab you in the stomach,” the angry daughter said, as per The Telegraph, prompting her mother to press charges. With the complaint making its way to Modena tribunal court well over two years later, Justice of the Peace Nadia Trifilo ultimately ruled in favor of the 69-year-old mother. She smacked the daughter with a $520 court fine and ordered her to pay $650 to her mother as compensation. The identities of the mother and daughter were not disclosed. Naturally, the Twitterverse had a whole lot to say about the wild tale.
“The existence of Bolognese sauce is definitely in the top 3 on my list of reasons for why I'm not a vegan,” one said. “Must need a steak,” another clapped back. “Someone needs some bacon in their life,” another meat lover agreed. “Had I been the judge, I would have jailed the daughter for attempted murder,” one mused on a more serious note.
Objection 2. Further, the virtuous good consists in accord with reason, as was clearly shown above (55, 4, ad 2). But that which accords with reason is natural to man; since reason is part of man's nature. Therefore virtue is in man by nature. Objection 3. Further, that which is in us from birth is said to be natural to us. Now virtues are in some from birth: for it is written (Job 31:18): "From my infancy mercy grew up with me; and it came out with me from my mother's womb." Therefore virtue is in man by nature. I answer that, With regard to corporeal forms, it has been maintained by some that they are wholly from within, by those, for instance, who upheld the theory of "latent forms" [Anaxagoras; Cf. I, 45, 8; 65, 4]. Others held that forms are entirely from without, those, for instance, who thought that corporeal forms originated from some separate cause. Others, however, esteemed that they are partly from within, in so far as they pre-exist potentially in matter; and partly from without, in so far as they are brought into act by the agent. In like manner with regard to sciences and virtues, some held that they are wholly from within, so that all virtues and sciences would pre-exist in the soul naturally, but that the hindrances to science and virtue, which are due to the soul being weighed down by the body, are removed by study and practice, even as iron is made bright by being polished. This was the opinion of the Platonists. Others said that they are wholly from without, being due to the inflow of the active intellect, as Avicenna maintained. Others said that sciences and virtues are within us by nature, so far as we are adapted to them, but not in their perfection: this is the teaching of the Philosopher (Ethic. ii, 1), and is nearer the truth. To make this clear, it must be observed that there are two ways in which something is said to be natural to a man; one is according to his specific nature, the other according to his individual nature. 
And, since each thing derives its species from its form, and its individuation from matter, and, again, since man's form is his rational soul, while his matter is his body, whatever belongs to him in respect of his rational soul, is natural to him in respect of his specific nature; while whatever belongs to him in respect of the particular temperament of his body, is natural to him in respect of his individual nature. For whatever is natural to man in respect of his body, considered as part of his species, is to be referred, in a way, to the soul, in so far as this particular body is adapted to this particular soul. In both these ways virtue is natural to man inchoatively. This is so in respect of the specific nature, in so far as in man's reason are to be found instilled by nature certain naturally known principles of both knowledge and action, which are the nurseries of intellectual and moral virtues, and in so far as there is in the will a natural appetite for good in accordance with reason. Again, this is so in respect of the individual nature, in so far as by reason of a disposition in the body, some are disposed either well or ill to certain virtues: because, to wit, certain sensitive powers are acts of certain parts of the body, according to the disposition of which these powers are helped or hindered in the exercise of their acts, and, in consequence, the rational powers also, which the aforesaid sensitive powers assist. In this way one man has a natural aptitude for science, another for fortitude, another for temperance: and in these ways, both intellectual and moral virtues are in us by way of a natural aptitude, inchoatively, but not perfectly, since nature is determined to one, while the perfection of these virtues does not depend on one particular mode of action, but on various modes, in respect of the various matters, which constitute the sphere of virtue's action, and according to various circumstances. 
It is therefore evident that all virtues are in us by nature, according to aptitude and inchoation, but not according to perfection, except the theological virtues, which are entirely from without. This suffices for the Replies to the Objections. For the first two argue about the nurseries of virtue which are in us by nature, inasmuch as we are rational beings. The third objection must be taken in the sense that, owing to the natural disposition which the body has from birth, one has an aptitude for pity, another for living temperately, another for some other virtue. Objection 2. Further, sin and virtue are contraries, so that they are incompatible. Now man cannot avoid sin except by the grace of God, according to Wisdom 8:21: "I knew that I could not otherwise be continent, except God gave it." Therefore neither can any virtues be caused in us by habituation, but only by the gift of God. Objection 3. Further, actions which lead toward virtue, lack the perfection of virtue. But an effect cannot be more perfect than its cause. Therefore a virtue cannot be caused by actions that precede it. I answer that, We have spoken above (51, A2,3) in a general way about the production of habits from acts; and speaking now in a special way of this matter in relation to virtue, we must take note that, as stated above (55, A3,4), man's virtue perfects him in relation to good. Now since the notion of good consists in "mode, species, and order," as Augustine states (De Nat. Boni. iii) or in "number, weight, and measure," as expressed in Wisdom 11:21, man's good must needs be appraised with respect to some rule. Now this rule is twofold, as stated above (19, A3,4), viz. human reason and Divine Law. And since Divine Law is the higher rule, it extends to more things, so that whatever is ruled by human reason, is ruled by the Divine Law too; but the converse does not hold. 
It follows that human virtue directed to the good which is defined according to the rule of human reason can be caused by human acts: inasmuch as such acts proceed from reason, by whose power and rule the aforesaid good is established. On the other hand, virtue which directs man to good as defined by the Divine Law, and not by human reason, cannot be caused by human acts, the principle of which is reason, but is produced in us by the Divine operation alone. Hence Augustine in giving the definition of the latter virtue inserts the words, "which God works in us without us" (Super Ps. 118, Serm. xxvi). It is also of these virtues that the First Objection holds good. Reply to Objection 2. Mortal sin is incompatible with divinely infused virtue, especially if this be considered in its perfect state. But actual sin, even mortal, is compatible with humanly acquired virtue; because the use of a habit in us is subject to our will, as stated above (Question 49, Article 3): and one sinful act does not destroy a habit of acquired virtue, since it is not an act but a habit, that is directly contrary to a habit. Wherefore, though man cannot avoid mortal sin without grace, so as never to sin mortally, yet he is not hindered from acquiring a habit of virtue, whereby he may abstain from evil in the majority of cases, and chiefly in matters most opposed to reason. There are also certain mortal sins which man can nowise avoid without grace, those, namely, which are directly opposed to the theological virtues, which are in us through the gift of grace. This, however, will be more fully explained later (109, 4). Reply to Objection 3. As stated above (1; 51, 1), certain seeds or principles of acquired virtue pre-exist in us by nature. 
These principles are more excellent than the virtues acquired through them: thus the understanding of speculative principles is more excellent than the science of conclusions, and the natural rectitude of the reason is more excellent than the rectification of the appetite which results through the appetite partaking of reason, which rectification belongs to moral virtue. Accordingly human acts, in so far as they proceed from higher principles, can cause acquired human virtues. Article 3. Whether any moral virtues are in us by infusion? Objection 1. It would seem that no virtues besides the theological virtues are infused in us by God. Because God does not do by Himself, save perhaps sometimes miraculously, those things that can be done by second causes; for, as Dionysius says (Coel. Hier. iv), "it is God's rule to bring about extremes through the mean." Now intellectual and moral virtues can be caused in us by our acts, as stated above (Article 2). Therefore it is not reasonable that they should be caused in us by infusion. Reply to Objection 1. Some moral and intellectual virtues can indeed be caused in us by our actions: but such are not proportionate to the theological virtues. Therefore it was necessary for us to receive, from God immediately, others that are proportionate to these virtues. Reply to Objection 2. The theological virtues direct us sufficiently to our supernatural end, inchoatively: i.e. to God Himself immediately. But the soul needs further to be perfected by infused virtues in regard to other things, yet in relation to God. Reply to Objection 3. The power of those naturally instilled principles does not extend beyond the capacity of nature. Consequently man needs in addition to be perfected by other principles in relation to his supernatural end. Article 4. Whether virtue by habituation belongs to the same species as infused virtue? Objection 1. It would seem that infused virtue does not differ in species from acquired virtue. 
Because acquired and infused virtues, according to what has been said (3), do not differ seemingly, save in relation to the last end. Now human habits and acts are specified, not by their last, but by their proximate end. Therefore the infused moral or intellectual virtue does not differ from the acquired virtue. Objection 2. Further, habits are known by their acts. But the act of infused and acquired temperance is the same, viz. to moderate desires of touch. Therefore they do not differ in species. Objection 3. Further, acquired and infused virtue differ as that which is wrought by God immediately, from that which is wrought by a creature. But the man whom God made, is of the same species as a man begotten naturally; and the eye which He gave to the man born blind, as one produced by the power of generation. Therefore it seems that acquired and infused virtue belong to the same species. On the contrary, Any change introduced into the difference expressed in a definition involves a difference of species. But the definition of infused virtue contains the words, "which God works in us without us," as stated above (Question 55, Article 4). Therefore acquired virtue, to which these words cannot apply, is not of the same species as infused virtue. I answer that, There is a twofold specific difference among habits. The first, as stated above (54, 2; 56, 2; 60, 1), is taken from the specific and formal aspects of their objects. Now the object of every virtue is a good considered as in that virtue's proper matter: thus the object of temperance is a good in respect of the pleasures connected with the concupiscence of touch. The formal aspect of this object is from reason which fixes the mean in these concupiscences: while the material element is something on the part of the concupiscences. 
Now it is evident that the mean that is appointed in such like concupiscences according to the rule of human reason, is seen under a different aspect from the mean which is fixed according to Divine rule. For instance, in the consumption of food, the mean fixed by human reason, is that food should not harm the health of the body, nor hinder the use of reason: whereas, according to the Divine rule, it behooves man to "chastise his body, and bring it into subjection" (1 Corinthians 9:27), by abstinence in food, drink and the like. It is therefore evident that infused and acquired temperance differ in species; and the same applies to the other virtues. The other specific difference among habits is taken from the things to which they are directed: for a man's health and a horse's are not of the same species, on account of the difference between the natures to which their respective healths are directed. In the same sense, the Philosopher says (Polit. iii, 3) that citizens have diverse virtues according as they are well directed to diverse forms of government. In the same way, too, those infused moral virtues, whereby men behave well in respect of their being "fellow-citizens with the saints, and of the household [Douay: 'domestics'] of God" (Ephesians 2:19), differ from the acquired virtues, whereby man behaves well in respect of human affairs. Reply to Objection 1. Infused and acquired virtue differ not only in relation to the ultimate end, but also in relation to their proper objects, as stated. Reply to Objection 2. Both acquired and infused temperance moderate desires for pleasures of touch, but for different reasons, as stated: wherefore their respective acts are not identical. Reply to Objection 3. God gave the man born blind an eye for the same act as the act for which other eyes are formed naturally: consequently it was of the same species. It would be the same if God wished to give a man miraculously virtues, such as those that are acquired by acts. 
But the case is not so in the question before us, as stated.
Provo’s Freedom Festival reversed its decision to block LGBTQ groups from its yearly parade following a compromise struck between opposing sides on Thursday night. Following a tense two-hour meeting, representatives with the Independence Day celebration elected to allow local queer and trans organizations to participate, with a few stipulations. Members of participating LGBTQ groups must carry American flags and dress in red, white, and blue. “Nobody is allowed to have rainbow flags,” Jerilyn Pool, founder of the local nonprofit QueerMeals, told INTO. “We had to fight to use ‘LGBTQ’ on signage.” Five LGBTQ organizations had previously been denied participation in the Freedom Festival, which is one of the largest July 4 gatherings in the nation. In addition to the Grand Parade, the month-long calendar of events includes a flag retirement ceremony, softball tournament, and a baby contest. Meanwhile, the band OneRepublic will be playing the Stadium of Fire concert at Brigham Young University. “The Fourth of July is a big deal here,” said Stephenie Larsen, founder of the Encircle LGBTQ youth drop-in center based in Provo. “The celebration is huge, and the parade is a focal point.” The initial refusals were issued on Tuesday, just hours after Provo and the Freedom Festival adopted nondiscrimination policies claiming the event would not exclude people based on their religion, faith, ethnicity, and sexual orientation. Notably, gender identity was not included in that list. But in rejecting the applications of LGBTQ groups, officials claimed their proposals for participation weren’t “patriotic enough.” LGBTQ advocates weren’t buying the excuse. In an interview prior to Thursday’s compromise, Equality Utah Executive Director Troy Williams said it was “a random clause they’ve used to try and disguise their bigotry.” “They are trying to hide their bigotry behind the American flag,” Williams told INTO in a phone conversation. 
“But there’s nothing that fuels the fires of patriotism more than people who fight for the rights and liberties that have been historically denied to them.” Larsen, though, is sadly used to being shut out of the Freedom Festival. Almost the exact same thing happened to Encircle last year. The evening before the LGBTQ youth center was set to march in the 2017 parade, Encircle’s members were scheduled to meet briefly to pick up t-shirts and learn a dance they had planned to do as they walked. But earlier in the day, organizers called and said Encircle’s inclusion was a “mistake.” “I had to go and tell them we weren’t in the parade,” Larsen remembered. “As I was driving down, I thought, ‘I should just act like this is no big deal. If I’m sad about it, I’ll make them feel worse about it.’ But of course, I started bawling as soon as I started telling them. I felt like I was saying: ‘I’m sorry, once again you’re not good enough for your community.’” Encircle held a pancake breakfast for its members the next day, but Larsen claimed the flip-flop was “tough” on LGBTQ youth who seek out the center for support and shelter. Many of these young people—who come from conservative, Mormon families—have few other places to go where they can be affirmed. “They’re used to feeling judged by other people in the community,” Larsen said. “That rejection is something they live with.” After the 2017 incident, Encircle met with the Freedom Festival organizers every two months to build bridges and have dialogues with local community members in Provo. In these regular meetings, Larsen informed them that the entire country would be watching Utah if LGBTQ groups were turned away yet again. She was right. When news broke that LGBTQ groups had been refused for a second consecutive year, social media users tweeted at OneRepublic to pull out of Stadium of Fire—which would be a major blow to Provo’s economy. 
Although the concert used to primarily host country artists like Toby Keith and Brad Paisley, it has increasingly courted Top 40 pop stars as it attempts to draw in a mainstream crowd. Five years ago, Carly Rae Jepsen and Kelly Clarkson toplined Stadium of Fire, which featured a performance from Cirque du Soleil. Miley Cyrus and the Jonas Brothers have also appeared in recent years. While advocates have hailed the Freedom Festival’s decision to reverse its earlier ban as “historic,” Larsen admitted she can’t help but have mixed emotions. “I do think it is historic,” she said. “But I get upset and frustrated because I focus on the intent. They’re just letting us in because, what, is Toyota not going to sponsor us? Is the county commissioner going to take back his money?” While groups like Encircle, Mormons Building Bridges, and Provo Pride will be participating in the Freedom Festival, some have chosen to sit out. Instead of taking part in the parade, QueerMeals will stand in solidarity with LGBTQ marchers by gathering at secure locations along the parade route. Pool said she felt uncomfortable with restrictions placed on queer and trans marchers, who she said are being forced to “hide who they are.” Obscured by a sea of American flags, paradegoers might not know the groups are LGBTQ. “To have to cloak themselves in patriotism in order to be acceptable is unfair,” claimed Pool, who identifies as a straight ally. “A lot of the queer community is about being visible and saying: ‘We’re in your community. This is who we are.’ Having a lot of those visible elements stripped away doesn’t feel good to a lot of people.” But compromise or not, many in Utah’s LGBTQ community believe the controversy shows the state is headed in the right direction. On Wednesday, Utah Sen. Orrin Hatch made an impassioned plea for LGBTQ acceptance on the floor of the Senate. 
Addressing the 32 documented suicides of Mormon youth in the three months following the release of a 2015 policy excommunicating the children of same-sex couples, Hatch said no person “should feel less because of their orientation.” “They deserve our unwavering love and support,” Hatch claimed. “They deserve our validation and the assurance that not only is there a place for them in this society but that it is far better off because of them. These young people need us and we desperately need them.” Williams said that seeing a “conservative bastion of Mormonism” follow Hatch’s lead has gotten the community “fired up.” Even as the local LGBTQ community continues to face obstacles to full inclusion, they will keep fighting. “We’re going to be louder than we ever have been,” he said.
HAMILTON BULLDOGS RELEASE GOALTENDER SCOTT DARLING Carrying three goaltenders on a hockey team’s roster is awkward. Carrying four is unwieldy. Scott Darling joined Robert Mayer, Peter Delmas and Cedrick Desjardins when he was brought in from the Wheeling Nailers of the ECHL on November 16th on a professional tryout contract. His size fueled the Bulldogs’ interest: Darling is 6’6″ and 230 lbs. The problem has been that he hasn’t been able to get that big body in front of the puck often enough to earn consideration for an American Hockey League contract. He was released Tuesday and presumably is on his way back to Wheeling. Darling dressed twice for Hamilton as a backup and got into one game. From Hamilton Bulldogs Communications- HAMILTON, ONT – Hamilton Bulldogs General Manager Marc Bergevin announced today that the Club has released goaltender Scott Darling from his Professional Try-Out Contract which he signed on November 13th. Darling, 23, appeared in one game for the Bulldogs on November 14th against the Rochester Americans playing 24 minutes and 32 seconds while stopping all eight shots he faced.
1. Field of the Invention The present invention relates to a device for setting fastening elements. 2. Description of the Related Art One known device is disclosed in DE 10 2005 054 719 B3. The known device is provided with a set control link having a set control slot, and with a rivet ram connected to a set control pin which in turn engages in the set control slot. Also present is a feed shaft rod to which the set control link is non-rotatably mounted. The device is further equipped with a drive unit, by means of which the feed shaft rod can be driven to rotate in order to move the rivet ram between a retracted, pre-installation position and an extended, installation position. In this way, a fastening element embodied particularly as an expansion rivet can be set mechanically by, for example, pushing a rivet pin in between spring arms of an expansion rivet via the movement of the rivet ram. In this device, a rivet holding head connected to the feed rod protrudes relatively little beyond an end face of a receiving housing, thus resulting in an overall compact design.
MORINGA POTENT CAPSULES

Organicveda presents to you the Moringa Potent capsules, concocted from two of the healthiest and richest sources of nutrition available on the planet – the Moringa plant and the Indian Gooseberry (Amla). This vitalizing blend of nourishment can be taken by persons of all ages, be it children, adults or seniors. Our capsules are produced with the perfect balance of the finest quality ingredients, achieving an end product which leaves the consumer invigorated and rejuvenated. The potent capsules consist of Moringa Oleifera, Amla fruit powder (Indian gooseberry) and black pepper. This is a wonderful supplement that packs a host of vitamins, minerals, nutrients, antioxidants, amino acids, protein, iron, calcium, magnesium, potassium, fiber, phyto-nutrients, etc. Piperine in black pepper is a good way to increase nutrient absorption.

Package Quantity
Ingredients Used: Moringa leaves and seeds

BENEFITS
Moringa Provides Antioxidants: Fortunately, eating a diet rich in antioxidants can help increase your blood antioxidant levels to fight oxidative stress and reduce the risk of disease. Based on ORAC value analysis, Moringa leaf powder has up to 157,600 μmol of antioxidants per 3.5 ounces (100 grams). This is even more than goji berries (枸杞, 25,300 μmol/100 grams) and Spirulina (24,000 μmol/100 grams). For more details, click here. Organic moringa seeds are very high in fiber, which can make them essential to a healthy food regimen.

AMLA FRUIT POWDER BENEFITS: Amla powder increases skin health, acts as a body coolant, flushes out toxins and acts as an antioxidant.

BLACK PEPPER POWDER BENEFITS: Piperine in black pepper is a good way to increase nutrient absorption.

SUGGESTED USE: 2 capsules at a time, two times a day, or as recommended by your healthcare professional. The appropriate dose of Moringa Potent capsules depends on several factors such as the user’s age, health, and several other conditions. 
Be sure to follow relevant directions on product labels and consult your pharmacist, physician, or other healthcare professional before using. Get in touch with Organicveda today for the best quality Moringa leaves and seeds, along with Amla fruit powder and black pepper. What are you waiting for? Buy our Moringa Potent Capsules today and enjoy the many benefits of Moringa. Moringa Potent Capsules Side Effects and Interactions: We currently have no information on side effects or interactions with food or medicines. However, before using Moringa capsules, please consult your healthcare professional.
Q: How to make a percentage button work in react-native

Edit: Was able to make it work, but now when I click the Tax button it says that the result is NaN. How do I fix it?

I have a small calculating app built in react-native. All buttons work except the Tax one. For the Tax button, I need it to add 12% to the result of either addition, subtraction, multiplication or division of two numbers. Can someone help me understand what I'm doing wrong here? Tried looking online, but wasn't able to find the solution. Would be very glad if someone points out the mistake.

import React from 'react';
import { StyleSheet, Text, View, Button, TextInput } from 'react-native';

export default class Counter extends React.Component {
  state = {
    num: 0,
  }

  inp1 = 0;
  inp2 = 0;

  handleSubtract = () => {
    this.setState({ num: this.inp1 - this.inp2 })
  }

  handleAdd = () => {
    this.setState({ num: this.inp1 + this.inp2 })
  }

  handleDivide = () => {
    this.setState({ num: this.inp1 / this.inp2 })
  }

  handleMultiply = () => {
    this.setState({ num: this.inp1 * this.inp2 })
  }

  handleTax = () => {
    var newNum = this.num / 100 * 12;
    this.setState({ num: newNum })
  }

  handleNum1 = (text) => {
    this.inp1 = parseInt(text);
  }

  handleNum2 = (text) => {
    this.inp2 = parseInt(text);
  }

  render() {
    return (
      <View style={styles.flexBox}>
        <Text style={styles.flexTitle}>Hi, welcome to my app!</Text>
        <View style={styles.inpBox}>
          <TextInput
            style={[styles.inps, {marginRight: 10}]}
            placeholder="Num1"
            keyboardType="phone-pad"
            onChangeText={this.handleNum1}
          />
          <TextInput
            style={styles.inps}
            placeholder="Num2"
            keyboardType="phone-pad"
            onChangeText={this.handleNum2}
          />
        </View>
        <View style={styles.butBox}>
          <View style={styles.button}>
            <Button onPress={this.handleSubtract} title="Subtract" />
          </View>
          <View style={styles.button}>
            <Button onPress={this.handleAdd} title="Add" />
          </View>
          <View style={styles.button}>
            <Button onPress={this.handleMultiply} title="Multiply" />
          </View>
          <View style={styles.button}>
            <Button onPress={this.handleDivide} title="Divide" />
          </View>
          <View style={[styles.button, {height: 65, width: 65}]}>
            <Button onPress={this.handleTax} title="Tax" color="#f00" />
          </View>
        </View>
        <Text style={styles.numBox}>
          {this.state.num}
        </Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  flexBox: {
    flex: 1,
    flexDirection: "column",
    justifyContent: "center",
    alignItems: "center",
  },
  flexTitle: {
    padding: 10,
  },
  inpBox: {
    flexDirection: "row",
  },
  inps: {
    width: "20%",
    height: 50,
    textAlign: "center",
  },
  butBox: {
    flexDirection: "row",
    width: "100%",
    alignItems: "center",
  },
  button: {
    width: "20%",
    height: 50,
  },
  numBox: {
    padding: 20,
    fontSize: 32,
  }
});

A: There is only one mistake in your code keeping it from working the way you want. In your handleTax function you are referring to the wrong num: this.num is undefined, and arithmetic on undefined yields NaN. Instead of this.num, refer to this.state.num, like below:

handleTax = () => {
  var newNum = this.state.num / 100 * 12;
  this.setState({ num: newNum });
}

This will get rid of the NaN result. To get the proper tax calculation (adding 12% rather than replacing the result with 12% of it), use:

var newNum = this.state.num + this.state.num * 0.12;
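The corrected formula reads the previous result from state and adds 12% to it. The arithmetic itself is easy to sanity-check outside the component; here is a quick Python sketch of the same formula (the helper name apply_tax is purely illustrative, not part of the app):

```python
def apply_tax(num, rate=0.12):
    # Mirrors the corrected JS line: num + num * 0.12
    return num + num * rate

# A result of 200 becomes 224 once 12% tax is added.
taxed = apply_tax(200)
```

Note the contrast with the buggy version, which computed num / 100 * 12 — that replaces the result with 12% of it instead of adding 12% on top.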
import sys
from . import model
from .error import FFIError

COMMON_TYPES = {}

try:
    # fetch "bool" and all simple Windows types
    from _cffi_backend import _get_common_types
    _get_common_types(COMMON_TYPES)
except ImportError:
    pass

COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE')
COMMON_TYPES['bool'] = '_Bool'    # in case we got ImportError above

for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES:
    if _type.endswith('_t'):
        COMMON_TYPES[_type] = _type
del _type

_CACHE = {}

def resolve_common_type(parser, commontype):
    try:
        return _CACHE[commontype]
    except KeyError:
        cdecl = COMMON_TYPES.get(commontype, commontype)
        if not isinstance(cdecl, str):
            result, quals = cdecl, 0    # cdecl is already a BaseType
        elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES:
            result, quals = model.PrimitiveType(cdecl), 0
        elif cdecl == 'set-unicode-needed':
            raise FFIError("The Windows type %r is only available after "
                           "you call ffi.set_unicode()" % (commontype,))
        else:
            if commontype == cdecl:
                raise FFIError(
                    "Unsupported type: %r. Please look at "
                    "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations "
                    "and file an issue if you think this type should really "
                    "be supported." % (commontype,))
            result, quals = parser.parse_type_and_quals(cdecl)   # recursive

        assert isinstance(result, model.BaseTypeByIdentity)
        _CACHE[commontype] = result, quals
        return result, quals

# ____________________________________________________________
# extra types for Windows (most of them are in commontypes.c)

def win_common_types():
    return {
        "UNICODE_STRING": model.StructType(
            "_UNICODE_STRING",
            ["Length", "MaximumLength", "Buffer"],
            [model.PrimitiveType("unsigned short"),
             model.PrimitiveType("unsigned short"),
             model.PointerType(model.PrimitiveType("wchar_t"))],
            [-1, -1, -1]),
        "PUNICODE_STRING": "UNICODE_STRING *",
        "PCUNICODE_STRING": "const UNICODE_STRING *",
        "TBYTE": "set-unicode-needed",
        "TCHAR": "set-unicode-needed",
        "LPCTSTR": "set-unicode-needed",
        "PCTSTR": "set-unicode-needed",
        "LPTSTR": "set-unicode-needed",
        "PTSTR": "set-unicode-needed",
        "PTBYTE": "set-unicode-needed",
        "PTCHAR": "set-unicode-needed",
    }

if sys.platform == 'win32':
    COMMON_TYPES.update(win_common_types())
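The resolve_common_type function above uses a common try/except-KeyError memoization pattern: look the name up in _CACHE, and only on a miss compute the result and store it before returning. That pattern can be sketched in isolation (the names resolve and compute here are illustrative, not part of the cffi API):

```python
_CACHE = {}

def resolve(key, compute):
    """Return the cached value for key, computing it only on the first miss."""
    try:
        return _CACHE[key]
    except KeyError:
        result = compute(key)
        _CACHE[key] = result
        return result

calls = []
def compute(name):
    calls.append(name)          # record how many times we actually compute
    return name.upper()

first = resolve("bool", compute)
second = resolve("bool", compute)   # served from _CACHE; compute is not called again
```

Note that, as in the cffi code, a resolution that raises before the _CACHE assignment leaves nothing cached, so an unsupported name raises the same error again on every call.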
NHL salaries since Bettman took over as commish ( 1993 ) have gone from 580,000 a year to 2.4 million.. and NHL total revenue has gone up to 3 Billion from 400 million.. So , maybe Gary should drop his drawers at center ice and the players could line up to kiss his butt? or just maybe he has fulfilled his commitment to "grow the game" and the owners can line up? Even if as Dogsalmon says .......................... ( waiting for it ) Personal note.. When I started watching the Nucks play (1970) average NHL salaries were 18,000.. about the same as I made a year.. I've already told all of my family I don't want anything NHL related for Christmas or even the foreseeable future. I know I can't be the only one. Season ticket holders may be replaced by the next guy in line but I can do my part and not support this nonsense. I will probably watch games when they lace em up again but I certainly won't be contributing to HRR in any way for a long time. I really hope when they pull the pin on the season that they do not have a weighted lottery for the 2013 draft. 30 teams, 30 balls, not 3 balls for teams like the Leafs, Flames and Coilers because they sucked shit for the last three years, and 1 ball for the teams who actually made the playoffs. Nobody in their right mind can predict what would have happened had there been a season this year. The Canucks could have gotten ravaged with injuries and sold off all their UFAs at the deadline and finished in last. To have a weighted lottery based on what happened from 2009-12 with ZERO regard to the cancelled season except a guess is disingenuous and bush league but then again it is the NHL so anything is possible. Blob Mckenzie wrote:I really hope when they pull the pin on the season that they do not have a weighted lottery for the 2013 draft. 30 teams, 30 balls, not 3 balls for teams like the Leafs, Flames and Coilers because they sucked shit for the last three years, and 1 ball for the teams who actually made the playoffs.
Q: Why do some PHP email bodies contain the surrounding HTML code while others do not? And this is not PHPMailer

I have various emails being sent out via PHP mail. The problem is that when I receive the email, the body is displayed properly without any surrounding HTML, but for some reason other people I am testing with receive it with the HTML showing up. Here is an example of one of the emails being sent:

$Email = $result['Email'];
$check_profi = $result['check_profi'];
$check_reply = $result['check_reply'];

if($prof->id != $auth->id && $check_profi == 'checked') {
    $to = $Email;
    $subject = "$auth->first_name $auth->last_name left you a comment on Blah.com";
    $message = "$auth->first_name $auth->last_name left you a comment on Blah.com: <br /><br />\"$body\"<br /><br />
    <a href='http://www.Blah.com.php?id=" . $prof->id . "'>Click here to view</a><br /><br />Do LIFE,<br />";
    $from = "Blah <noreply@Blah.org>";
    $headers = 'MIME-Version: 1.0' . "\r\n";
    $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
    $headers .= "From: $from";
    mail($to, $subject, $message, $headers);
}

PHPMailer has been suggested to me, but that is a long way around something simple I need to do. I already have my mail all set up and it works great, except for this small issue.

A: Perhaps you're forgetting some necessary headers. Normally I use a specific mail library to avoid these issues, like PHPMailer ( http://sourceforge.net/projects/phpmailer/ )
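The symptom described here is typical of a missing or mangled Content-type header: without Content-type: text/html, a mail client treats the body as plain text and displays the raw tags. The effect of that header can be illustrated with Python's standard-library email module (the addresses below are placeholders):

```python
from email.mime.text import MIMEText

# Passing "html" as the subtype sets the Content-Type header to text/html,
# which tells the receiving client to render the markup instead of showing it.
msg = MIMEText("<b>Hello</b><br />You have a new comment.", "html")
msg["Subject"] = "New comment"
msg["From"] = "noreply@example.com"
msg["To"] = "user@example.com"

content_type = msg["Content-Type"]  # begins with "text/html"
```

Clients that show the raw HTML are the ones whose copy of the message arrived without this header intact, which is why the behavior can differ between recipients.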
Men's Volleyball Menzel, Davis Represent USA in International Play Aug 12, 2011 SANTA BARBARA, Calif. - Having finished in second place in the NCAA tournament this past April, one would think that UC Santa Barbara stars Jeff Menzel and Dylan Davis would be content to take some well-deserved R&R. Not so for the pair of Gauchos, as both were selected to represent the USA in international play over the summer. Menzel, who ranks third all-time in kills at UCSB, is starting at outside hitter for Team USA at the World University Games held in Shenzhen, China. In his first game on August 12, Menzel led his team to a straight-set win over Team Mexico on the strength of 15 kills and a hitting percentage of .407, tops among all players. Menzel is lining up on the floor against other phenoms such as opposite Murphy Troy, who earned the AVCA National Player of the Year award for his work at USC this past season. The U.S. will be looking to medal for the first time since 2007, when they captured the bronze in Thailand. Dylan Davis, who will be the only returning starter for the Gauchos in 2012, was selected to the USA Men's Junior National Team (open to players aged 21 and under) that headed to Rio de Janeiro to participate in the FIVB Volleyball Men's Junior World Championship Brazil. Davis, who started at middle blocker, was a key cog on a team that finished the tournament in fourth place, the highest finish ever for the junior team. The team had previously never finished higher than seventh place in the 16-team tournament. The fourth-place finish was still disappointing for Team USA, whose stated goal was to bring a medal home. The Americans went 2-1 in each of the two preliminary rounds before losing to Argentina in the semifinals and to Serbia in the bronze medal match. While Davis' time representing the USA is complete for the moment, Menzel and the rest of Team USA have four preliminary matches left before the quarterfinal stage of the tournament. To follow the rest of the action, go here.
Bilstone Bilstone is a small village in the Hinckley and Bosworth district of Leicestershire, England. It lies west of the county town and city of Leicester, and east of Twycross and the A444 road. The village forms part of the civil parish of Shackerstone. The population is included in the civil parish of Market Bosworth. Half a mile to the south, on Gibbet Lane, stood a gibbet post. It dated from 1800, but had disappeared by 1988. The post was connected with a contemporary murder. At the west of the village is a Grade II listed early 19th-century farmhouse. At the north of the village, on Mill Lane, is a disused 18th-century watermill with adjoining 19th-century buildings. The mill was still operational in the 1950s; today none of its machinery survives. Bilstone is listed in the Domesday Book in the Guthlaxton Hundred of Leicestershire, with two ploughlands, three households and three freemen. In 1066 Countess Godiva was Lord; she remained as such in 1086, also becoming Tenant-in-chief to William I. In 1870 Bilstone was in the parish of Norton Juxta Twycross, with a population of 116 and 25 houses. John Grundy, Sr., land surveyor and civil engineer, was born in Bilstone c. 1696. References External links "Bilstone", Genuki Category:Villages in Leicestershire Category:Hinckley and Bosworth
1. Field of the Invention

The present invention concerns a softening agent and, more particularly, a concentrated softening agent for use on clothing which exhibits less viscosity increase with age and is capable of providing various kinds of fabrics with excellent softness and antistatic properties.

2. Description of the Prior Art

During wearing and repeated washing, fabric-processing chemicals are washed out of clothing, or the clothing itself is hardened by deterioration, resulting in an undesired feel. In view of the above, softening agents capable of providing fabrics with softness and antistatic properties have generally been used in homes. At present, most of the commercially available home-use softening agents comprise, as the main ingredient, cationic surface active agents having one or two long-chain alkyl groups per molecule, for example, di(hardened tallow alkyl)dimethylammonium salts. These softening base materials comprising such quaternary ammonium salts as the main ingredient are less water soluble and are usually produced in the form of a 3 to 5 wt% aqueous dispersion or emulsion. Along with the increase in clothing for which softening agents are used, there has been keen demand for a concentrated softening agent for clothing in the form of a highly concentrated aqueous dispersion, in order to reduce distribution and packaging costs and the amount of storage space required in homes and shops. However, if the concentration of the softening base material exceeds 5% by weight, the viscosity of the aqueous dispersion increases remarkably, causing various handling troubles.
For producing softening agents at such a high concentration, there have been known, for example:

(1) a method of adding a water-soluble cationic surface active agent,
(2) a method of adding an ethylene oxide adduct of a higher alcohol or alkyl phenol,
(3) a method of adding urea or ethylene glycol, and
(4) a method of adding a water-soluble salt.

(a) from 10 to 20% by weight of one or more quaternary ammonium salts,
(b) from 0.5 to 3% by weight of an addition product of (i) from 10 to 50 mols of alkylene oxide containing ethylene oxide as an essential ingredient with (ii) one mol of an unsaturated or branched alcohol having 12 to 24 carbon atoms or an unsaturated or branched aliphatic acid having 12 to 24 carbon atoms,
(c) from 0.5 to 2.0% by weight of a monohydric alcohol having 1 to 3 carbon atoms,
(d) from 3 to 15% by weight of a di- or tri-valent polyol having 2 to 3 carbon atoms,
(e) from 0.05 to 0.4% by weight of an inorganic salt, and
(f) from 0.3 to 5% by weight of one or more polyether compounds or derivatives thereof prepared by adding an alkylene oxide containing ethylene oxide as the essential ingredient to a compound having three or more active hydrogen atoms, in which the total weight of the polyoxyethylene chain moiety is more than 60% of the entire weight and the molecular weight is from 5,000 to 2,000,000.
R2a: saturated or unsaturated, linear or branched C8-C24 aliphatic hydrocarbon group or hydroxy-substituted aliphatic hydrocarbon group;
R3a, R4a, R6a: C1-C3 alkyl group, hydroxy-substituted alkyl group or ##STR2## in which n = 1-10 and Ya = hydrogen or CH3;
R5a: saturated or unsaturated branched C24-C36 aliphatic hydrocarbon group or substituted aliphatic hydrocarbon group;
R7a, R8a: saturated or unsaturated, linear or branched C7-C21 aliphatic hydrocarbon group or substituted aliphatic hydrocarbon group;
A, B: C1-C3 alkylene group;
Xa: CH3SO4, C2H5SO4, CnH2n+1COO (in which n = 0-17), CnH2n+1OPO3 (in which n = 8-18), HOCH2COO, ##STR3## or halogen.

However, methods (1)-(3) provide no satisfactory effect, since either the concentration is insufficient or the viscosity increases with time. In the case of method (4), although an effect of lowering the initial viscosity can be obtained, there is no satisfactory effect in suppressing the increase of viscosity with aging. In addition, if the salt is added in a large amount, the aqueous dispersion tends to separate; accordingly, satisfactory concentrated softening agents for use on clothing have not yet been obtained.
But last month I spent two weeks in Tanzania’s rural regions, speaking with mid-level and frontline education officials who are closer to supervising, managing and delivering education. I was surprised to hear similar messages. The ‘recent changes,’ ‘new leadership,’ and ‘our President John Pombe Magufuli’ were phrases repeated by many. I heard about greater accountability and monitoring – that teachers were scared of not performing, teacher attendance was improving, and ward and district supervision had intensified. I also heard that payments to schools – for capitation grant and free education – were more regular; that desks were being ordered; and that parents were more aware of their entitlement for education, having heard the president talking on the news. These are still early days, and my anecdotes do not present a complete picture - but they do offer exciting examples of how a change of leadership could potentially filter throughout the system. The questions are whether this talk will turn into behavioural change, and whether this change, in turn, will be sustainable. To me, the situation has strong parallels with the questions the RISE country research team is asking through its new project in Tanzania. The RISE team is examining the effects of educational reforms that predate Magufuli’s presidency, but capture the same spirit of aiming to change the status quo. The RISE team is embarking on an analysis of Tanzania’s national education programme, Big Results Now in Education. This impressive reform effort was also initiated by strong leadership, with the previous Government launching a programme in six priority sectors in 2012 to aid transition to middle income status. With leadership from the previous president and consultation with over 30 organisations from central and local government, civil society, academia and development partners, the reform won substantial buy-in for its ambitious project components and outcome targets. 
If Big Results Now achieves its intended impact goals, it could really transform education outcomes for children in Tanzania. The programme was established to tackle accountability mechanisms and incentives, the way teachers teach, school financing, and teacher motivation. Last month I heard multiple times that motivation is low due to unreliable salary payments to teachers. Studies have found teachers to be absent from the classroom more than 50 percent of the time (see here and here), with this absenteeism often attributed to motivation. If the programme has made progress towards addressing this problem, the impacts on education could be transformative. The RISE team will be researching whether the anticipated impacts of the Big Results Now reform package are indeed borne out. They will be looking at a number of components and routes through which the reforms aim to improve learning outcomes. Critically, the RISE team will try to understand not only what worked, but why. A key part of answering the ‘why’ question is about understanding how coalitions for reform were founded and sustained, and about how behaviour changed throughout the system. What was going on back in 2012 that led to this big push from stakeholders? Will any momentum be maintained and supported by the new government? And how does this trickle down to action and implementation by all the actors in the system – teachers, local government officers, local politicians, and the community? I’m excited to see what emerges over the coming years, as the research team starts finding answers, providing policymakers with a strong understanding of how, with the right circumstances, rapid improvements in learning outcomes might be achieved. This will help ensure that positive talk actually turns into concrete benefits for millions of children. Nicola Ruddle is an education specialist at Oxford Policy Management and leads the Country Engagement workstream on the RISE team.
RISE blog posts reflect the views of the authors and do not necessarily represent the views of the organisation or our funders.
Q: How to tell TypeScript there is no undefined in an Array?

I have the error: Type '(string | undefined)[]' is not assignable to type 'string | string[] | undefined', because my function this.errors has the type string | string[] | undefined. But before returning the array I filter out the undefined values, so there should be none left; as far as the compiler knows, though, each element can still be string or undefined. The definition of setErrors accepts an array of errors. This is my error: This is my function:

this.setErrors(() => {
    const errors = validators.map(val => val(this.inputValue));
    const filteredErrors = errors.filter(el => el);
    return filteredErrors && filteredErrors.length !== 0 ? filteredErrors : undefined;
});

Validators:

const REGEXP_UPPERCASE = /[A-Z]/;
const REGEXP_LOWERCASE = /[a-z]/;
const REGEXP_ONE_DIGIT = /.*[0-9].*/;

export const newPasswordValidators = (args: { min: number; max: number }) => {
    const { min, max } = args;
    return {
        minMaxLength: (value?: string) => value && (value.length < min || value.length > max) ? 'Not valid length' : undefined,
        requiredToFill: (value?: string) => !value ? 'Required to fill.' : undefined,
        uppercaseLetter: (value?: string) => value && !REGEXP_UPPERCASE.test(value) ? 'Use at least one uppercase letter.' : undefined,
        lowercaseLetter: (value?: string) => value && !REGEXP_LOWERCASE.test(value) ? 'Use at least one lowercase letter.' : undefined,
        atLeastOneDigit: (value?: string) => value && !REGEXP_ONE_DIGIT.test(value) ? 'Use at least one digit.' : undefined,
    };
};

Super class method:

public setErrors(errors: string | string[] | undefined | (() => (string | string[] | undefined))) {
    this._errors = errors;
}

Can you please tell me how I can get rid of this error?

A: Did you ever hear of typecasting? (To be exact, it's a type assertion in TS, since due to JS' dynamic nature no runtime conversion takes place.)
Sounds like this is what you need here. There are two ways to do it, which produce basically the same result:

const x: string[] = <string[]>strAndUndefinedArr;
const y: string[] = strAndUndefinedArr as string[];

This works everywhere; you can also do stuff like

functionCall(<string[]>arr);
// or
functionCall(arr as string[]);

You can read more on this here
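A type assertion only silences the compiler; it checks nothing at runtime. An alternative worth knowing (not from the answer above; filterDefined is a hypothetical helper name) is a user-defined type guard in the filter callback, which makes TypeScript narrow the element type for real:

```typescript
// A type-predicate filter: the `v is string` return type tells the compiler
// that every element surviving the filter is a string, so no assertion is needed.
function filterDefined(values: (string | undefined)[]): string[] {
  return values.filter((v): v is string => v !== undefined);
}

const raw = ['Not valid length', undefined, 'Use at least one digit.'];
const filtered: string[] = filterDefined(raw); // typed string[], no cast
```

Applied to the question's code, returning filterDefined(errors) when it is non-empty (and undefined otherwise) would satisfy the string | string[] | undefined parameter of setErrors without any assertion.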
Ten days into the uprising in Benghazi, Libya, the United Nations’ Human Rights Council established the International Commission of Inquiry on Libya. The purpose of the Commission was to “investigate all alleged violations of international human rights law in Libya.” The broad agenda was to establish the facts of the violations and crimes and to take such actions as to hold the identified perpetrators accountable. On June 15, the Commission presented its first report to the Council. This report was provisional, since the conflict was still ongoing and access to the country was minimal. The June report was no more conclusive than the work of the human rights non-governmental organizations (such as Amnesty International and Human Rights Watch). In some instances, the work of investigators for these NGOs (such as Donatella Rovera of Amnesty) was of higher quality than that of the Commission. Due to the unfinished war and then the unsettled security situation in the country in its aftermath, the Commission did not return to the field till October 2011, and did not begin any real investigation before December 2011. On March 2, 2012, the Commission finally produced a two-hundred-page document that was presented to the Human Rights Council in Geneva. Little fanfare greeted this report’s publication, and the HRC’s deliberation on it was equally restrained. Nonetheless, the report is fairly revelatory, making two important points: first, that all sides on the ground committed war crimes, with no mention at all of a potential genocide conducted by the Qaddafi forces; second, that there remains a distinct lack of clarity regarding potential NATO war crimes. Not enough can be made of these two points. They strongly suggest that the rush to a NATO “humanitarian intervention” might have been made on exaggerated evidence, and that NATO’s own military intervention might have been less than “humanitarian” in its effects.
It is precisely because of a lack of accountability by NATO that there is hesitancy in the United Nations Security Council for a strong resolution on Syria. “Because of the Libyan experience,” the Indian Ambassador to the UN Hardeep Singh Puri told me in February, “other members of the Security Council, such as China and Russia, will not hesitate in exercising a veto if a resolution – and this is a big if – contains actions under Chapter 7 of the UN Charter, which permits the use of force and punitive and coercive measures.” Crimes Against Humanity. The Libyan uprising began on February 15, 2011. By February 22, the UN Human Rights Chief Navi Pillay claimed that two hundred and fifty people had been killed in Libya, “although the actual numbers are difficult to verify.” Nonetheless, Pillay pointed to “widespread and systematic attacks against the civilian population” which “may amount to crimes against humanity.” Pillay channeled the Deputy Permanent Representative to the UN from Libya, Ibrahim Dabbashi, who had defected to the rebellion and claimed, “Qaddafi had started the genocide against the Libyan people.” Very soon, world leaders were using the two concepts, “genocide” and “crimes against humanity,” interchangeably. These concepts created a mood in which Qaddafi’s forces were either already indiscriminately killing vast numbers of people or poised for a massacre of Rwandan proportions. Courageous work by Amnesty International and Human Rights Watch last year, and much later the 2012 report from the UN, belies this judgment (as does my forthcoming book, Arab Spring, Libyan Winter, AK Press, which goes through the day-by-day record and shows two things: that both sides used excessive violence, and that the rebels seemed to have the upper hand for much of the conflict, with Qaddafi’s forces able to recapture cities but unable to hold them). The UN report is much more focused on the question of crimes committed on the ground.
This is the kind of forensic evidence in the report: (1) In the military base and detention camp of Al Qalaa: “Witnesses, together with the local prosecutor, uncovered the bodies of 43 men and boys, blindfolded and with their hands tied behind their backs.” Qaddafi forces had shot them. Going over many of these kinds of incidents, and the indiscriminate firing of heavy artillery into cities, the UN report notes that these amount to a war crime or a crime against humanity. (2) “Over a dozen Qadhafi soldiers were reportedly shot in the back of the head by thuwar [rebel fighters] around 22-23 February 2011 in a village between Al Bayda and Darnah. This is corroborated by mobile phone footage.” After an exhaustive listing of many such incidents, and of the use of heavy artillery against cities, notably Sirte, the UN report suggests a preponderance of evidence of the war crime of murder or of crimes against humanity. There is no mention of genocide in the report, and none of any organized civilian massacre. This is significant because UN Resolution 1973, which authorized the NATO war, was premised on “the widespread and systematic attacks currently taking place in the Libyan Arab Jamahiriya against the civilian population” which “may amount to crimes against humanity.” There was no mention in Resolution 1973 of the disproportionate violence of the thuwar against the pro-Qaddafi population (already reported by al-Jazeera by February 19), a fact that might have given pause to the UN as it allowed NATO to enter the conflict on the rebels’ behalf. NATO’s partisan bombardment allowed the rebels to seize the country faster than they might have in a more protracted war, but it also gave them carte blanche to continue with their own crimes against humanity. With NATO backing, it was clear that no one was going to properly investigate the rebels’ behavior, and no one was going to allow a criminal prosecution of those crimes against humanity.
Violence of this kind by one’s allies is never to be investigated as the Allies found out after World War 2 when there was no assessment of the criminal firebombing of, for example, Dresden. No wonder that the UN Report notes that the Commissioners are “deeply concerned that no independent investigation or prosecution appear to have been instigated into killings committed by thuwar.” None is likely. There are now over eight thousand pro-Qaddafi fighters in Libyan prisons. They have no charges framed against them. Many have been tortured, and several have died (including Halah al-Misrati, the Qaddafi era newscaster). The section of the UN report on the town of Tawergha is most startling. The thirty thousand residents of the town were removed by the Misratan thuwar. The general sentiment among the Misratan thuwar was that the Tawerghans were given preferential treatment by the Qaddafi regime, a claim disputed by the Tawerghans. The road between Misrata and Tawergha was lined with slogans such as “the brigade for purging slaves, black skin,” indicating the racist cleansing of the town. The section on Tawergha takes up twenty pages of the report. It is chilling reading. Tawerghans told the Commission “that during ‘interrogations’ they were beaten, had hot wax poured in their ears and were told to confess to committing rape in Misrata. The Commission was told that one man had diesel poured on to his back which was then set alight; the same man was held in shackles for 12 days.” This goes on and on. The death count is unclear. The refugees are badly treated as they go to Benghazi and Tripoli. 
To the Commission, the attacks against Tawerghans during the war “constitute a war crime” and those that have taken place since “violate international human rights law” and a “crime against humanity.” Because of the “current difficulties faced by the Libyan Government,” the Commission concludes, it is unlikely that the government will be able to bring justice for the Tawerghans and to undermine the “culture of impunity that characterizes the attacks.” NATO’s Crimes. For the past several months, the Russians have asked for a proper investigation through the UN Security Council of the NATO bombardment of Libya. “There is great reluctance to undertake it,” the Indian Ambassador to the UN told me. When the NATO states in the Security Council wanted to clamor for war in February-March 2011, they held discussions about Libya in an open session. After Resolution 1973 and since the war ended, the NATO states have only allowed discussion about Libya in a closed session. When Navi Pillay came to talk about the UN Report, her remarks were not for the public. Indeed, when it became clear to NATO that the UN Commission wished to investigate NATO’s role in the Libyan war, Brussels balked. On February 15, 2012, NATO’s Legal Adviser Peter Olson wrote a strong letter to the Chair of the Commission. NATO accepted that the Qaddafi regime “committed serious violations of international law,” which led to the Security Council Resolution 1973. What was not acceptable was any mention of NATO’s “violations” during the conflict, “We would be concerned, however, if ‘NATO incidents’ were included in the Commission’s report as on a par with those which the Commission may ultimately conclude did violate law or constitute crimes. 
We note in this regard that the Commission’s mandate is to discuss ‘the facts and circumstance of….violations [of law] and…crimes perpetrated.’ We would accordingly request that, in the event the Commission elects to include a discussion of NATO actions in Libya, its report clearly state that NATO did not deliberately target civilians and did not commit war crimes in Libya.” To its credit, the Commission did discuss the NATO “incidents.” However, there were some factual problems. The Commission claimed that NATO flew 17,939 armed sorties in Libya. NATO says that it flew “24,200 sorties, including over 9,000 strike sorties.” What the gap between the two numbers might tell us is not explored in the report or in the press discussion subsequently. The Commission points out that NATO did strike several civilian areas (such as Majer, Bani Walid, Sirte, Surman, Souq al-Juma) as well as areas that NATO claims were “command and control nodes.” The Commission found no “evidence of such activity” in these “nodes.” NATO contested both the civilian deaths and the Commission’s doubts about these “nodes.” Because NATO would not fully cooperate with the Commission, the investigation was “unable to determine, for lack of sufficient information, whether these strikes were based on incorrect or outdated intelligence and, therefore, whether they were consistent with NATO’s objective to take all necessary precautions to avoid civilian casualties entirely.” Three days after the report was released in the Human Rights Council, NATO’s chief Anders Fogh Rasmussen denied its anodyne conclusions regarding NATO. And then, for added effect, Rasmussen said that he was pleased with the report’s finding that NATO “had conducted a highly precise campaign with a demonstrable determination to avoid civilian casualties.” There is no such clear finding. The report is far more circumspect, worrying about the lack of information to make any clear statement about NATO’s bombing runs. 
NATO had conducted its own inquiry, but did not turn over its report or raw data to the UN Commission. On March 12, UN Secretary General Ban Ki-moon went to the UN Security Council and stated that he was “deeply concerned” about human rights abuses in Libya, including the more than eight thousand prisoners held in jails with no judicial process (including Saif al-Islam Qaddafi, who should have been transferred to the Hague by NATO’s logic). Few dispute this part of the report. The tension in the Security Council is over the section on NATO. On March 9, Maria Khodynskaya-Golenishcheva of the Russian Mission to the UN in Geneva noted that the UN report omitted to explore the civilian deaths caused by NATO. “In our view,” she said, “during the NATO campaign many violations of the standard of international law and human rights were committed, including the most important right, the right to life.” On March 12, Russia’s Foreign Minister Sergei Lavrov accused NATO of “massive bombings” in Libya. It was in response to Lavrov’s comment that Ban’s spokesperson Martin Nesirky pointed out that Ban accepts “the report’s overall finding that NATO did not deliberately target civilians in Libya.” NATO is loath to permit a full investigation. It believes that it has the upper hand, with Libya showing how the UN will now use NATO as its military arm (or else how the NATO states will be able to use the UN for its exercise of power). In the Security Council, NATO’s Rasmussen notes, “Brazil, China, India and Russia consciously stepped aside to allow the UN Security Council to act” and they “did not put their military might at the disposal of the coalition that emerged.” NATO has no challenger. This is why the Russians and the Chinese are unwilling to allow any UN resolution that hints at military intervention. They fear the Pandora’s box opened by Resolution 1973. Vijay Prashad’s new book, Arab Spring, Libyan Winter (AK Press) will be out in late March. 
On March 25, he will be speaking at the plenary panel of the United National Anti-War Coalition National Conference in Stamford, CT, alongside Bill McKibben, Richard Wolff and Nada Khader on “Global Economic Meltdown, Warming and War.”
There continues to be a misunderstanding in the financial planning industry about the effectiveness and the benefits of using funds from a Reverse loan to protect a client’s principal from running out. I thought the following article was quite informative; it is based upon an analysis of the probability of retaining equity and investment principal in the future if a person also utilizes a Reverse loan as part of their retirement strategy. “Two researchers proved through analysis published in February that a reverse mortgage credit line can lead to “substantially greater cash flow survival probabilities” for people who are planning for retirement. Published in the Journal of Financial Planning, Barry Sacks, Ph.D. and Stephen Sacks, Ph.D. detail three strategies for using home equity in the form of a reverse mortgage credit line to increase the safe maximum initial rate of retirement income withdrawals. Examining a last resort strategy; a credit line strategy used after other investments have shown negative returns; and drawing upon the reverse mortgage credit line first, before other forms of investment, Sacks and Sacks find that the retiree’s portfolio plus home equity net worth after 30 years is about twice as likely to be greater when one of the latter two strategies is used. “The conventional wisdom holds that home equity, drawn upon in the form of a reverse mortgage (discussed below) or similar product, should be used as a last resort, only if and when the account is exhausted,” the authors write. “This is a rather passive approach. We show that the probability of cash flow survival is substantially enhanced by reversing the conventional wisdom.” The reverse mortgage is not necessarily the best option for everyone, they write, but for those who do decide to take a reverse mortgage, the research shows how it can best benefit them in retirement.
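The strategy comparison described above can be sketched as a toy Monte Carlo simulation. Everything below is a hypothetical illustration: the balances, withdrawal amount, return distribution, and the two strategy rules are stand-ins for the mechanics the study examines, not figures or code from Sacks and Sacks:

```typescript
// Seeded PRNG (mulberry32) so runs are reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0; seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

type Strategy = "lastResort" | "coordinated";

// Estimate the probability that a retiree's cash flow survives 30 years.
// "lastResort": spend the portfolio first, touch the credit line only when
// the portfolio is exhausted. "coordinated": draw from the credit line in
// the year after a negative portfolio return, to avoid selling into losses.
function survivalProbability(strategy: Strategy, trials: number, seed = 42): number {
  const rand = mulberry32(seed);
  let survived = 0;
  for (let i = 0; i < trials; i++) {
    let portfolio = 500_000;   // hypothetical investment account
    let creditLine = 200_000;  // hypothetical reverse-mortgage credit line
    const withdrawal = 30_000; // hypothetical annual income need
    let lastReturn = 0.05;
    let ok = true;
    for (let year = 0; year < 30; year++) {
      const r = -0.15 + 0.4 * rand(); // annual return in [-15%, +25%]
      portfolio *= 1 + r;
      const useCredit =
        strategy === "lastResort"
          ? portfolio < withdrawal            // only once the portfolio is gone
          : lastReturn < 0 && creditLine > 0; // after a down year
      if (useCredit && creditLine >= withdrawal) {
        creditLine -= withdrawal;
      } else if (portfolio >= withdrawal) {
        portfolio -= withdrawal;
      } else if (portfolio + creditLine >= withdrawal) {
        creditLine -= withdrawal - portfolio;
        portfolio = 0;
      } else {
        ok = false; // cash flow failed this trial
        break;
      }
      lastReturn = r;
    }
    if (ok) survived++;
  }
  return survived / trials;
}
```

Running survivalProbability for both strategies over the same seeded return sequences illustrates the mechanism the authors point to: avoiding portfolio withdrawals right after down years tends to improve 30-year cash-flow survival, even though both strategies draw the same total income.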
The use of a credit line planned in advance is far more beneficial than the “last resort” strategy, they find.” View the full reverse mortgage analysis. Written by Elizabeth Ecker Testimonials Dear Lorraine, You began the process for me, the process of breathing again. We were both working our display booths at the Ventura County buildings; mine with the County Fire Department, and you represented your advisory company. As you described the highlights of a Reverse mortgage to me, it dawned on me that this could be the answer to my problem, i.e., how I could pay for my wife's medical condition. After that lucky meeting, I checked out your company and found out it was doing business with FHA, and apparently doing so without trouble. I checked out 3 other similar agencies, all with similar track records. So I stayed with my original choice and didn't regret it for a single moment. So I thank you once again and strongly recommend your company to any other seekers of a compassionate and willing helper. Sincerely, Raymond A. O'Grady Newbury Park, CA
Tom Coburn was the last honest man in Washington. In politics today, you have Democrats and Republicans. You have so-called liberals and conservatives. And you have self-styled “progressives” and libertarians. These people love labels. They love being labeled. That’s because they love talking about themselves. And they love it, even more, to hear other people talking about them. “She’s a real ‘progressive,’ ” people say about the newest child-legislator as she pounds on the rostrum demanding that more of the money you earned be taken from you and used on old, broken-down, evil socialist scams that have been debunked for over 100 years now. “Progressive”? Or, “Look at that great libertarian talk!” they say about some old coot demanding an end to the “war on drugs” in America — a truly cockamamie scheme that always winds up having you and me pay more taxes rehabbing losers, jailing them for other crimes, or supplying them directly with needles and methadone. This was not Tom Coburn. He was a doctor. He delivered babies. If he screwed up, if he failed to know exactly what he was doing, if he decided to grandstand in the delivery room, somebody would die. Such principles, of course, run counter to all that Washington stands for today. When Dr. Coburn refused to give up his practice back home, Senate Democratic Leader Harry Reid — one of the most dishonest and destructive people to ever occupy a seat in Congress — slapped Dr. Coburn with “ethics charges” for earning outside income. You see, in Washington, open theft is the only acceptable form of income. Dr. Coburn retaliated and settled the “ethics charges” by continuing to deliver babies back home — for free. Dr. Coburn came to Washington to represent the good people of Oklahoma with the purest of intentions. He came as a citizen legislator — precisely as the Founders intended — to guard ferociously the interests of his constituents back home. 
Those interests also happened to line up exactly with the interests of most constituents around the country. In a perfect display of his electoral humility, Dr. Coburn is among the rarest of politicians who kept every promise he ever made to limit his own terms in Congress. Why does it always seem like only the good are cut down by principles? Dr. Coburn did not mount the barricades and proclaim that the federal government served no purpose whatsoever. Nor was he a demagogue who claimed the federal government was the answer to every problem. He understood — as the Founders did — that there are functions only a federal government can perform. Dr. Coburn just wanted the government to perform those functions wisely, effectively and economically. Most important — and this is what really set him apart from so many of the gasbags in Washington these days — Dr. Coburn was not interested in grandstanding without result. Or, worse still, grandstanding only to emerge with a legislative product that did more harm than good. He was never a member of any Suicide Caucus. He was not interested in “poison pills” or legislative Scud missiles. Dr. Coburn never held a leadership position. He was never a committee chairman. He learned the rules of the House and the Senate and used those rules to chip away at the federal Leviathan. Certainly, he was an agonizing thorn in the side of Republican leadership. Yet even Republican Leader Mitch McConnell realized Dr. Coburn’s genius, saying that his nickname “Dr. No” failed to fully capture him. Dr. Coburn “did not let his strong principles sideline him from creative policymaking or bipartisan cooperation,” Mr. McConnell said. “Tom’s convictions did not drive him away from the table. They inspired him to become a central player.” Dr. Coburn was intelligent, diligent and utterly without self-regard. If you made the mistake of calling him “Sen. Coburn,” he might correct you. “Dr. Coburn” was fine.
• Charles Hurt can be reached at churt@washingtontimes.com or @charleshurt on Twitter.
Bitcoin’s Scalability: SegWit and Lightning Networks The obvious need for Bitcoin to scale is just that: obvious. If you’ve been following the cryptocurrency scene, you have probably heard of Segregated Witness and Lightning Networks, but probably not what they actually are, with both their strengths and weaknesses. A Bumpy Roadmap of Solutions Bitcoin Core developers have long maintained a scaling roadmap, and back in 2015, Segregated Witness was first introduced. The protocol's developer, Pieter Wuille, proposed that a soft-fork update implementing SegWit would suffice to increase the effective block size limit and give Bitcoin a scalability solution. An increase in block size, however, does not sit well with many Bitcoin miners. Many decentralists like Bitcoin at its current settings. They fear that increasing transactions per second (tps) or block size would give rise to centralized mining, creating oligarchy-like pools that can dominate the little pools. Decentralists also worry that such protocols could decrease the anonymity of the cryptocurrency, going against a primary operating principle of Bitcoin. On the other end of the spectrum, proponents of the increase - let's call them blockgressives - are sure that a larger block size will reduce Bitcoin's volatility; the price may settle lower, but the currency will become far more stable. Blockgressives and decentralists both have fair arguments, but the currency has to move in some direction or it will eventually fall behind - and with Litecoin having already implemented SegWit in its core code, it just goes to show how willing cryptos are to experiment. A completely different point of view came from Bitcoin Core developer Luke-jr, who claimed that an increase in block size isn't even needed - the surprising third position!
In a Reddit comment, he said that if we put aside all the inefficient and microtransaction usages, a block size of 500k would suffice. There are many opinions to weigh, but SegWit and Lightning Networks still remain a mystery. Meanwhile, Bitcoin's mempool reaches 110 MB, leaving us confused, thinking: So... is this an issue or not? Barely Explain Like I'm Five: Segregated Witness There are two Bitcoin transaction fields that interest us when it comes to understanding SegWit:

scriptPubKey - Public Key Script
scriptSig - Signature Script

Script is a language developed by Satoshi Nakamoto himself, a stack language that executes from left to right in a Last In, First Out (LIFO) manner. The signatures it carries, although individually small, take up a surprising amount of space in a block. According to Pieter Wuille, the developer of SegWit, digital signatures account for about 65% of the space in transactions. The entire idea of SegWit is to segregate transaction signatures - taking the data from the signatures, bundling it, and attaching it only at the end of a transaction - while also raising the effective block capacity from 1 MB toward a theoretical 4 MB. Not only does SegWit claim to increase the throughput of the entire Bitcoin network, it also nullifies some risks that come with Bitcoin transactions, such as transaction malleability, where the recipient modifies the sender's transaction ID before confirmation. Although SegWit is a very useful protocol, and as of 07/01/2017, 40% of the community had signaled a willingness to support SegWit (the requirement is 95%), there are many issues around its implementation that the community simply cannot agree on. Many people believe that the trade-off is between the increase of transactions per second (tps) and decentralization. The increase of block size could potentially increase the tps (could, because if traffic keeps increasing there will eventually be congestion; it's only a matter of time), but it could also lead to a centralization of block mining.
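Setting the trade-off debate aside for a moment, the capacity arithmetic behind SegWit is simple. Under BIP 141, each non-witness byte of a transaction counts as 4 weight units and each witness (signature) byte as 1, with a block cap of 4,000,000 weight units. A minimal sketch of that accounting follows; the 88/162-byte split of a hypothetical 250-byte transaction is invented for illustration, not real data:

```java
// Sketch of BIP 141 weight accounting. The example byte counts are made up.
public class SegWitWeight {
    static final long MAX_BLOCK_WEIGHT = 4_000_000; // consensus cap after SegWit

    // Weight = 4 * base (non-witness) bytes + 1 * witness bytes.
    static long weight(long baseBytes, long witnessBytes) {
        return 4 * baseBytes + witnessBytes;
    }

    // Virtual size in vbytes, rounded up: the unit fee estimation uses.
    static long vsize(long baseBytes, long witnessBytes) {
        long w = weight(baseBytes, witnessBytes);
        return (w + 3) / 4;
    }

    public static void main(String[] args) {
        // Hypothetical 250-byte transaction where ~65% is signature data.
        long base = 88, witness = 162;
        System.out.println("weight = " + weight(base, witness)); // 514
        System.out.println("vsize  = " + vsize(base, witness));  // 129
        // Roughly how many such transactions fit in one block:
        System.out.println("per block ~ " + MAX_BLOCK_WEIGHT / weight(base, witness));
    }
}
```

Because witness bytes are discounted 4:1 against the cap, moving signature data into the witness is what lets a post-SegWit block carry more than 1 MB of raw transaction data without a hard fork.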
This is because larger blocks take longer to transfer between nodes, which could give a head start to larger pools that could essentially outgrow all other competitors. An increase in size from 1 MB to 4 MB would also burden full nodes far more than at present (the blockchain is already past the 100 GB milestone): running a full node consumes your bandwidth, CPU power, and a sizeable amount of disk space, all of which increase in demand with the possible increase in block size. The other issue is that SegWit is primarily concerned with authenticating transactions, skipping many signature-protocol "checkpoints", which can lead to issues with tracing back all the transactions inside the Bitcoin blockchain - a key feature that allows us to find the genesis of every Bitcoin ever spent. This protocol is complicated. Many of the users who vouch for it agree that it is a great solution for stabilizing the Bitcoin value and making it more accessible; some disagree and say the complete opposite. Both sides have their arguments, and Bitcoin development is at a "halt", but the network is growing very fast and requires a solution just as fast. Lightning Networks: Bunching Signatures Up Lightning Network is another promising protocol, one that can open the door to a new type of transaction on the Bitcoin blockchain: microtransactions. Although the fee for a Bitcoin transaction is usually very low, these fees cripple microtransactions completely. Lightning Networks promise a solution through the use of payment channels. A payment channel opens when two or more parties want to transact. Transactions start, are processed, and end within the payment channel, and they follow the standard procedures of signature confirmation with all the data needed to authenticate them. Though the process works much as it would on the Bitcoin blockchain itself, the payment channels are much faster than the normal network.
The Lightning Network then groups all the transactions done within the payment channel, computes a net total from the back-and-forth trading, and sends only that final summed transaction to the blockchain. This saves the space that the many little transactions would otherwise occupy in a block, leaving room for bigger transactions. The protocol also promises to monitor the blockchain for potential double-spenders, making it virtually impossible to double-spend any amount of money, since any attempt is met with refunds. Lightning Network would also allow different blockchains to communicate with one another, which could lead to many interesting projects. Although these features are very interesting, there are many uncertainties around them. First, the promise of a potentially "unlimited" tps does not mean that the network will never face issues again. If Bitcoin wants to take on the throughput of Visa, payment channels have to be opened and closed at incredibly fast rates. This could lead to large network congestion that the Bitcoin blockchain is not ready to handle (yet?). Furthermore, Lightning Networks promise to solve some security flaws very easily. Almost too easily. The double-spending protection feature is one of them: Lightning Network promises a way of monitoring the blockchain constantly, 24/7, to protect against potential double-spenders, but such a feature doesn't exist for now. The problem with payment channels is that they could bring more issues than they solve. For example, a user with malicious intent could rebroadcast an old channel state, advertising it as current, and could possibly send more coins to himself from your personal wallet. Finally, Lightning Network could lead to centralization, since people are required to send money to one another within a channel. Which begs the question: how many people have to partake in a channel for it to be effective?
Which leads to the main dilemma: do third parties have to partake in each payment channel, effectively making Lightning Network a bank? Protocols: The Good, the Bad and the Conclusion These protocols sound ingenious - and in many ways they are - but the loopholes behind them cause a lot of predicaments within the Bitcoin community, and until the miners and nodes are satisfied, we won't be seeing any implementations. Promises of unlimited throughput, higher security, and lower costs are met with the exact opposite when analyzed further, which is why Bitcoin doesn't have one simple answer to everything.
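To make the payment-channel idea above concrete, here is a toy sketch - emphatically not the real Lightning protocol (no signatures, no HTLCs, no penalty transactions) - showing why netting many off-chain payments into one settlement saves block space. All names and amounts are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of a payment channel: many off-chain balance updates,
// a single on-chain settlement at close. Not the actual Lightning protocol.
public class ToyChannel {
    private final Map<String, Long> balances = new HashMap<>();
    private int updates = 0;

    public ToyChannel(String a, long fundA, String b, long fundB) {
        balances.put(a, fundA); // the funding transaction is the only opening on-chain tx
        balances.put(b, fundB);
    }

    // Off-chain payment: both parties just agree on a new balance sheet.
    public void pay(String from, String to, long amount) {
        if (balances.get(from) < amount)
            throw new IllegalStateException("insufficient channel balance");
        balances.merge(from, -amount, Long::sum);
        balances.merge(to, amount, Long::sum);
        updates++;
    }

    // Closing: only the final net balances ever hit the blockchain.
    public Map<String, Long> close() {
        System.out.println(updates + " payments settled with 1 on-chain transaction");
        return balances;
    }

    public static void main(String[] args) {
        ToyChannel channel = new ToyChannel("alice", 100, "bob", 0);
        channel.pay("alice", "bob", 30);
        channel.pay("bob", "alice", 10);
        System.out.println(channel.close()); // prints the final balances map
    }
}
```

Only the funding transaction and the result of close() would ever touch the chain; the intermediate pay() calls are the "free" microtransactions the article describes.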
Memories Official media Telekinesis Fanstuff Telepathy The site Cameos: The Armored Episodes In the lead up to the first Pokémon movie, Mewtwo Strikes Back, Mewtwo played a role in the anime itself. Over a span of three episodes, we see a mysterious armor-clad form with cold glowing eyes mercilessly act as Giovanni's unstoppable weapon, his rarest and most prized Pokémon, before this supposedly evil creature destroys the Team Rocket Headquarters and flies away without any further explanation. It was not until the movie when fans discovered this Pokémon's true identity and the lengths he would go for revenge. Episode 63: Battle of the Badge This episode heavily foreshadowed The First Movie, showing Gary and Ash's attempts to win the Earth Badge, with Giovanni, the Viridian Gym leader, still in possession of his prized super-clone. Gary had the misfortune of facing the unidentifiable armor-clad Mewtwo, losing in seconds, and blacking out (along with his entire cheer-leading squad). Gary: It's here. Ash: What happened? What's here? Gary: A Pokémon that we've never seen... did this... There's something different about this one. This Pokémon's not just powerful, it's evil. Ash: Evil? There can't be an evil Pokémon... English dub of Battle of the Badge This dialogue marks the first efforts of 4Kids to turn Mewtwo into a flat, unsympathetic, and eeeevil character, no matter how forced it is. The anime had previously made the point that there really is no such thing as an evil Pokémon -- Pokémon merely want to make their trainer happy. No, let's not wonder if Mewtwo is brutal because of a tragic past, or because of the influence of his heartless trainer. No, "there's something different about this one," Mewtwo's "not just powerful, it's evil." Gary can't accept his loss to the point of claiming that Mewtwo must be cloaked in the evil power of Satan to have been able to have defeated him, ooo. What a sore loser, Gary. 
I knew this dialogue obviously wasn't what was said in Japanese, but had no way of knowing what was actually being said. Thanks to Dogasu's Backpack for revealing the mystery to me: Shigeru and Satoshi merely talk about how the mysterious Pokémon was so strong that his Pokémon "couldn't do a thing against it." Anyway, after Gary's complete annihilation, Ash goes in to attempt to win an Earth Badge. Unlike Gary, Ash has the stupid dumb luck of not entering the gym until after Giovanni and Mewtwo leave on some urgent Team Rocket business, so the twerp simply needs to battle Jessie and James to earn his badge. Like Ash hasn't already beaten them like sixty times before. Gary watches in disbelief (probably at how unfair this all is). This episode also reveals that Giovanni has a portrait of Mewtwo's sprite in his office. It appears both in Team Rocket's daydream of delivering stolen Pokémon to the boss and in real life when Jessie and James do manage to bring Giovanni Misty's Togepi (which doesn't go over well). Episode 64: It's Mr. Mime Time During this episode, Giovanni is still away on official business. Jessie, James, and Meowth contact the boss at a Team Rocket outpost via giant-screen webcam, groveling for forgiveness, since they managed to blow up the Viridian City Gym (well, technically it was Togepi's fault). Their call appears to have interrupted his latest ego-stroking session with Mewtwo (visible behind him in the video feed), and he snaps that he wants rare Pokémon, not excuses. After hanging up on them, he turns to Mewtwo and continues in a self-congratulatory tone, "They can search the entire universe, but they'll never find a Pokémon as rare as you" (which is pretty ironic, given how often Jessie and James end up palling around with Mew, Mewtwo, and other legendaries...).
Episode 67: Showdown at the Po-ké Corral Early in this episode, Jessie, James, and Meowth approach Team Rocket headquarters to report on their chronic failure to capture any Pokémon. As they gaze up at the impressive building, they lose their nerve and start to run away, but suddenly the entire headquarters explodes! The three of them glimpse the blue streak of Mewtwo flying away from the destruction, shedding his armor as he zooms away. As the Team Rocket trio cautiously approaches the smoldering wreckage of what used to be their headquarters, they wonder if their boss, er, had his final blast off. But, miraculously, he and Persian climb out of the rubble completely unscathed (Persian actually looks happy, even given the circumstances). Giovanni is too distracted with obviously bigger issues to yell at Jessie and James too much, which they mistakenly and egotistically take as a vote of confidence in their abilities.
Q: How to filter an array using the binary representation of an integer argument?

A binary representation of a number can be used to select elements from an array. For example, with n = 88 = 2^3 + 2^4 + 2^6 (binary 1011000):

    array:     8  4  9  0  3  1  2
    indexes:   0  1  2  3  4  5  6
    selected            *  *     *
    result:    0, 3, 2

so the result of filtering {8, 4, 9, 0, 3, 1, 2} using 88 would be {0, 3, 2}. In the above, the elements that are selected are those whose indices are used as exponents in the binary representation of 88. In other words, a[3], a[4], and a[6] are selected for the result because 3, 4 and 6 are the powers of 2 that sum to 88. Write a method named filterArray that takes an array and a non-negative integer and returns the result of filtering the array using the binary representation of the integer. The returned array must be big enough to contain the filtered elements and no bigger. So in the above example, the returned array has a length of 3, not 7 (which is the size of the original array). Furthermore, if the input array is not big enough to contain all the selected elements, then the method returns null. For example, if n=3 is used to filter the array a = {18}, the method should return null because 3 = 2^0 + 2^1 and hence requires that the array have at least 2 elements, a[0] and a[1], but there is no a[1].
If you are using Java or C#, the signature of the function is int[] filterArray(int[] a, int n). I have this question I am trying to solve and I wrote this code:

    public static int[] filterArray(int[] a, int n) {
        int[] x = new int[a.length];
        int j = 0;
        int count = 0;
        while (n > 0) {
            int digit = n % 2;
            n /= 2;
            x[j] = digit;
            j++;
        }
        System.out.println(Arrays.toString(a));
        System.out.println(Arrays.toString(x));
        for (int k = 0; k < x.length; k++) {
            if (x[k] == 1)
                count++;
        }
        System.out.println("count is " + count);
        int[] z = new int[count];
        for (int i = 0; i < z.length; i++) {
            for (int k = 0; k < x.length; k++) {
                if (x[k] == 1) {
                    z[i] = a[k];
                }
            }
        }
        System.out.println(Arrays.toString(z));
        return x;
    }

When I try to test it with

    System.out.println(Arrays.toString(filterArray(new int[] { 0, 9, 12, 18, -6 }, 11)));

it is giving me the following output

    [18, 18, 18]

but the correct output is

    [0, 9, 18]

A: The problem is your nested loops: you're iterating over the entire input array for each element of the output array and keep overwriting the values with the last found element (step through your code with a debugger and you'll see that). To fix that, swap your loops and keep track of the "next" output index:

    int i = 0;
    for (int k = 0; k < x.length; k++) {
        if (x[k] == 1) {
            z[i] = a[k];
            i++; // advance the output index
        }
    }

That will make your code faster as well, since now you don't have O(n^2) complexity but just O(n). Also note that your method returns x (the bit array) instead of z (the filtered result); return z instead.
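Putting it together - the loop swap from the answer, plus returning z rather than x (the posted code returns the bit array) - a complete version might look like this; the class name FilterArray is just an illustrative wrapper:

```java
import java.util.Arrays;

public class FilterArray {
    public static int[] filterArray(int[] a, int n) {
        // Each set bit of n selects one index of a, so the output
        // length is simply the number of set bits.
        int[] z = new int[Integer.bitCount(n)];
        int i = 0; // next free slot in the output
        for (int k = 0; n > 0; k++, n >>= 1) {
            if ((n & 1) == 1) {
                if (k >= a.length) return null; // selected index doesn't exist
                z[i++] = a[k];
            }
        }
        return z;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(filterArray(new int[] { 0, 9, 12, 18, -6 }, 11)));
        // [0, 9, 18]
        System.out.println(filterArray(new int[] { 18 }, 3)); // null
    }
}
```

Using Integer.bitCount avoids the separate counting pass, and checking k against a.length handles the "array too small" case the problem statement requires.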
[Indications and contraindications of liver biopsy]. In each individual case the indication for liver biopsy depends on assessment of the risks relative to the potential benefits of the procedure. Besides hepatological risk factors, hemostasis and overall health problems must be considered. The benefit of liver biopsy is dependent on recognition of the pathognomonic lesion within the tissue sample obtained. Size and distribution of the different histological features and the size of the biopsy cylinder are, therefore, important determinants for the success of the examination. Whether or not a histological diagnosis may be useful for optimal management of a patient can best be judged if the clinical question has been well defined before the biopsy is performed.
Median fin function in bluegill sunfish Lepomis macrochirus: streamwise vortex structure during steady swimming. Fishes have an enormous diversity of body shapes and fin morphologies. From a hydrodynamic standpoint, the functional significance of this diversity is poorly understood, largely because the three-dimensional flow around swimming fish is almost completely unknown. Fully three-dimensional volumetric flow measurements are not currently feasible, but measurements in multiple transverse planes along the body can illuminate many of the important flow features. In this study, I analyze flow in the transverse plane at a range of positions around bluegill sunfish Lepomis macrochirus, from the trailing edges of the dorsal and anal fins to the near wake. Simultaneous particle image velocimetry and kinematic measurements were performed during swimming at 1.2 body lengths s^-1 to describe the streamwise vortex structure, to quantify the contributions of each fin to the vortex wake, and to assess the importance of three-dimensional flow effects in swimming. Sunfish produce streamwise vortices from at least eight distinct places, including both the dorsal and ventral margins of the soft dorsal and anal fins, and the tips and central notched region of the caudal fin. I propose a three-dimensional structure of the vortex wake in which these vortices from the caudal notch are elongated by the dorso-ventral cupping motion of the tail, producing a structure like a hairpin vortex in the caudal fin vortex ring. Vortices from the dorsal and anal fin persist into the wake, probably linking up with the caudal fin vortices. These dorsal and anal fin vortices do not differ significantly in circulation from the two caudal fin tip vortices.
Because the circulations are equal and the length of the trailing edge of the caudal fin is approximately equal to the combined trailing edge length of the dorsal and anal fins, I argue that the two anterior median fins produce a total force that is comparable to that of the caudal fin. To provide additional detail on how different positions contribute to total force along the posterior body, the change in vortex circulation as flow passes down the body is also analyzed. The posterior half of the caudal fin and the dorsal and anal fins add vortex circulation to the flow, but circulation appears to decrease around the peduncle and anterior caudal fin. Kinematic measurements indicate that the tail is angled correctly to enhance thrust through this interaction. Finally, the degree to which the caudal fin acts like an idealized two-dimensional plate is examined: approximately 25% of the flow near the tail is accelerated up and down, rather than laterally, producing wasted momentum, a loss not present in ideal two-dimensional theories.
A computational method to detect epistatic effects contributing to a quantitative trait. We develop a new computational method to detect epistatic effects that contribute to a complex quantitative trait. Rather than looking for epistatic effects that show statistical significance when considered in isolation, we search for a close approximation to the quantitative trait by a sum of epistatic effects. Our search algorithm consists of a sequence of random walks around the space of sums of epistatic effects. An important feature of our approach is that there is learning between random walks, i.e. the control mechanism that chooses steps in our random walks adapts to the experiences of earlier random walks. We test the effectiveness of our algorithms by applying them to synthetic datasets where the phenotype is a sum of epistatic effects plus normally distributed noise. Our test statistic is the rate of success that our methods achieve in identifying the underlying epistatic effects. We report on the effectiveness of our methods as we vary parameters that are intrinsic to the computation (length of random walks and degree of learning) as well as parameters that are extrinsic to the computation (number of markers, number of individuals, noise level, architecture of the epistatic effects).
Category Archives: Fall 2014 In early 2013, the Board and subsequently the membership voted to change the Bylaws to combine the Nominations Committee and the Elections Committee into one committee. Thus this year was the Nominations and Elections Committee's inaugural year. The first task of the Committee was to determine who should be nominated for the positions of Vice-President/President Elect, Treasurer, and Director. After requesting nominations from the membership and brainstorming ideas for potential candidates, the Committee presented the following slate of candidates to the Board for approval:

Vice-President/President Elect: Debbie Ginsberg and Julie Pabarja
Treasurer: Stephanie Crawford and Valerie Kropf
Director: Jesse Bowman and Robert Martin

The Board approved the slate and approved that the election take place between Tuesday, February 18th and Friday, March 14th. The Committee, working with AALL, got the voting credentials set up for all of the eligible voters in time for the start of the election. There were no CALL members without e-mail addresses, so no paper ballots were produced. The election began as scheduled on February 18th and ended as scheduled on March 14th. A few members did not receive their e-mail from AALL with their voting credentials; however, those issues were resolved quickly and the members were able to vote. Of the 266 members eligible to vote, 144 cast ballots and elected the following candidates:

Julie Pabarja, Vice-President/President Elect
Stephanie Crawford, Treasurer
Robert Martin, Director

The Nominations and Elections Committee members included Denise Glynn, Ramsey Donnell, Joan Ogden, Kathleen Bruner, Lenore Glanz, Lyonette Louis-Jacques, and Susan Retzer. A very special thank you to Joan Ogden – she is a wealth of information and frankly the knowledge keeper of all things related to the CALL election. We would not have had such a successful election without her wonderful expertise.
This year we had a 54.1% voting rate. In years past the rates have been:

54.7% in 2012/2013
45.4% in 2011/2012
46.5% in 2010/2011
51.9% in 2009/2010
47.8% in 2008/2009

Hopefully next year we can have an unprecedented turn-out to choose our new Board members from a wonderful slate of dedicated CALL members! The Placement & Recruitment Committee continued utilizing the job posting procedure which was updated during the 2012-2013 membership year. The procedure successfully serves members, employers, and job seekers, and enables an efficient process for committee use. Employers fill out a simple web form on the CALL website. The committee member who receives the form confirms the posting and uploads it to the website, usually within a day. The procedure also triggers an email to the listserv. When requested by an employer submitting a job opening, the committee member sends an announcement to nearby library schools. This helps connect CALL with library students and recent alumni of graduate library programs. Using this procedure, the committee posted 30 jobs on the CALL website and listserv during the membership year. One enhancement to the Jobs Form this year was the addition of a print button, which allows each job position to be easily printed. The print feature was added through the efforts of Debbie Ginsberg of the Public Relations Committee. The committee also collaborated with the Public Relations Committee in updating the CALL Brochure. Additionally, the Committee was contacted by Ms. Kara Dunn, an attorney interested in law librarianship who wanted a "Day in the Life: Law Librarian" experience as promoted on the CALL Jobs and Careers web page. A member of the committee met with her and was able to connect her with a law librarian from Northwestern University. It was challenging to find somebody who was willing to meet with Ms. Dunn and, by extension, arrange for a shadowing opportunity.
It is suggested that the CALL Board deliver some sort of message to the membership that part of their professional responsibilities is to try, as much as possible, to make themselves available to individuals who are considering entering the profession.

Projects for 2014-2015
- Participate in local college and university career
- Work with the Continuing Education Committee to develop a CALL program on the profession of law librarianship for library science and other interested
- In collaboration with the Education Committee, develop an education program relating to placement or recruitment.

Meetings In the 2013-2014 year, the Committee conducted two in-person status meetings, in addition to an online meeting. The in-person meetings were well received, as they better facilitated discussion and brainstorming among the members regarding Committee plans and objectives. Following the meetings, the Committee members were provided a meeting summary with action items. Public Relations The Committee continued to seek opportunities to promote CALL and its members to the public and through AALL. CALL at AALL/AALL Marketing Award Member Scott Vanderlin set up a unique and engaging table display at AALL's 2013 Annual Meeting. The display, "Call CALL in the Exhibit Hall," featured a phone which attendees used to listen to pre-recorded information about CALL. The table was submitted for an AALL Excellence in Marketing Award, Best Use of Technology, and was selected as the winner. Scott will also design the table for the 2014 Annual Meeting. Brochure With the help of member Carolyn Hersch and Membership Committee member Karl Pettit, the CALL Brochure was revised for the 2014 AALL Annual Meeting. The new brochure is informative yet visually engaging. Law Day Working with outgoing Director Pamela Cipkowski, CALL put an ad in the Chicago Daily Law Bulletin for Law Day 2014 (May 1).
Pictures at Events Member Emily Barney and other members took many pictures at CALL business meetings in 2013-14. The pictures have been useful for projects such as the new Brochure. CALL also now has a Flickr account to help share these pictures and promote their use. CONELL/Give-Away CALL will participate in the CONELL Marketplace at the 2014 Annual Meeting. Members will be on hand to answer questions as well as give away brochures and lip balm. Plans for the Future Over the 2014-15 year, the Committee plans to work more closely with AALL, providing the organization with more information about chapter events and members. The Committee will also reach out to the Meetings Committee to help promote events and speakers. The Committee will also work more to promote relevant events at individual libraries and member happenings. Social Media Facebook/Twitter The Committee is currently engaging with the CALL membership through the social media platforms Facebook (79 Likes), Twitter (64 Followers) and LinkedIn (76 Members). This year, the social media efforts were focused on improving user engagement on Facebook. Through Facebook Insights, we were able to determine that the posts that users engaged with the most were announcements for the CALL business meetings. Our Facebook fans also responded well to articles on the law librarian profession. Other posts that had a large reach discussed current news and developments related to the Association, including CALL award winners, AALL election results and the AALL Annual Conference. In the 2013-2014 year, the Committee unlinked the CALL Twitter account from the Facebook account. This allowed us to generate more Twitter friendly content, which entailed limiting our posts to 140 characters or less. The Committee aims to use Twitter as a platform to interact with CALL members, potential members and the law librarian community. 
To achieve this goal, the CALL Twitter page has begun following the feeds of individual CALL members, law school libraries, and professional library associations. Website Archives The Board requested that the Committee research how other chapters archive their websites. From what the Committee was able to learn, other chapters do not have procedures to formally archive their sites. Website Updates The Committee added a Grants/Awards section to better promote these important activities. While no longer in the navigation menu, the CALL website continues to promote "Finding IL Law" through a prominent side button. The Committee upgraded the CALL WordPress installation to version 3.9 to ensure that the site remains secure and up to date. Plans for the Future The Committee will solicit member feedback on the website's usability. We will also work with the Bulletin Committee to help create a more web-friendly CALL Bulletin. Listserv Member Kara Young monitored CALL's listserv and made changes as needed. Kara will serve as co-chair in 2014-15. Often, whether in a law school or law firm setting, the challenge is finding enough time to get everything done. In particular, as summer's leisurely stroll winds down and fall accelerates into full sprint mode, you may wish you had more time in your day.
Let x = q - n. Which is the nearest to -2/3? (a) -5 (b) x (c) -1 c Let y = -15.3 - -18.3. What is the nearest to y in 2, 1, -0.2? 2 Let o = 3.585 - 3.885. Let g = 6 - 4. Let v = g + -2.1. What is the closest to v in -2, o, 6/11? o Let h = 933 + -931. Which is the nearest to 1/5? (a) -2 (b) -6 (c) h c Let n be (-56)/(-18) + (-19)/171. What is the closest to -1/10 in n, 3/7, 4, 0.4? 0.4 Let f = -2717 + 2716. Suppose 4*a - 12 = -3*d, 0*a + 6 = -5*d + 2*a. What is the nearest to f in 0.4, d, 0.06? d Let o = -5.4 - -5. Let h = -20.1 - -20. Which is the closest to h? (a) -4 (b) 5 (c) o c Let l(g) = g**3 - 25*g**2 + 2*g - 45. Let d be l(25). What is the nearest to 0.2 in -0.5, d, 2, 4? -0.5 Let u be (2/2 + (-11)/4)*4. Let d be u/2*6/(-28). Which is the closest to 0? (a) -1 (b) d (c) 0 c Let x = 1651 + -1649. What is the closest to 32 in x, 5, 0.4, -1/9? 5 Let p = -806 - -807.2. Let c = -30.7 - -29. Let y = c + p. What is the nearest to -2/5 in 2/9, y, 0.2? y Let d be ((-14)/49)/((8/2)/4). Let f = -0.5 - 0.3. Let l = f + 1. Which is the closest to 0? (a) d (b) -2 (c) l c Let r = -198/5 + 40. Let m be (-1)/26*(7 + -3). Let j be 50/(-8)*4/20 + 1. Which is the closest to -1? (a) m (b) r (c) j c Let c(v) = -v**2 - 9*v - 22. Let q be c(-8). Let w be 18/(-9) - ((-4)/q - 2). Let o(y) = y - 10. Let u be o(8). What is the nearest to w in 0.4, 5, u? 0.4 Suppose -u + 5*x + 2 = 3*x, -u = 3*x - 2. Let n be (-2)/(84/(-45)) - (-8)/(-16). Let p = 116 - 111. Which is the nearest to 1? (a) n (b) u (c) p a Let u = -73 - -72.29. Let q = u + 0.41. Let w = -3 - -4. Which is the closest to -1? (a) 5 (b) q (c) w b Let v = 1619 + -1618.84. Which is the nearest to -1/2? (a) v (b) 0.06 (c) 2 b Suppose -v - 4*p - 2 = -10, v + 5*p - 9 = 0. Suppose q + 4*w = -3*q - 1316, -v*q - 1308 = 2*w. Let o = 4223/13 + q. What is the closest to 0.1 in o, -1/8, 5? -1/8 Let c = 914/5 - 184. Let x be (-9)/(-8)*(-30)/135. Let y = -0.3 + 0.5. What is the closest to x in -4, c, y? y Let m = -576 + 577. 
Let l = -0.1 - -0.1. Let z = -4 + l. Which is the closest to 0? (a) z (b) -0.3 (c) m b Let p(x) = -x**3 - x**2 - 2*x - 19. Let k be
After Aam Aadmi Party accused the Bharatiya Janata Party (BJP) of being behind attacks on party chief Arvind Kejriwal, BJP on Friday hit back, saying politics in India cannot be fought with theatrics and offered security for the AAP leader. "Politics in India is not a film script and real issues of India will not be solved by theatrics," BJP spokesperson Nalin Kohli said on AAP's accusations about BJP being behind protests against Kejriwal. Another senior BJP leader Sunil Oza, a former MLA from Gujarat, said people of Varanasi would never indulge in activities like throwing eggs on anyone. "If Kejriwal is worried about his security, we can provide that through our karyakartas (workers)," Oza said. Kejriwal, who is fighting Lok Sabha polls from here against Modi, faced a barrage of stones and brickbats thrown at him by over a dozen youth shouting "Har Har Modi, Ghar Ghar Modi" slogans near the Banaras Hindu University campus on Thursday evening. He faced further protests on Friday, after which AAP accused BJP of being behind these protests and has also sought action by the Election Commission. BJP has however distanced itself from any such protests and said that it was not at all worried about Kejriwal in these elections, as he was a "mere distraction". Oza said BJP has been repeatedly appealing to the people of the city to restrain themselves and if Kejriwal is not being welcomed by people here spontaneously, he should seek police protection. "We are open to providing him our own workers to help him with his security, if he desires so," the leader said.
Species must overcome multiple barriers to successfully establish in a novel range[@b1]. One such barrier is associated with mutualistic interactions, which are predicted to limit invasion success because the new range may not contain effective mutualist partners required for initial establishment[@b2]. Legumes (*Fabaceae*) are a globally distributed and highly diverse family of flowering plants, many of which are dependent on symbiotic nitrogen-fixing bacteria for growth and reproduction[@b3]. Anthropogenic activity has introduced legumes into new regions and continents at an unprecedented global scale[@b4]. However, it remains untested whether legume dependency on symbiotic nitrogen fixation has facilitated or hindered establishment into novel ranges. Legume hosts acquire rhizobial symbionts horizontally via the environment. This could limit legume establishment following long-distance dispersal by reducing access to compatible symbiont partners[@b5][@b6] or suitable environmental conditions for efficient nitrogen fixation[@b7]. On the other hand, symbiotic nitrogen fixation has been purported to facilitate legume colonization, especially in disturbed or degraded habitats[@b8][@b9], potentially favouring the establishment of symbiotic nitrogen-fixing legumes over non-symbiotic legumes. According to these contrasting hypotheses, symbiotic nitrogen fixation could either impede or promote plant establishment in new ranges. However, we currently lack a global macro-ecological analysis to support or refute the generality of either of these claims. Morphological and molecular evidence shows that legumes (*Fabaceae*) form one large monophyletic group and that rhizobial symbiosis evolved from a single origin over 59 million years ago[@b10][@b11].
This origin was followed by multiple gains and losses of the ability to symbiotically fix nitrogen across multiple clades[@b10][@b12], allowing us to directly compare the relative prevalence of symbiotic and non-symbiotic legume species in non-native areas. Using an expert-annotated global legume distribution database[@b13] and the most comprehensive list of nitrogen-fixing trait data available[@b10], we evaluated differences in recent establishment in introduced areas between symbiotic and non-symbiotic legumes (see Methods). In total, our dataset comprises 3,213 symbiotic species and 317 non-symbiotic species. Each species record consists of its symbiotic nitrogen-fixing status and a broad characterization of its global range distribution (as a list of geographic polygons, referred to as 'regions' hereafter, describing countries, islands or states in which the species is found), and the 'native' or 'non-native' designation for each geographic region (see Methods section). We found that non-symbiotic legumes have spread to a greater number of geographic areas compared to symbiotic species at a global scale, providing evidence that symbiosis with nitrogen-fixing bacteria has limited establishment of legumes into novel islands, regions and continents.

Results
=======

The role of symbiosis in introduction success
---------------------------------------------

Within our dataset, 21.6% of all legume species occur in at least one non-native region (that is, polygon) and 15.8% occur in two or more non-native regions, confirming that many symbiotic and non-symbiotic legume species have successfully invaded or been introduced into new regions, continents and islands ([Fig. 1](#f1){ref-type="fig"}).
Our analysis of all species in the dataset at the regional level (see Methods section) shows that symbiotic legumes have a significantly lower probability of occurring in non-native regions ([Table 1](#t1){ref-type="table"}), which translates into 49.7% fewer non-native regions per species ([Fig. 2a](#f2){ref-type="fig"}). Furthermore, for successfully introduced species with at least one non-native range, we found that symbiotic legumes retain a lower probability of occurring in multiple non-native regions ([Table 1](#t1){ref-type="table"}), translating into 37.1% fewer non-native regions compared to non-symbiotic legumes ([Fig. 2b](#f2){ref-type="fig"}). We excluded the possibility that non-native symbiotic legume species simply occurred in fewer yet much larger countries ([Supplementary Note 1](#S1){ref-type="supplementary-material"}) by confirming that each non-symbiotic species had, on average, a larger total non-native range area ([Supplementary Table 1](#S1){ref-type="supplementary-material"}) and no difference in the average size of individual regions that comprise the total introduced range area ([Supplementary Table 1](#S1){ref-type="supplementary-material"}). For species occurring in more than one non-native region we also measured the degree of geographic dispersion between non-native regions and found no difference between the two legume groups ([Supplementary Table 1](#S1){ref-type="supplementary-material"}), showing that introduced symbiotic legumes do not have more or less geographically widespread non-native regions than non-symbiotic legumes. Together, our analyses show that the contrast in introduction success between symbiotic and non-symbiotic legumes was characterised by differences in the number of non-native regions.
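One simple way to quantify the geographic dispersion of a species' non-native regions is the mean pairwise great-circle distance between region centroids. The paper does not specify its exact dispersion metric, so the sketch below is illustrative only, and the centroid coordinates are invented:

```python
# Sketch of one possible dispersion metric: mean pairwise great-circle
# distance between the centroids of a species' non-native regions.
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def mean_pairwise_dispersion(centroids):
    """Mean pairwise distance; needs at least two region centroids."""
    pairs = list(combinations(centroids, 2))
    return sum(haversine_km(a, b) for a, b in pairs) / len(pairs)

# Invented centroids for three non-native regions of one species:
centroids = [(51.5, -0.1), (40.7, -74.0), (-33.9, 151.2)]
dispersion = mean_pairwise_dispersion(centroids)
```

A species whose non-native regions cluster on one continent would score far lower on such a metric than one spread across hemispheres.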
These results combined support the hypothesis that non-symbiotic legumes have a higher chance of establishing and subsequently spreading to a greater number of geographic areas ([Fig. 3](#f3){ref-type="fig"}).

Accounting for potentially confounding species traits
-----------------------------------------------------

We found that latitude of origin, size of a species' native range, plant life form (woody or not woody), life history (annual or perennial), number of human uses, and the interaction between symbiosis and number of human uses were all significant predictors of introduction success. Non-symbiotic legumes tended to occur more frequently at or near the equator in their native range ([Fig. 3](#f3){ref-type="fig"}; ref. [@b14]). However, our analysis shows that latitudinal bias in introduction success favours legume species naturally occurring away from the equator ([Table 1](#t1){ref-type="table"}), indicating that biased dispersal related to latitudinal effects would favour symbiotic rather than non-symbiotic legumes. Total native area was a significant factor in predicting the prevalence of establishment in non-native regions ([Table 1](#t1){ref-type="table"}), but we found no difference in total native range areas between symbiotic and non-symbiotic species ([Supplementary Fig. 4](#S1){ref-type="supplementary-material"}). While woodiness was the dominant life form in non-symbiotic legumes ([Supplementary Fig. 4](#S1){ref-type="supplementary-material"}), being woody did not always predict introduction success. Specifically, while woody plants had a higher probability of occurring in non-native regions among all species in the dataset, when the analysis was restricted to introduced species, woody plants had a significantly lower probability of being introduced into multiple non-native regions ([Table 1](#t1){ref-type="table"}).
Furthermore, annual species were more likely to establish in non-native regions ([Table 1](#t1){ref-type="table"}), but only 0.8% of non-symbiotic species are annual ([Supplementary Fig. 4](#S1){ref-type="supplementary-material"}). In total, our analysis shows that while total native area, latitude, plant life form and life history are important (as other studies have also shown[@b15]), their effects do not eliminate the symbiosis trait as a key determinant of establishment in novel ranges. The geographic area of a region had no effect on the prevalence of non-native species within it ([Table 1](#t1){ref-type="table"}). This likely reflects limited variation in area between our regions ([Supplementary Fig. 6](#S1){ref-type="supplementary-material"}), combined with other factors being much more important for successful introduction, such as the amount of trade a nation receives (variation due to such unknown factors would be reflected in the region-level random effect).

The role of human use in successful introductions
-------------------------------------------------

We found that ∼30% of species in our dataset had at least one human use, and that legume species with more uses are much more likely to establish in non-native regions ([Table 1](#t1){ref-type="table"}). Species with human uses may be more likely to establish due to more frequent intentional introduction attempts (i.e., higher human-mediated propagule pressure), which may mask or confound differential establishment patterns driven by the symbiosis trait itself. Our analysis accounts for this potential bias by including the number of human uses as a covariate (along with its interaction with symbiosis), and then statistically evaluating the main effect of symbiosis at no (that is, zero) human uses (this is important because of the presence of the interaction[@b16]).
The main effect of symbiosis ([Table 1](#t1){ref-type="table"}) thus evaluates any differences in non-native establishment patterns that are least likely to be impacted by human-mediated propagule pressure. After accounting for the number of human uses in this manner, we found that non-symbiotic legumes are still much more likely to establish in non-native regions ([Figs 2](#f2){ref-type="fig"} and [4](#f4){ref-type="fig"}; [Table 1](#t1){ref-type="table"}). We also found a significant interaction between symbiosis and the number of human uses, suggesting that human use does influence the successful introduction of symbiotic versus non-symbiotic legumes. We predicted that if human uses were exacerbating the spread of non-symbiotic legumes over symbiotic legumes (above the disparity observed at no human uses), we should observe a negative interaction in our main model. However, we found a positive interaction, indicating that the disparity between symbiotic and non-symbiotic legumes decreases, rather than increases, with the number of human uses ([Fig. 4](#f4){ref-type="fig"}). These results combined provide evidence that human-mediated propagule pressure is not generating the pattern that non-symbiotic legumes are more prevalent in non-native regions.

Influence of phylogenetic history on introduction success
---------------------------------------------------------

The ability to symbiotically fix nitrogen has been gained and lost multiple times across the legume phylogeny, although the trait is more concentrated in certain clades ([Supplementary Fig. 1](#S1){ref-type="supplementary-material"}). If the pattern of successful introductions across species also shows strong phylogenetic structure, it is possible that our results could reflect a lack of independence among the species we used in this study due to their shared evolutionary history with respect to other predictive yet unmeasured traits.
However, the probability of having a non-native range has no phylogenetic signal (phylogenetic parameter alpha=31.27; [Supplementary Note 2](#S1){ref-type="supplementary-material"}). When we incorporated phylogenetic structure into our analysis, which included all covariates from our main analysis ([Supplementary Note 2](#S1){ref-type="supplementary-material"}), non-symbiotic legume species still had a significantly higher probability of establishing in non-native regions ([Supplementary Table 2](#S1){ref-type="supplementary-material"}), consistent with the results of our main analysis. Overall, these analyses indicate that the increased ability of non-symbiotic legume species to establish in a greater number of non-native ranges was not driven by phylogenetic dependence, and make it unlikely that our results can be explained by another trait that is evolutionarily correlated with symbiosis and also influences introduction success.

Discussion
==========

In summary, our findings clearly support the argument that nitrogen-fixing legumes are highly dependent on the symbiosis and that this dependency is sufficiently large to generate dispersal or establishment barriers at a global scale across multiple legume species, regions and continents ([Fig. 3](#f3){ref-type="fig"}). Inadequate densities of compatible rhizobia, or a lack of appropriate environmental conditions for effective nitrogen fixation, in introduced ranges are viable explanations for our results. This would suggest that symbiotic legume hosts and their compatible symbionts would frequently need to be introduced simultaneously, or that introductions would favour legumes that are able to form associations with a broad diversity of rhizobia[@b17][@b18], which is consistent with empirical findings from previous studies examining introduced *Acacia* species[@b18][@b19][@b20].
Our results are also consistent with the explanation that an evolutionary investment in the mutualistic interaction with rhizobia has resulted in reduced competitive ability of symbiotic nitrogen-fixing legumes[@b21], relative to non-symbiotic legumes. This explanation would require that the fitness cost of harbouring the symbiosis trait is higher in non-native ranges relative to the native range. Changes in soil resource availability have been proposed as a mechanism to alter the cost-benefit ratio of plant nitrogen fixation[@b21], and human activity is often associated with increased nutrient deposition[@b22]. However, experimental evidence has shown that invasive nitrogen-fixing plants have greater growth in fertile soils compared to co-occurring invasive non-fixing plants[@b23], suggesting that symbiotic nitrogen fixation is a net fitness benefit in non-native ranges, rather than a cost. Our study also highlights the prominent effects of propagule pressure, mediated by increased intentional introductions associated with the human use attributes of species. The number of human uses for a species was a powerful predictor of the prevalence of successfully introduced ranges. This is consistent with a number of other studies that found human use to be important in predicting establishment of plant species outside their native range[@b24][@b25], including legumes[@b4][@b26][@b27]. Among highly useful species, we found that the difference in introduction success between symbiotic and non-symbiotic legumes lessens and eventually reverses ([Fig. 4](#f4){ref-type="fig"}). This suggests that if species are highly useful, any natural establishment barriers among symbiotic legumes (that is, lack of mutualist partners) may be overcome through increased human effort to intentionally introduce a species (for example, increased effort to inoculate new sites lacking rhizobia).
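How a positive symbiosis-by-uses interaction can shrink, and eventually reverse, the gap observed at zero human uses can be sketched numerically. The coefficients below are invented for illustration and are not the fitted values reported in Table 1:

```python
# Illustrative logistic-model arithmetic: with an interaction term, the
# main effect of symbiosis is the group difference at zero human uses.
from math import exp

def expit(z):
    """Inverse logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + exp(-z))

# Hypothetical coefficients (not the paper's estimates):
alpha   = -3.0   # intercept
b_ss    = -1.0   # main effect of symbiosis (SS = 1 if symbiotic)
b_use   =  0.8   # effect of each additional human use
b_inter =  0.3   # positive symbiosis x uses interaction

def p_nonnative(ss, uses):
    """Probability of occurring in a given non-native region."""
    return expit(alpha + b_ss * ss + b_use * uses + b_inter * ss * uses)

# At zero uses the gap is governed by b_ss alone; with a positive
# interaction it narrows as uses increase, and can flip sign:
gap_at_0 = p_nonnative(0, 0) - p_nonnative(1, 0)  # positive
gap_at_5 = p_nonnative(0, 5) - p_nonnative(1, 5)  # smaller, here negative
```

With these invented numbers, non-symbiotic species have the higher probability at zero uses, while at five uses the ordering reverses, mirroring the qualitative pattern in Fig. 4.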
It is possible that after human intervention has allowed a symbiotic legume species to overcome its establishment barriers, the benefits of nitrogen fixation then allow it to be more successful. However, the majority of species examined in this study have no recorded human uses, so the effect prevailing at low human uses is of particular importance for legume species as a whole. Transplant trials have shown that many legumes rely on soils that are pre-inoculated with compatible microbial symbionts to establish[@b28][@b29] and that soils are highly variable in the abundance[@b30][@b31][@b32] and inter-continental genetic structure[@b33] of compatible rhizobia. However, symbiotic nitrogen fixation has also been implicated in facilitating the invasion of some of the most widespread and problematic legume species of the world[@b30], giving the appearance that compatible rhizobia are cosmopolitan in their distribution. Based on these isolated observations and studies, it has been difficult to establish a general pattern with respect to the role of symbiosis in limiting or facilitating legume establishment[@b34]. Though the size of our analysis and its global extent are unprecedented, we acknowledge that only a fraction of the estimated ∼19,000 legume species have been characterized for their symbiosis ability (∼20% of species and ∼60% of legume genera[@b35]). Nevertheless, based on an examination of ∼3,500 legume species, our study reveals that symbiotic mutualism traits are important in predicting the introduction success of legumes across multiple continents and islands. Our study further highlights likely ecological costs associated with being a nitrogen-fixing species, and the potential for plant species distributions to be influenced by soil microbial biogeography at a global scale.
Methods
=======

Experimental design
-------------------

The objective of our study was to investigate whether symbiosis with nitrogen-fixing bacteria had any predictive power with respect to legume introductions globally. We compiled symbiotic nitrogen-fixation data and geographic distribution data matched to as many legume species as possible (see below). We measured introduction success by counting the number of non-native regions of occurrence for each available species. We then analysed the predictive power of the symbiosis trait on introduction success in the presence of other potentially confounding or correlated covariates that might be important in predicting legume introductions. Once we verified that the significance and direction of response for symbiosis remained after the inclusion of other covariates, we analysed the predictive power of symbiosis while incorporating phylogenetic structure into our analyses.

Symbiotic nitrogen-fixation data
--------------------------------

Nitrogen-fixation status was extracted for all available legume species (members of the family *Fabaceae*) from the publicly available database compiled by Werner *et al*.[@b36]. This database scores each species as either 'symbiotic' or not, and was compiled from a number of experimental and observational studies examining the presence or absence of rhizobial infections on 5,427 legume species (see Werner *et al*.[@b36] for more details). For the species we used in this study, information on nitrogen-fixation status covers ∼20% of all known legume species (3,530 species out of ∼19,700) and ∼60% of all known legume genera (440 genera out of ∼750).
Since we aimed to determine whether symbiosis is a potential barrier or facilitator of initial legume establishment, we evaluated the presence or absence of the ability to symbiotically fix nitrogen as specified in Werner *et al*.[@b36]; see the original source reference list on figshare[@b37] for all references related to every species used in this study.

Global introduced status data
-----------------------------

For each legume species found in the nitrogen-fixation database, we searched the International Legume Database and Information Service (ILDIS)[@b13] and extracted geographic distribution information using a webscraper written in R[@b38]. The ILDIS geographic data were compiled by experts who synthesized regional floristic data from the primary literature. Because ILDIS data are a synthesis of recorded locations of living observations and herbarium records by flora experts, they likely encapsulate established species in a given region. In total, 3,973 legume species found in ILDIS matched the nitrogen-fixation trait database (species not found in ILDIS were discarded from the analyses). Species distribution data in ILDIS are coded using the names of hierarchically structured geographic regions (i.e., continent, region, area, and country where available), and indicate whether each region is native, introduced, or unknown, thus capturing dispersal events at both regional and continental scales. We scraped geographic data at all available hierarchies and excluded areas where the introduced status was considered unknown. The geographic names in ILDIS were based on the World Geographical Scheme for Recording Plant Distributions developed by the Taxonomic Databases Working Group (TDWG)[@b39]. We eliminated species records from our dataset containing only 'unknown' regions, giving a total sample size of 3,530 species.
The final dataset used in this study contained 317 non-fixing legumes out of 3,530 (9%), whereas the full Werner *et al*.[@b36] database contained 482 non-fixing legumes out of 5,427 total legumes (8.9%); therefore, our dataset showed no bias with respect to the proportion of non-fixers relative to all known data on legume nitrogen fixation. To convert the geographic names in the ILDIS database into usable geographic coordinate data, we downloaded a shapefile containing the standardized TDWG geographic regions as polygons[@b40], then matched these polygons to ILDIS geographic names. Where TDWG polygons and ILDIS geographic names could not be matched, we used the next available polygon in the geographic hierarchy (for example, if we could not find a polygon corresponding to an area name, we used the region name instead). Moving up to the next geographic level was only necessary for less than 10% of recorded regions and did not exceed one level. Once all geographic areas were matched to polygons, we retained only those at the lowest available hierarchical scale. Before analysis we merged any species range polygons that were touching each other and had the same introduction status, to prevent biases in the number of ranges due to finer subdivision within some areas. This way, all final polygons represent non-contiguous ranges. We checked the accuracy of the polygon occurrences by comparing our polygon data to occurrence records in the Global Biodiversity Information Facility (GBIF)[@b41]. GBIF gives higher-resolution point occurrence records, but for far fewer species (1,830 species), and invasive or introduced status is not recorded for each GBIF record. We found that, on average, ILDIS polygons for each species captured 93% of GBIF points (see [Supplementary Fig. 2](#S1){ref-type="supplementary-material"} for some example species maps).
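The hierarchy fallback described above can be sketched as follows; the place names and polygon identifiers are hypothetical placeholders, not real TDWG codes:

```python
# Sketch of matching an ILDIS geographic name to a TDWG polygon,
# falling back up the geographic hierarchy when the lowest level
# has no corresponding polygon.

# Hierarchy order, smallest unit first:
LEVELS = ["country", "area", "region", "continent"]

# Hypothetical lookup of available TDWG polygons keyed by (level, name):
TDWG_POLYGONS = {
    ("area", "Western Australia"): "poly_WAU",
    ("region", "Australia"): "poly_AUS",
    ("continent", "Australasia"): "poly_AUSTR",
}

def match_polygon(record):
    """Return (level, polygon) for the lowest resolvable level of an
    ILDIS record; the paper reports fallback never exceeded one level."""
    for level in LEVELS:
        name = record.get(level)
        if name is None:
            continue
        poly = TDWG_POLYGONS.get((level, name))
        if poly is not None:
            return level, poly
    return None

# A record whose country name has no polygon falls back to its area:
rec = {"country": "Rottnest I.", "area": "Western Australia",
       "region": "Australia", "continent": "Australasia"}
```

Here `match_polygon(rec)` resolves at the area level because no country-level polygon exists for the invented country name.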
For points that fell outside ILDIS polygons, we calculated the geographic distance to the nearest ILDIS polygon and found that most points were geographically very close to an ILDIS polygon, with no obvious bias in the distribution of geographic distances between symbiotic and non-symbiotic legumes ([Supplementary Fig. 3](#S1){ref-type="supplementary-material"}). There were no obvious differences in the presence or number of 'unknown' introduction-status polygons between symbiotic and non-symbiotic species ([Supplementary Fig. 4](#S1){ref-type="supplementary-material"}), indicating that ambiguity in polygon status did not differ between our legume comparison groups. Together, these checks indicate that the ILDIS geographic range data were reliable.

Other species trait data
------------------------

We also scraped plant life form (woody or not woody), life history (annual or perennial), and information on human uses from ILDIS, as previous invasion studies have found these to be important factors in predicting legume invasion success[@b15]. We were able to obtain plant life form for 3,500 legume species and life history for 3,462 legume species. We converted life form and life history data into two binary traits by coding life form as 1 for species that were woody (trees or shrubs) and 0 for non-woody species (herbs). Likewise, for life history, species with an annual life history were assigned a value of 1, and perennial species a 0. Life history and life form data were unavailable for ∼500 species, which would lead to a fairly large reduction in our sample size if these traits were included as covariates. To maintain full power in our models (see next section for model details), while still accounting for the potentially confounding effects of life form and life history, we imputed the missing values in these traits. We did this using a simple taxonomic imputation.
If there were other species in the same genus as a species with a missing value, its value was assigned as the mean of the life form or life history values for that genus (for example, a life form value of 0.9 would be assigned to a species in a genus with 90% woody species). If the data were missing for an entire genus, we used the mean of all species found in the same tribe in the same manner. Therefore, the life form and life history variables can be interpreted as the probability that a species is woody or annual, respectively, based on its taxonomic group. Missing values were filled at the genus level for 83% (life form) and 84.6% (life history) of the affected species, and at the tribe level for the remainder. Some genera contained a mixture of trait values, but 87.5% (life form) and 91.2% (life history) of genera could be coded greater than 0.75 or less than 0.25 (that is, more than 75% of the genus shared one trait value or the other). Most genera that contained species with missing data were entirely of one life form (73%) or life history (77%) and could be coded unambiguously as 0 or 1 based on their genus grouping. We repeated our main analysis with two other trait imputation methods (see [Supplementary Note 3](#S1){ref-type="supplementary-material"}) and our results did not change qualitatively, with all model coefficients changing only superficially and no change in levels of significance for any factor, indicating that our results are robust to several methods of imputation. Therefore, only the results using the first method described above (that is, taxonomic mean imputation) are reported. Human use data were recorded in ILDIS by specifying whether a species was known to have a use in any of 11 different use categories (Chemical products, Domestic, Environmental, Fibre, Food and Drink, Forage, Medicine, Miscellaneous, Toxins, Weed or Wood). If none of these categories was specified in ILDIS, we assumed the species had no known uses.
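The taxonomic mean imputation can be sketched as follows; the species, genera and trait values are invented for illustration:

```python
# Sketch of taxonomic mean imputation for a binary trait (e.g. woody
# vs not woody): missing values take the genus mean, falling back to
# the tribe mean when the whole genus is missing.

def impute_trait(species, trait, genus_of, tribe_of):
    """Return a dict of trait values with None entries replaced by the
    genus mean, or the tribe mean if no genus member has a value."""
    def group_mean(group_map, target):
        vals = [v for s, v in trait.items()
                if group_map[s] == target and v is not None]
        return sum(vals) / len(vals) if vals else None

    out = {}
    for s in species:
        v = trait[s]
        if v is None:
            v = group_mean(genus_of, genus_of[s])      # genus level
            if v is None:
                v = group_mean(tribe_of, tribe_of[s])  # tribe level
        out[s] = v
    return out

# Invented example: "c" is missing within a scored genus; "d" is the
# only member of its genus, so it falls back to the tribe mean.
species = ["a", "b", "c", "d"]
genus_of = {"a": "G1", "b": "G1", "c": "G1", "d": "G2"}
tribe_of = {"a": "T1", "b": "T1", "c": "T1", "d": "T1"}
woody = {"a": 1, "b": 0, "c": None, "d": None}
imputed = impute_trait(species, woody, genus_of, tribe_of)
```

The imputed values are probabilities rather than hard 0/1 codes, matching the interpretation given in the text.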
We calculated a human use covariate by counting the number of known human use categories for each species, giving a value that could range from 0 to 9 (no species in our study had all 11 use categories), which we refer to here as 'number of human uses'.

Statistical analysis
--------------------

We modelled the prevalence of successfully introduced species in regions across the world using a generalised linear mixed model (GLMM). Under the TDWG scheme, there are four nested levels of geographic range specification, from smallest to largest: the country, area, region and continent level (though the country level does not necessarily always correspond to political countries). In our data, after translating from ILDIS to TDWG, geographic range was determined to different degrees of resolution, depending on the species. Some but not all species were specified at the TDWG area level, for example. To make all species comparable, we analysed introduced ranges on a single level. All non-native ranges were specified to at least the TDWG region level or below, so we used this as our focal geographic unit during our analysis. Henceforth, we will refer to what TDWG calls the region level simply as 'regions'. There were 51 regions in total (see [Supplementary Fig. 6](#S1){ref-type="supplementary-material"} for a map of the regions used in this part of the analysis). Our response data are therefore a vector of zeroes and ones *y*~*ij*~, which can be arranged into a matrix *Y*~*ij*~ of *n*~species~ rows and *n*~region~ columns, containing a one if species *i* is found non-natively in region *j* and a zero if it is not (that is, the species is coded as one if at least one of its non-native polygons fell in the region). We were interested in testing whether symbiosis affects non-native prevalence across the globe, while controlling for several potentially biasing factors.
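The arrangement of the response data into the binary species-by-region matrix *Y*~*ij*~ can be illustrated as follows. The input format (a list of (species, region) occurrence pairs) is an assumption made for the sketch, not the paper's actual data layout.

```python
def presence_matrix(species, regions, occurrences):
    """Build the binary response matrix: Y[i][j] is 1 if species i has at
    least one non-native occurrence record in region j, else 0."""
    occ = set(occurrences)  # duplicate (species, region) records collapse
    return [[1 if (sp, rg) in occ else 0 for rg in regions] for sp in species]

species = ["A", "B", "C"]
regions = ["r1", "r2"]
# (species, region) pairs; species B occurs natively only (all zeroes)
occurrences = [("A", "r1"), ("A", "r1"), ("C", "r2")]
Y = presence_matrix(species, regions, occurrences)
print(Y)  # [[1, 0], [0, 0], [0, 1]]
```

Flattening this matrix row by row yields the response vector *y*~*ij*~ on which the binomial GLMM operates.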
Our model is the following: the non-native presence of a species *i* in region *j* is modelled as a realization of a binomial process on the probability of species *i* in region *j*, *P*~*ij*~:

*y*~*ij*~ ∼ Binomial(1, *P*~*ij*~)

The probability of species *i* in region *j* is a function of the symbiosis status of species *i* (*SS*~*i*~; equals 1 if symbiotic, 0 if non-symbiotic), a number of potential covariates (*x*~*ik*~ and *x*~*jz*~), and random effects for species (SP\[*i*\]) and region (RE\[*j*\]):

logit(*P*~*ij*~) = *α* + *β*~SS~ *SS*~*i*~ + Σ~*k*~ *β*~*k*~ *x*~*ik*~ + Σ~*z*~ *β*~*z*~ *x*~*jz*~ + SP\[*i*\] + RE\[*j*\], with SP\[*i*\] ∼ N(0, *V*~species~) and RE\[*j*\] ∼ N(0, *V*~region~),

where *α* is an intercept term, *β*~SS~ is the fixed effect coefficient determining the effect of symbiosis, *β*~*k*~ is the fixed effect coefficient determining the effect of species-level covariate *k*, *β*~*z*~ is the fixed effect coefficient determining the effect of region-level covariate *z*, and *V*~species~ and *V*~region~ are the variance parameters for the species and region random effects, respectively. We also initially included the TDWG continent-level region as a higher-level random effect (within which region was nested); however, we removed it in our final analysis as it explained almost no variation in the model (that is, its variance parameter was very close to zero). This mixed effects model with crossed random effects has a number of advantages over a simpler species-level analysis. First, it allows the inclusion of both region-level and species-level covariates. Second, the random effect for region accounts for spatial non-independence within regions of the world. In a species-level-only analysis, we would not be able to say whether our results were driven by just one or a few regions across the globe, whereas with the full mixed model, our inferences are applicable globally.
Species-level covariates (*x*~*k*~) included in the model were: the absolute latitude of the centroid of a species\' native range polygons, the total area of the species\' native range polygons, the species\' two binary life history traits (woody or not woody, and annual or perennial), and the number of human uses of the species according to ILDIS. We also included a symbiosis by number of human uses interaction, because we hypothesized that the number of human uses would reflect the probability of a species being deliberately introduced (as opposed to unintentionally), and this may affect the strength of any biological factors on introduction probability. We included one region level covariate (*x*~*z*~): the area of the introduced regions. This was to control for the possibility that larger areas may be more likely to have more introduced species simply due to a sampling effect. To fit the model, we used the lme4 package in R[@b42]. When fitting the model, all continuous covariates were mean centred (subtracted the mean such that zero corresponds to the mean) and scaled by the standard deviation, except for the number of human uses, for which zero is a biologically meaningful value, corresponding to the state at which deliberate introduction attempts should be lowest, and thus acting as the best reference point at which to evaluate other effects (see Frasier[@b16] for useful discussion). We ran two versions of the above model. The first model included all data on all species. Given the excess number of zeroes present in our analysis (that is, ∼78% species only occur natively), we ran a second model only on the species occurring in at least one non-native region to confirm similar results when zeroes were removed from the dataset. 
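The centring and scaling applied to the continuous covariates can be sketched as below. This is a minimal illustration (using the population standard deviation for simplicity; R's `scale()` divides by the sample standard deviation), not the analysis code, and the latitude values are hypothetical.

```python
def standardise(values):
    """Mean-centre, then scale by the standard deviation, so that zero
    corresponds to the mean (population SD used here for simplicity)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Absolute native latitudes (hypothetical values, in degrees)
lat = [0.0, 10.0, 20.0]
z = standardise(lat)
print([round(v, 3) for v in z])  # [-1.225, 0.0, 1.225]
```

Number of human uses would be left on its raw scale, so that a covariate value of zero keeps its biological meaning (no recorded uses) when the other effects are evaluated.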
All response variables were analysed in R[@b38] and included the symbiosis trait, latitude of origin, total native area, plant life form (woody or not woody), plant life history (annual or perennial), number of human uses, the interaction between symbiosis and number of human uses, and the area of the introduced region as predictors. We calculated the correlation between all of our species-level model predictors ([Supplementary Fig. 5](#S1){ref-type="supplementary-material"}); the highest correlation occurred between being woody and annual in our first (*r*=−0.49) and second (*r*=−0.59) model ([Supplementary Fig. 5](#S1){ref-type="supplementary-material"}).

Testing the model effects
-------------------------

We tested whether symbiosis and covariates were significant predictors of the prevalence of successful species introductions using parametric bootstrapping. We simulated 1,000 new response vectors (*y*~*ij*~) from the fitted model, using observed values of the fixed effect variables, and fixing species and region random effects at their estimated values. For each bootstrapped response vector we refit the same model and collected the fixed effect coefficients. We then calculated the 95% confidence intervals by determining the 0.025 and 0.975 quantiles of each coefficient's bootstrapped sample ([Table 1](#t1){ref-type="table"}). A fixed effect was classified as significant if its 95% confidence interval did not overlap zero. We also calculated 99 and 99.9% confidence intervals ([Table 1](#t1){ref-type="table"}).

Visualising the results
-----------------------

To translate the model results into a set of more intuitive measures for display, we again used parametric bootstrapping. To show the effect of symbiosis on the prevalence of successful introduced ranges, after controlling for all covariates, we simulated 1,000 response vectors from the fitted model, but this time setting all covariates to a value of zero (symbiosis remained at its observed values).
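The interval-and-significance step of the parametric bootstrap described above can be sketched as follows. This is an illustrative nearest-rank implementation (one of several quantile conventions), not the code used in the analysis, and the model-refitting step is omitted.

```python
def percentile_ci(samples, level=95):
    """Symmetric nearest-rank percentile interval from a sample of
    bootstrapped coefficient estimates."""
    s = sorted(samples)
    n = len(s)
    lo_idx = n * (100 - level) // 2 // 100   # e.g., n=1000, level=95 -> 25
    return s[lo_idx], s[n - 1 - lo_idx]

def significant(samples):
    """A fixed effect is classified as significant when its CI excludes zero."""
    lo, hi = percentile_ci(samples)
    return lo > 0 or hi < 0

boot = list(range(1, 1001))        # stand-in for 1,000 refitted coefficients
print(percentile_ci(boot))         # (26, 975)
print(significant(boot))           # True
```

Raising `level` to 99 or 99.9 reproduces the stricter intervals reported alongside the 95% ones.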
This procedure removes the variation explained by the covariates in a way analogous to least square means in a standard statistical analysis. We took these 1,000 sampled response vectors and calculated several summary statistics for plotting. For each bootstrap sample, we calculated the mean number of introduced ranges for symbiotic and non-symbiotic species. We then plotted the mean and 95% confidence intervals of these values based on the bootstrap samples ([Fig. 2](#f2){ref-type="fig"}).

Data availability
-----------------

The full data set, including cleaned-up ILDIS data, nitrogen-fixation data, and all covariates, is available from the authors on request. All data in their original form are available from public repositories (see Methods).

Additional information
======================

**How to cite this article:** Simonsen, A. K. *et al*. Symbiosis limits establishment of legumes outside their native range at a global scale. *Nat. Commun.* **8,** 14790 doi: 10.1038/ncomms14790 (2017).

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Material {#S1}
======================

###### Supplementary Information

Supplementary Figures, Supplementary Tables, Supplementary Notes and Supplementary References

We kindly thank the curators of the data sources used for this study, who are fully listed and acknowledged in the Methods section. We also want to thank Mathew Hill and Kylie Ireland for manuscript comments. The authors declare no competing financial interests.

**Author contributions** A.K.S. conceived of the study design. R.D. contributed data. A.K.S. and R.D. analysed the data. A.K.S. wrote the manuscript. All authors discussed the analyses, results and commented on the manuscript.

![A visual representation of the network of successful legume introductions.\ (**a**) Non-symbiotic and (**b**) symbiotic legume species.
To facilitate visual interpretation of global network patterns, each species was assigned a single native point (coloured in blue), drawn randomly from its native range, and one or more non-native points (coloured in orange) drawn randomly from each of its non-contiguous non-native ranges. Non-contiguous ranges were defined by merging all polygons whose distance was less than 5 degrees Latitude-Longitude from each other. Connecting light lines between corresponding native and non-native ranges for each species indicate that legume species have successfully established into novel regions, continents and islands. Note: Lines do not necessarily represent actual dispersal pathways, as some non-native ranges may have been colonized from an intermediate non-native range, rather than directly from the native range.](ncomms14790-f1){#f1} ![Non-native ranges between symbiotic and non-symbiotic legumes species.\ (**a**) Mean number of introduced ranges across all legume species studied (including those with no introduced ranges) \[(*n*~total~=180,030=(*n*~species~=3,530) × (n~regions~=51)\]. (**b**) Mean number of introduced ranges for legume species recorded from at least one non-native region \[(*n*~total~=41,412=(*n*~species~=812) × (*n*~regions~=51)\]. A species\' introduced range is defined by a list of geographic regions of non-native occurrence. Points and error bars represent the mean and 95% confidence intervals from parametric bootstraps, controlling for all covariates (absolute native latitude, total native area, life form, life history, area of non-native region and number of human uses).](ncomms14790-f2){#f2} ![Global proportional distribution of symbiotic legume species.\ Regions are coloured according to the proportion of symbiotic legume species in (**a**) native and (**b**) non-native ranges. Lighter colours indicate higher proportions of non-symbiotic legume species. 
Non-symbiotic legume species tend to occur primarily near the equator in their native range. Non-symbiotic legume species currently account for a higher proportion of species within introduced ranges compared with their proportion within native ranges. The figure shows that the increased spread of non-symbiotic legumes spans multiple continents and islands across the world. Grey areas indicate terrestrial ecoregions where legumes are not known to occur[@b43]. Regions are defined using the Taxonomic Databases Working Group system[@b39].](ncomms14790-f3){#f3}

![The interaction between symbiosis and human uses.\ The size of the symbiosis effect on successful introductions at different numbers of human uses, using the full dataset. Dots represent the difference in the predicted number of non-native ranges between non-symbiotic and symbiotic species, standardised by dividing by the predicted number of non-native ranges for non-symbiotic species. The standardisation controls for the differences in the predicted number of non-native ranges between different human use levels (determined by the main effect of number of human uses), and makes the interaction easier to visualize. Error bars represent the 95% confidence interval obtained through parametric bootstrapping. The size of the dots is proportional to the number of species in the dataset with that number of human uses, showing that most species had no uses recorded in the dataset (∼70%). The dotted vertical line represents the mean number of human uses across all species in the dataset (0.77). The x-axis is on a square root scale. Negative values mean the symbiotic species are predicted to have a lower prevalence of non-native ranges relative to non-symbiotic species. Apparent non-linearity is due to logit back-transformation.](ncomms14790-f4){#f4}

###### Introduction success as predicted by the symbiosis trait.
| Factor | All species: coefficient | 95% CI (lower) | 95% CI (upper) | Non-native species only: coefficient | 95% CI (lower) | 95% CI (upper) |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | −8.055 | --- | --- | −3.125 | --- | --- |
| Symbiosis? | −0.523[†](#t1-fn3){ref-type="fn"} | −0.960 | −0.437 | −0.587[†](#t1-fn3){ref-type="fn"} | −0.832 | −0.416 |
| Latitude | 0.142[†](#t1-fn3){ref-type="fn"} | 0.094 | 0.210 | 0.026 | −0.013 | 0.063 |
| Total native area | 0.194[†](#t1-fn3){ref-type="fn"} | 0.119 | 0.211 | −0.058[†](#t1-fn3){ref-type="fn"} | −0.092 | −0.038 |
| Annual? | 1.092[†](#t1-fn3){ref-type="fn"} | 0.915 | 1.267 | 0.198[†](#t1-fn3){ref-type="fn"} | 0.065 | 0.329 |
| Woody? | 0.353[‡](#t1-fn4){ref-type="fn"} | 0.084 | 0.404 | −0.476[†](#t1-fn3){ref-type="fn"} | −0.634 | −0.408 |
| Number of human uses | 0.964[†](#t1-fn3){ref-type="fn"} | 0.865 | 0.981 | 0.323[†](#t1-fn3){ref-type="fn"} | 0.282 | 0.370 |
| Area of introduced region | 0.010 | −0.049 | 0.046 | 0.019 | −0.046 | 0.051 |
| Symbiosis by human uses interaction | 0.152[†](#t1-fn3){ref-type="fn"} | 0.104 | 0.223 | 0.080[†](#t1-fn3){ref-type="fn"} | 0.042 | 0.129 |

CI, 95% confidence interval. Symbiosis is incorporated into the model as the presence or absence of the trait, with the inclusion of other factors found to predict introductions in our legume dataset. A negative coefficient indicates lower introduction success for symbiotic legumes compared with non-symbiotic legumes. We excluded non-significant interaction terms. 95% confidence intervals were obtained from parametric bootstrapping. Each response variable is modelled at the species by region level, and the model included a species and a region random effect to account for non-independence of observations within species or regions. The estimated variances for the species and region random effects respectively were 4.83 and 0.85 for the all-species model, and 0.9 and 1.43 for the non-native-species-only model.

^†^99.9% CI does not overlap zero.

^‡^99% CI does not overlap zero.

[^1]: These authors contributed equally to this work.
Canucks Roberto Luongo, NOT Mr. October! So if this guy is under .500 for October (17-18-1) since he joined the Canucks, this year, when the points mean more than ever, why not play Andrew Raycroft at least once a week? He sure could not be any worse than the current play of Luongo. Let’s see, last night 4 goals on 12 shots. Colorado game, three goals on 27 shots and Calgary, 5 goals on 23 shots. That works out to a .820 save percentage, a 4.55 GAA, and a three game losing streak to start the season. Luongo has been a slow starter since he arrived in Vancouver, not sure why, it is what it is. What have I noticed in the past four seasons? That he is getting beaten early in the game, usually within the first six shots and then it seems to go downhill from there. It seems when Luongo gets a lot of shots early and makes the saves, it allows him to get into the game and he gets stronger. Maybe that's just the way it is, maybe it's his goaltending DNA, because it was like that on most nights when he played for the Florida Panthers. Whatever the case may be, these possible six points, even this early in the season, may be the difference between making the playoffs and not, never mind winning the division. Before you fans go planning the parade route, remember that this team will have 14 straight road games leading up to, and after the Olympics. We will not know until then how much of a cushion they may have to build up before this monster takes place, but a good start was imperative. I didn’t see this coming after attending training camp and salivating over the depth. With the promising undefeated preseason, the team looked solid right from the back end to the front. But here we are again with a one line team, no secondary scoring, no one looking like they can step forward, and a goalie who wishes he could skip October. 
Back to Luongo—this was the first time in recent memory that Coach Alain Vigneault has given the hook to Roberto so early in a game, so early in the season, and in the home opener to boot. Can't say that you can blame him when your goaltender is struggling so badly. To put your club behind every game by three to four goals and expect them to come back is like raising the Titanic. Raycroft came in and was pitching a shutout until Mikael Samuelsson took that penalty. The resulting power play goal put the game out of reach, but otherwise, Raycroft looked a whole lot sharper than Louie. So what to do? You have to play Luongo to try and get him up to speed, but you surely cannot afford to get on an early season losing streak. Somewhere in this, the coach has to allow Raycroft to steady the ship until November, when Luongo can find that All-Star form that makes everyone forget October. I still think playing Raycroft once a week would accomplish that, and isn't that why he was brought in?
Q: Get index from a list where the key changes, groupby

I have a list that looks like this:

myList = [1, 1, 1, 1, 2, 2, 2, 3, 3, 3]

What I want to do is record the index where the items in the list change value. So for my list above it would be 3, 6. I know that using groupby like this:

[len(list(group)) for key, group in groupby(myList)]

will result in:

[4, 3, 3]

but what I want is the index where a group starts/ends rather than just the number of items in the groups. I know I could start summing each successive group count minus 1 to get the index, but thought there may be a cleaner way of doing so. Thoughts appreciated.

A:

[i for i in range(len(myList)-1) if myList[i] != myList[i+1]]

In Python 2, replace range with xrange.

A: Just use enumerate to generate indexes along with the list.

from operator import itemgetter
from itertools import groupby

myList = [1, 1, 1, 1, 2, 2, 2, 3, 3, 3]

[next(group) for key, group in groupby(enumerate(myList), key=itemgetter(1))]
# [(0, 1), (4, 2), (7, 3)]

This gives pairs of (start_index, value) for each group. If you really just want [3, 6], you can use

[tuple(group)[-1][0] for key, group in groupby(enumerate(myList), key=itemgetter(1))][:-1]

or

indexes = (next(group)[0] - 1 for key, group in groupby(enumerate(myList), key=itemgetter(1)))
next(indexes)
indexes = list(indexes)
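For completeness, a further standard-library variant that is not in the answers above: pairing each element with its successor via zip makes the boundary test explicit and needs no groupby at all.

```python
myList = [1, 1, 1, 1, 2, 2, 2, 3, 3, 3]

# Pair each element with its successor; index i is a boundary when the
# value at i differs from the value at i + 1.
boundaries = [i for i, (a, b) in enumerate(zip(myList, myList[1:])) if a != b]
print(boundaries)  # [3, 6]
```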
Hi! I'm Tyler Longren, a freelance web developer. I'm a father to two beautiful daughters and a car stereo enthusiast. I like PHP, JavaScript, WordPress, Git, HTML5 & CSS3, and other neat things. I really love the open source community, too. You can find me on twitter or Google+, and Github. This is my personal blog and that's it!

Include stylesheet, apply class, done.

It's been a while since Free Flat Buttons have seen any updates, but there's really nothing to update in my opinion. They serve their purpose quite well. They look very nice when paired with FontAwesome, too.

Free Flat Buttons can be found on GitHub; its repository is very simple and only includes the button stylesheet and the HTML and CSS associated with the freeflatbuttons.com website.

They're really simple to use, as they should be. All you have to do is include the CSS, <link rel="stylesheet" href="button.css">, include the FontAwesome CSS if you want it, <link href="http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet">, and then add a class to an anchor tag. For a normal sized button, something like this:

<a class="btn color-1" href="#">Color 1</a>

To make a button with rounded corners, use style-2; the other styles are listed on the freeflatbuttons.com site.

Well, now what?

Work with Me

I'm available for hire and always taking new clients, big and small. Got a project or an idea you'd like to discuss? Startup plan but no developer to make it happen? Just get in touch, I'd love to see if I can help you out!

Leave some Feedback

Got a question or some updated information relevant to this post? Please, leave a comment! The comments are a great way to get help, I read them all and reply to nearly every comment. Let's talk. :)
Q: Sort in alphabetical order a List in C

Hi, I am writing a program that manages a list of students. I am using a linked list; each element is described like this:

struct student {
    char lastname[50];
    char name[50];
    char date_of_birth[50];
    char height[50];
    struct student *next;
};

struct student *root = 0; // this is the root node

This is how I add an element to the list:

void add_element(struct student **root, char lastname[50], char name[50], char date_of_birth[50], char height[50])
{
    if(*root == 0)
    {
        *root = malloc(sizeof(struct student));
        strcpy((*root)->lastname, lastname);
        strcpy((*root)->name, name);
        strcpy((*root)->date_of_birth, date_of_birth);
        strcpy((*root)->height, height);
        (*root)->next = 0;
    }
    else
    {
        add_element(&(*root)->next, lastname, name, date_of_birth, height);
    }
}

I also wrote two functions, one to read from a file and the other to write the file; the file contains all the students. Everything works, but I need a function to sort all the elements in alphabetical order by lastname. I tried to write one, but it doesn't work; it keeps crashing. I tried a lot of things and they didn't work. This is one attempt, and it doesn't work :-( please help me

void sort(struct student *head)
{
    struct student **current;
    struct student *tmp;
    for(current = &head; *current != NULL; current = (*current)->next)
    {
        if((*current)->next == NULL)
        {
            break;
        }
        switch(strcmp((*current)->lastname, (*current)->next->lastname))
        {
            case 0:
            {
                printf("user not valid");
                break;
            }
            case 1:
            {
                tmp = *current;
                *current = (*current)->next;
                (*current)->next = tmp;
                break;
            }
        }
    }
}

A: After including remarks from comments to correct the proposed source code, the algorithm to sort the linked list is still missing some steps. At a minimum, sorting a linked list in place requires two loops, and the chosen struct student **current access would be complex to manage across two nested loops. Here is another powerful sorting function using the optimized qsort() function.
Step 1 - before showing the function: to sort the list, the root pointer shall be modifiable.

First method, as used for add_element(), by passing the address of the pointer:

void sort(struct student **root)
{
    ...
}

Second method, by returning the modified root:

struct student *sort(struct student *root)
{
    ...
    return (root);
}

Step 2 - the function sort() using the qsort() quicksort function. The method allocates a temporary array of pointers in order to have fixed-size elements to be sorted:

- a first loop counts the number of pointers to be sorted (if lower than 2, no sort is needed);
- after allocating the array of struct student *, a second loop fills the array with each item of the linked list;
- qsort() is called with the customized comparison function node_compare() (see next step);
- the linked list is restored from the sorted pointers by forcing the value of each struct student *next (the first is *root and the last points to NULL);
- the allocated array of struct student * is freed.

That's all.

void sort(struct student **root)
{
    struct student *tmp;
    struct student **array;
    int i, icount;

    // number of nodes to be sorted
    for(tmp = *root, icount = 0; tmp != NULL; tmp = tmp->next, icount++);
    if (icount < 2) {
        // no sort needed
        return;
    }
    // allocate an array of pointers
    array = (struct student **)malloc(icount * sizeof(struct student *));
    // push linked-list into array of pointers
    for(tmp = *root, icount = 0; tmp != NULL; tmp = tmp->next, icount++) {
        array[icount] = tmp;
    }
    // quicksort using node_compare() customized function
    qsort(array, icount, sizeof(struct student *), node_compare);
    // pop linked-list from array of pointers
    *root = array[0];
    (*root)->next = NULL;
    for(tmp = *root, i = 1; i < icount; i++) {
        tmp->next = array[i];
        array[i]->next = NULL;
        tmp = tmp->next;
    }
    // free the allocated array of pointers
    free(array);
}

Step 3 - the comparison function node_compare() needed by qsort(). The function shall return a signed comparison, as strcmp() does.
int node_compare(const void *a, const void *b)
{
    // restore node pointers
    struct student *node_a = *((struct student **)a);
    struct student *node_b = *((struct student **)b);

    if (node_b == NULL) return (-1); // force 'node_a' first
    if (node_a == NULL) return (+1); // force 'node_b' first
    // use the strcmp() function
    return (strcmp(node_a->lastname, node_b->lastname));
}

Enhancement - because add_element() uses a recursive call, it is not suitable for a long linked list; here is a quite simple non-recursive version. Where the recursive algorithm limits the list to a few hundred elements (stack depth), the proposed one has been tested with 100,000 elements (a 40 MB linked list), generated randomly and sorted.

void add_element(struct student **root, char lastname[50], char name[50], char date_of_birth[50], char height[50])
{
    struct student **current;

    for(current = root; *current != NULL; current = &((*current)->next));
    *current = (struct student *)malloc(sizeof(struct student));
    strcpy((*current)->lastname, lastname);
    strcpy((*current)->name, name);
    strcpy((*current)->date_of_birth, date_of_birth);
    strcpy((*current)->height, height);
    (*current)->next = NULL;
}
From left: Police Spokesperson Ruwan Gunasekara, Tourism Development Ministry Secretary Esala Weerakoon, Tourism Development and Christian Religious Affairs Minister John Amaratunga and The Hotels Association of Sri Lanka President Sanath Ukwatte. Pic by Kithsiri de Mel

But urges tourists must also hold some responsibility for their actions

THASL President concerned by thugs in coastal areas backed by local politicos

Sri Lanka yesterday pledged to protect tourists visiting the island nation as much as possible following a spate of attacks on foreigners, but stressed that the responsibility of protecting tourists falls on the tourists themselves, as well as on every Sri Lankan citizen. "We will try to protect tourists as much as possible," Tourism Development and Christian Religious Affairs Minister John Amaratunga told a press briefing yesterday. He shared this measured view after earlier guaranteeing the Sri Lankan government's ability to provide 'absolute safety' to tourists, for which he was questioned on how the government could protect tourists seeking authentic experiences off the beaten path, and whether the government was over-promising. Amaratunga had claimed in the past that Sri Lanka was the safest country in the world for tourists to visit, but recent incidents have proved otherwise. Two incidents of assault and sexual harassment of tourists were reported from Mirissa over the past two weeks, and the policing standards surrounding the sexual harassment case were cast under a spotlight yesterday, further calling into question the ability of the police, which failed to control racial violence in the Kandy district last month, an episode that drew international attention and may have adversely affected the country's tourism prospects going forward. Amaratunga, earlier this year, was condemned on social media for victim blaming, after he had said that a tourist who was raped had herself to blame, since she had gone hiking alone late in the evening.
While defending his position with regard to this past statement yesterday, Amaratunga said that tourists must also hold some responsibility for their actions which may lead to troubling situations. “Tourists should not associate themselves with unscrupulous people, take unknown tuks or unregistered vans, or stay in unregistered accommodations. It is the responsibility of the tourists to ensure their safety too,” he said. While admitting that the ultimate responsibility of ensuring the safety of a tourist lies with the Sri Lankan government, he said that the government cannot protect tourists at every location they visit, but that 11 popular tourism spots are being monitored by Tourist Police, while a further 22 locations will be monitored in the near future. Not just foreigners, but local women too are harassed on a daily basis in Sri Lanka. Meanwhile, The Hotels Association of Sri Lanka (THASL) President Sanath Ukwatte said that it is the responsibility of all Sri Lankans to be moral and create a conducive environment for tourists to feel safe in. “But tourists should also be aware of the culture in Sri Lanka. We don’t let our daughters go out alone at night,” he said. Ukwatte went on to say that tourists have been experiencing harassment and assault from locals for years. “This is not a new thing. There’s a big mafia in the south coast. Our managers get harassed if they try to take action. Even the police are helpless because the offenders are backed by local politicians. The law must be enforced, irrespective of who it is,” he said. Police Spokesperson Ruwan Gunasekara ruled out political involvement in the two recent attacks in Mirissa. Amaratunga said that he is hoping that these incidents do not point towards a pattern of discrimination against foreigners. “I hope they are isolated incidents, but I don’t know whether there is a design to bring disrepute to Sri Lanka,” he said. 
Amaratunga expressed fears that if these types of incidents continue, tourists would not return for visits to Sri Lanka. He said that the government has set a target of US$ 4.5 billion in tourism revenue for 2018, and is working towards making tourism the leading foreign exchange earner for Sri Lanka. (CW)
General Multicarrier System

In the multicarrier system described herein, one or more individual carriers are used as a group. FIGS. 1(a) and 1(b) show a signal transmission and reception method based upon a multiband radio frequency (RF). For efficient use of a multiband (or multicarrier) system, a technique has been proposed in which one medium access control (MAC) entity handles multiple carriers (e.g., several frequency allocations (FAs)). As shown in FIG. 1, one MAC layer in each of a transmitting end and a receiving end may manage several carriers for efficient multicarrier use. Here, for effective transmission and reception of the multicarrier, it is assumed that both the transmitting and receiving ends can transmit and receive multicarrier signals. Since the frequency carriers (FCs) managed by one MAC layer do not have to be contiguous to one another, the scheme is flexible in view of resource management. That is, both contiguous aggregation and non-contiguous aggregation are available. Referring to FIGS. 1(a) and 1(b), PHY0, PHY1, . . . , PHY n−2, PHY n−1 denote the multiband according to this technique, and each band may have a magnitude (bandwidth) of FA assigned for a specific service according to a predefined frequency policy. For example, PHY0 (RF carrier 0) may have a bandwidth of FA assigned for a typical FM radio broadcast, and PHY1 (RF carrier 1) may have a bandwidth of FA assigned for cellular phone communications. Each frequency band may have a different frequency bandwidth according to each frequency band characteristic. However, it may be assumed in the following description, for the sake of brief explanation, that each FA has a magnitude of A [MHz]. Also, each FA may be represented as a carrier frequency for using a baseband signal at each frequency band. Hereinafter, each FA is referred to as "carrier frequency band" or, if not ambiguous, simply as "carrier" representing each carrier frequency band.
As shown in recent 3GPP LTE-A usage, to distinguish this carrier from a subcarrier used in a multicarrier technique, the carrier may be referred to as a “component carrier.” In this regard, the “multiband” technique may be referred to as a “multicarrier” or “carrier aggregation” technique. In order to send signals via multiband as shown in FIG. 1(a) and receive signals via the multiband as shown in FIG. 1(b), the transmitting and receiving ends are each required to include RF modules for transmission and reception of signals over the multiband. Also, in FIG. 1, the configuration of the “MAC” may be decided by a base station regardless of downlink (DL) and uplink (UL). Briefly, this technique indicates that one MAC entity (hereinafter, simply referred to as “MAC” if not obscure) manages/runs a plurality of RF carriers (radio frequencies) for signal transmission and reception. The RF carriers managed by the one MAC do not have to be contiguous to each other; hence, in accordance with this technique, resource management is more flexible. In the IEEE 802.16m system, as one example of a wireless communication system, the carriers may be divided into two carrier types from the perspective of a base station: a fully configured carrier type (FCCT) and a partially configured carrier type (PCCT). The FCCT indicates a carrier by which all control information and data can be sent or received, and the PCCT indicates a carrier by which only downlink (DL) data can be sent or received. Here, the PCCT may be used for services, such as an enhanced multicast broadcast service (E-MBS), which usually provide DL data. From the perspective of a mobile terminal, assigned carriers may be divided into two types: a primary carrier type and a secondary carrier type. Here, the mobile terminal may be allocated one primary carrier and a plurality of subcarriers by the base station.
The primary carrier may be selected from the fully configured carriers. Most of the essential control information related to the mobile terminal may be sent on the primary carrier. The subcarriers may be selected from the fully configured carriers or the partially configured carriers, and may also be additionally allocated at the request of the mobile terminal or the instruction of the base station. The mobile terminal may send and receive all control information, including control information related to the subcarriers, over the primary carrier, and exchange (transceive) data with the base station over the subcarriers. Here, a subcarrier that is a fully configured carrier allocated to a specific mobile terminal may be set as the primary carrier of another mobile terminal. Multicarrier Switching Multicarrier switching indicates a multicarrier mode in which a terminal switches its physical layer connection from a primary carrier to a partially configured subcarrier or a fully configured subcarrier. Here, the carrier switching of the terminal may be performed based upon an instruction (indication) from a base station in order to receive E-MBS service at a subcarrier. After being connected to the subcarrier for a specific time, the terminal may come back to the primary carrier. While the terminal is connected to the subcarrier for the specific time, the terminal does not have to maintain transmission or reception via the primary carrier. Basic Multicarrier (MC) Mode A basic multicarrier (MC) mode indicates a mode in which a terminal operates using only one carrier. However, the terminal may support not only optimized scanning for carriers related to a multicarrier operation but also a primary carrier switching procedure. Carrier Switching Operation for E-MBS Service E-MBS service may be performed by a specific carrier (e.g., a subcarrier) other than a primary carrier.
In a connected state with a base station, an E-MBS terminal having only one transceiver (i.e., a terminal operating in a carrier switching mode) may perform carrier switching from a primary carrier to another carrier to receive an E-MBS data burst, an E-MBS configuration message and an E-MBS MAP, and carrier switching from that carrier back to the primary carrier to receive a unicast service from the base station. The E-MBS terminal may perform the carrier switching operation based upon its E-MBS subscription information, assigned by the base station to the terminal during a dynamic service addition (DSA) procedure. The E-MBS subscription information may be, for example, MSTIDs and FIDs. In an actual E-MBS environment, basic (default) E-MBS channels may be assigned (allocated) to all terminals subscribed to the E-MBS service, and the number of default E-MBS channels may be much larger than the number of specific E-MBS channels (e.g., premium channels). All the E-MBS terminals subscribe to all the default contents via default free channels. Additionally, some premium users may subscribe to premium contents. In other words, E-MBS terminals subscribed to the premium contents may stay longer on the E-MBS carrier than terminals merely subscribed to the default contents. FIG. 2 is a flowchart showing a carrier switching operation performed based upon terminal subscription information. As shown in FIG. 2, it is assumed that terminal 1 subscribes only to the default contents, and terminal 2 subscribes to the default contents and a premium content 2. It is also assumed that E-MBS data bursts 1 and 3 are data for the default contents, E-MBS data burst 2 is data for a premium content 1, and E-MBS data bursts 4 and 5 are data for the premium content 2. Referring to FIG. 2, terminal 1 may stay at the primary carrier while the base station sends E-MBS data bursts 2, 4 and 5 (S201), and terminal 2 may stay at the primary carrier while the base station sends E-MBS data burst 2 (S202).
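The FIG. 2 behavior can be sketched in Python. This is a hypothetical illustration (function and variable names are invented, not from the patent): a single-transceiver terminal tunes to the E-MBS carrier only for bursts of contents it subscribes to, and otherwise stays on the primary carrier for unicast service.

```python
# Burst-to-content mapping taken from the example above: bursts 1 and 3 carry
# the default contents, burst 2 carries premium content 1, bursts 4 and 5
# carry premium content 2.
BURST_CONTENT = {1: "default", 2: "premium1", 3: "default", 4: "premium2", 5: "premium2"}

def carrier_schedule(subscriptions, bursts=BURST_CONTENT):
    """Return, for each burst, which carrier the terminal tunes to."""
    return {b: ("E-MBS" if c in subscriptions else "primary")
            for b, c in bursts.items()}

terminal1 = carrier_schedule({"default"})              # default contents only
terminal2 = carrier_schedule({"default", "premium2"})  # default + premium content 2

# Terminal 1 stays on the primary carrier during bursts 2, 4 and 5 (S201);
# terminal 2 stays on the primary carrier only during burst 2 (S202).
print([b for b, where in terminal1.items() if where == "primary"])  # [2, 4, 5]
print([b for b, where in terminal2.items() if where == "primary"])  # [2]
```

The output matches steps S201 and S202 of the flowchart description.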
That is, the base station may allocate unicast resources to the terminals which have subscribed to the premium contents, as these have the lowest unicast scheduling efficiency. During free E-MBS service, a terminal may not need to perform a joining/leaving process at an upper layer. That is, in this case, when an E-MBS terminal starts or ends E-MBS service reception, the DSA/dynamic service deletion (DSD) process may not be performed. However, in the carrier switching mode, the base station must know whether a terminal is receiving the E-MBS service in order to perform efficient unicast scheduling. If the terminal is not receiving the E-MBS service, the base station may provide the unicast service to the terminal at the primary carrier at any time.
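The scheduling requirement above can be sketched as follows. This is a hypothetical model (class and method names are invented): the base station tracks which terminals are currently away receiving E-MBS, and schedules unicast on the primary carrier only for terminals reachable there.

```python
# Hypothetical sketch: base-station-side bookkeeping for efficient unicast
# scheduling of single-transceiver terminals in carrier switching mode.
class BaseStation:
    def __init__(self):
        self.receiving_embs = set()  # terminals currently on the E-MBS carrier

    def notify_switch(self, terminal, to_embs):
        """Record that a terminal switched to (or back from) the E-MBS carrier."""
        if to_embs:
            self.receiving_embs.add(terminal)
        else:
            self.receiving_embs.discard(terminal)

    def schedulable_for_unicast(self, terminals):
        """Terminals reachable on the primary carrier right now."""
        return [t for t in terminals if t not in self.receiving_embs]

bs = BaseStation()
bs.notify_switch("T1", to_embs=True)   # T1 switches away for an E-MBS burst
print(bs.schedulable_for_unicast(["T1", "T2"]))  # ['T2']
bs.notify_switch("T1", to_embs=False)  # T1 returns to the primary carrier
print(bs.schedulable_for_unicast(["T1", "T2"]))  # ['T1', 'T2']
```

The design choice mirrors the text: unicast on the primary carrier can be offered "at any time" only to terminals not currently receiving E-MBS.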
China has approved a USD 1 billion loan to revive a long-delayed expressway in central Sri Lanka, the island's government said today. Construction of the first phase of the road linking the capital Colombo with the hill resort of Kandy had been delayed for more than two years due to a lack of foreign funding, according to local media reports. China has emerged as the largest single lender to Sri Lanka in recent years, securing contracts to build roads, railways and ports under the former government of Mahinda Rajapakse. After Prime Minister Ranil Wickremesinghe came to power in January 2015, many projects were suspended pending investigations into corruption allegations, but construction work has recently restarted following renegotiations. Last August, China took over a loss-making deep-sea port in the island's south on a 99-year lease under a USD 1.1 billion deal. Colombo is a key hub for Indian cargo, and Beijing has been accused of seeking to develop facilities around the Indian Ocean to counter the rise of its rival and secure its own economic interests. Under Sri Lanka's former regime, China began a controversial $1.4 billion land reclamation project next to Colombo's harbour. There are plans to build a new city centre on the land, with Chinese firms set to invest another billion dollars to construct three 60-storey buildings. The project was formally launched after a visit to Colombo by Chinese President Xi Jinping in 2014 but work was suspended by the new administration.
ITV host Eamonn Holmes aired a clarification after suggesting that the Coronavirus pandemic is linked to 5G technology. Holmes previously suggested the connection was possible, saying “what I don’t accept is mainstream media immediately slapping that down as not true when they don’t know it’s not true,” and that “it’s very easy to say it is not true because it suits the state narrative.” During an April 14 broadcast, Holmes walked his comments back, saying he wanted to “make it clear no scientific evidence to substantiate any of those 5G theories.” According to Ireland’s The Journal, Holmes’ full statement read: “Both Alice Beer and myself agreed in a discussion on this very programme on fake news that it’s not true and there is no connection between the present national health emergency and 5G and to suggest otherwise would be wrong and indeed it could be possibly dangerous. “Every theory relating to such a connection has been proven to be false and we would like to emphasise that. However, many people are rightly concerned and looking for answers and that’s simply what I was trying to do to impart yesterday. But for the avoidance of any doubt I want to make it clear no scientific evidence to substantiate any of those 5G theories. I hope that clears that up now”. UK broadcast regulator Ofcom ruled earlier this month that Uckfield FM, a UK community radio station, broke broadcasting guidelines when it aired “unfounded claims” linking coronavirus to 5G. iMediaEthics has written to ITV to ask if it will air a formal correction or only Holmes’ clarification.
Last summer, Holmes made news when he called Meghan Markle “uppity” after she declined to be photographed when she appeared at Wimbledon; ITV denied having banned the word “uppity” after the incident but did apologize to a viewer.
Delta Force Raid On ISIS Prison: Helmet Cam Footage first published on October 25, 2015 by Funker New footage out of northern Iraq shows the alleged raid on an ISIS prison compound near Hawija. Helmet cam footage from the raid shows the joint special forces team escorting prisoners out of the ISIS prison camp. You can read more on this story here. One thing that can most certainly be said about this team is that they look smooth and are in definite control of every action they make. They take the term “always be cool” to a whole different level. The first thing we see as the video opens is an operator holding security down a hallway. He utilizes Wall-Body-Weapon like a true professional, until he is bumped across to continue forward. At nine seconds in, an operator’s NOD gets caught on a low-hanging wire; without hesitation he drops his muzzle and turns about to usher the rescued prisoners past him, while simultaneously untangling himself. The only thing that could have made this entire operation any cleaner would be tearing down the flag at 1:15. Props to a well-executed raid by a truly professional group of individuals. To Master Sergeant Joshua Wheeler of Roland, Oklahoma, who served 14 tours in Iraq: You will be missed, brother. Until Valhalla.
Introduction ============ Medical nursing is a field characterized by strong emotions, dominated by care for the patient and the total attention patients should always receive from the medical team, but mostly from the team's main representative in relation with the patient, the medical nurse. Learning to practice medical nursing means, among many other things, dealing with one's own states of mind and emotions in order to reach the personal and professional maturity the job requires. This, in turn, involves a high psychological effort which, sooner or later, can lead to depression, anxiety or stress. This occurs especially because learning to practice this job means, among other things, accepting death, not only in its significance as a clinical event but as a natural phenomenon, part of everyday life \[[@B1]\], and learning to face the stress which such a situation and all other daily interactions generate is a continuous challenge \[[@B2]\]. This is why the individual or organizational interventions meant to diminish stress and its effects have to be based on the knowledge and identification of the stress-inducing factors which affect the professional's wellbeing \[[@B3]\]. The Aim of the Study ==================== This study aims at investigating stress in first-year students of the Medical Nursing Department within the Medical Midwife and Nurse Faculty from Craiova, by collecting answers to the following questions: Is stress higher in female subjects than in male ones? Are there any differences regarding the level of stress as far as the graduated high school courses are concerned? If yes, are the subjects who graduated humanistic courses more stressed than those who graduated science courses? And are the women who graduated humanistic courses more stressed than those who graduated science courses? Are there any differences between those who are following the courses of a second faculty and those who are students for the first time?
Material and Method ==================== The questionnaire used for this study consists of 15 items and was elaborated based on the responses provided by a sample group of 20 subjects, first-year students of the Medical Midwives and Nurse Faculty, who answered the question: "What causes would you identify so as to consider the first year of faculty difficult?" Based on the results of the frequency analysis, the items were elaborated and then pre-tested on a sample group of another 20 subjects, students at the same faculty, with an equal distribution of the two genders. The subjects were provided with a scale from 1 to 6, where 1 is totally disagree, 2 partially disagree, 3 rather disagree, 4 rather agree, 5 partially agree and 6 totally agree. The value of the alpha-Cronbach coefficient calculated by considering all 15 items was 0.76. The t test for independent sample groups was used and an alpha below .05 was considered significant. **The Sample Group** For this research, 100 subjects were used: 28 male subjects and 72 female subjects. Of these, 44 graduated from a humanistic high school and 56 from a science high school; for 76 of them this is the first faculty whose courses they are following, while for 24 it is the second. Results ======= The effect of the biological gender variable on the stress level variable ------------------------------------------------------------------------- By applying the t test for independent sample groups, no statistically significant differences were recorded for the stress level (t = 1.863; p = 0.067; p \> 0.05); the average of the male subjects is 5.11, while the average of the female subjects is 4.86 (Table [1](#T1){ref-type="table"}).

###### The effect of the biological gender variable on the stress level variable

|                    | Male   | Female |
|--------------------|--------|--------|
| Average            | 5.11   | 4.86   |
| Standard Deviation | 0.55   | 0.72   |
| C.V. (%)           | 10.76% | 14.81% |

t = 1.863; p (Student's t test) = 0.067, NS

The effect of the graduated high school profile variable on the stress level variable ------------------------------------------------------------------------------------- The t test for independent sample groups was applied in order to verify the effect of the graduated high school profile variable on the level of stress. No significant differences were recorded (t = 1.045; p = 0.299, p \> 0.05), meaning that the subjects who graduated humanistic high school courses experience the same level of stress as the subjects who graduated science high school courses (Table [2](#T2){ref-type="table"}).

###### The effect of the graduated high school profile variable on the stress level variable

|                    | Humanistic | Science |
|--------------------|------------|---------|
| Average            | 5.01       | 4.87    |
| Standard Deviation | 0.58       | 0.76    |
| C.V. (%)           | 11.58%     | 15.61%  |

t = 1.045; p (Student's t test) = 0.299, NS

The effect of the prior university studies variable on the stress level variable -------------------------------------------------------------------------------- The t test for independent sample groups was applied in order to verify the effect of the prior university studies variable on the level of stress. No significant differences were recorded (t = 0.072; p = 0.943, p \> 0.05), meaning that the subjects who are currently following the courses of their first faculty experience the same level of stress as the students who are following the courses of a second faculty (Table [3](#T3){ref-type="table"}).

###### The effect of the prior university studies variable on the stress level variable

|                    | First faculty | Second faculty |
|--------------------|---------------|----------------|
| Average            | 4.93          | 4.94           |
| Standard Deviation | 0.73          | 0.54           |
| C.V. (%)           | 14.81%        | 10.93%         |

t = 0.072; p (Student's t test) = 0.943, NS

There are no significant differences regarding the stress level of the female subjects who graduated a high school with a humanistic profile compared to the female subjects who graduated a high school with a science profile (t = 0.409; p = 0.685). There are no significant differences regarding the stress level of the male subjects who graduated another faculty compared to the male subjects who are currently following the courses of their first faculty (t = 1.11; p = 0.289). Discussion ========== The majority of the studies performed on students, including those who study in the medical field, reveal that women have the tendency to be more stressed than men \[[@B4],[@B5]\], and the studies which recorded equal levels of stress explained this through the different ways individuals interpret the same stressing factors. For men, showing you are stressed is a sign of weakness, while for women, stressing situations have a higher negative impact \[[@B6]\]. Such is the explanation for the current case. Furthermore, in a field in which the number of women is significantly higher and the job requirements are more easily fulfilled by members of this gender, considering that women in the medical field tend to be perceived as more communicative, easier to approach and better at handling the social skills which involve maintaining relations through communication \[[@B7]\], men can face new challenges which would stretch them even more. Also, other studies have shown there are no differences regarding the stress level in men and women who study in a medical profile faculty, as the identified causes are diverse, ranging from food habits to learning strategies \[[@B8]\].
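The reported gender comparison can be reproduced from the summary statistics alone. A minimal sketch (not from the paper): with the Table 1 means, standard deviations and group sizes (28 male, 72 female), the reported t = 1.863 matches the unequal-variance (Welch) form of the independent-samples t statistic.

```python
# Reproducing the reported t value from published summary statistics.
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic from group means, standard deviations and sizes."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Table 1: male (n = 28) vs. female (n = 72) stress scores.
t = welch_t(5.11, 0.55, 28, 4.86, 0.72, 72)
print(round(t, 3))  # 1.863
```

With these group sizes the Welch degrees of freedom are roughly 64, consistent with the reported two-tailed p = 0.067.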
One of the myths on which teaching in Romanian medical faculties is founded is that students who graduated high school science classes perform better during faculty than those who graduated a humanistic high school. The basis of this assumption rested on the conviction that the number of curriculum topics which are currently classified as within the science field and are part of the program is higher than the number of humanistic topics and, implicitly, that those who are more accustomed to them and previously obtained positive results in them will also perform better during faculty. Until now, no research has studied the correlation between the two variables. If we assume that this belief is based on the experience the student has with this type of requirements and with the work style of the named academic topics, then we can use, for analysis purposes, the idea that coping strategies which were successfully used in the past form a generally stable style of adapting to that sort of issue \[[@B9]\]. Moreover, such an optimistic and confident attitude ("I've been through this before and I know how to handle it" or "I know I'm good at this topic") diminishes the stress experienced during a current situation which seems familiar \[[@B10]\]. The results of this study refuted the myth, as the students experienced the same level of stress regardless of the high school they had graduated from. This aspect can also be explained by the higher difficulty level of the studied topics, as well as by the different work rhythm in the first year, much more alert than the one during high school. One can notice a fracture between the almost parent-like style in which high school students are treated by their teachers and the rather independent one of the university professors, where the learning responsibility falls largely on the students.
Besides, being a student also involves having the capacity to memorize large amounts of information in a relatively short time, without being able to practice the mechanical learning from high school. Starting from the same assumption according to which prior experience in a certain field forms a valid and stable work strategy to be applied in similar situations, we expected the students who graduated another faculty to have a lower level of stress in comparison with those experiencing being students for the first time. The results showed there are no significant differences, as the prior experience had no influence whatsoever. If we take into consideration the economic and social factors, then we may be able to explain this phenomenon as a result of the lack of concordance between educational offers, in general, and the requirements of the labor market in Romania, as there are still attempts to reconcile the two aspects. Thus, there are graduates in over-saturated fields who, at a certain moment, decide to reorient themselves towards others where there are more job opportunities, in the country or abroad. Along with the frustration of giving up the profession one initially desired (frustration being one of the important causes of stress \[[@B10]\]), this aspect can generate the pressure of financial and social discomfort stemming from the impossibility of finding a workplace or a better-paid one. The new requirements and the high expectations from the more experienced students, coming from themselves or from those around them, have been identified as a source of stress in other studies as well, which emphasize the idea that the level of stress will increase in the following years \[[@B11]\].
Because the pressure is high and of a relatively new nature, it is mandatory to employ additional preparation of the students, aimed at adapting them both to the current educational requirements and to the future professional ones, so that, besides the basic preparation, the nursing department students would benefit from strategies which will enable them to face challenges while preserving their health. In addition to this sort of pressure, previous research has shown that, for nursing department students, learning is difficult due to the large amount of time spent in school, the large workload and the uncertainty regarding their own necessary competencies and abilities, but also due to the difficulty of coping with the patients' suffering or death \[[@B11]\]. Similarly, no differences were recorded as far as the subjects' genders were concerned, nor within the same gender category in these two results, which leads us to favor the external factors, especially the social and economic ones \[[@B12]\], some of them driving the students towards choosing a certain faculty. Conclusions =========== Stress equally affects the nursing department students, regardless of their gender or prior studies. The pressure of specific economic and social factors seems to be the most important in this respect, along with the pressure of the new tasks and abilities or competencies which have to be learned in an environment which allows for very few mistakes.
Plant catalases: peroxisomal redox guardians. While genomics and post-genomics studies have revealed that plant cell redox state is controlled by a complex genetic network, available data mean that catalase must continue to be counted among the most important of antioxidative enzymes. Plant species analyzed to date contain three catalase genes, and comparison of expression patterns and information from studies on mutants suggests that the encoded proteins have relatively specific roles in determining accumulation of H(2)O(2) produced through various metabolic pathways. This review provides an update on the different catalases and discusses their established or likely physiological functions. Particular attention is paid to regulation of catalase expression and activity, intracellular trafficking of the protein from cytosol to peroxisome, and the integration of catalase function into the peroxisomal antioxidative network. We discuss how plants deficient in catalase are not only key tools to identify catalase functions, but are also generating new insight into H(2)O(2) signalling in plants and the potential importance of peroxisomal and other intracellular processes in this signalling.
Background {#Sec1} ========== Dementia has become a global public health challenge. Currently, over 40 million people worldwide live with this condition, and this number is expected to double by 2030 and more than triple by 2050 \[[@CR1]\]. Alzheimer's disease (AD) is the most prevalent cause of dementia, characterized by progressive cognitive and functional impairments as well as memory deterioration. Although much effort has been made in the past several decades to uncover the mechanisms of AD pathogenesis and to further translate these findings into the clinic, there are still no mechanism-based treatments approved for this devastating disease, and the current therapies only provide transient symptomatic relief. The two most well-known pathological hallmarks of AD are extracellular amyloid plaques comprised of aggregated Aβ, and intracellular neurofibrillary tangles (NFTs) generated by hyperphosphorylated microtubule-associated protein tau. Increasing evidence indicates that neuroinflammation can act as an independent factor at a very early stage of AD, where immune-related genes and cytokines are the key participants. Cytokines are a heterogeneous group of proteins with molecular weights ranging from 8 to 40 kDa. These multifunctional molecules can be synthesized by nearly all nucleated cells and generally act locally in a paracrine or autocrine manner. Many of them are referred to as interleukins (ILs), indicating that they are secreted by and act on leukocytes. Other important types of cytokines include the tumor necrosis factors (TNFs), interferons (IFNs) and transforming growth factors (TGFs), which, respectively, can cause cell death, activate natural killer cells and macrophages, and induce phenotypic transformation while acting as negative autocrine growth factors. Another member of the big cytokine family is the chemokines, which can attract and activate leukocytes. In view of their relatively exclusive functions, chemokines are usually discussed separately.
As cytokines change rapidly in response to infections or trauma, they have been classified as either "pro-inflammatory" or "anti-inflammatory". The balance between the two types of cytokines guarantees immediate elimination of invading pathogens and the timely withdrawal of excessive reactions, which is the key to preventing many diseases, including the neurodegenerative diseases. The expression of cytokine receptors is temporally and spatially regulated in the central nervous system (CNS) \[[@CR2]\], and they are closely involved in cell proliferation, gliogenesis, neurogenesis, cell migration, apoptosis, and synaptic release of neurotransmitters \[[@CR3], [@CR4]\]. Cytokines have attracted much attention regarding their exact roles in different stages of AD and their therapeutic potential. However, cytokine levels detected in AD patients have been inconsistent among different research groups, while regulating the expression of cytokines in AD animal models has yielded unexpected results as well. Here, we will focus on the most extensively studied cytokines, including ILs, TNF-α, TGF-β and IFN-γ, looking for common findings, reasoning through the disagreements among recent studies and giving suggestions about how to translate these precious findings from the laboratory to the clinic in AD. Evidence from AD patients {#Sec2} ========================= The postmortem analysis of AD brains has provided pioneering evidence for the involvement of inflammation in AD pathology. IL-1β \[[@CR5]\], IL-6 \[[@CR6]\], TGF-β \[[@CR7]\] and many other cytokines have been found to accumulate around the amyloid plaques in the brains of AD patients, which has led to numerous studies investigating the levels of pro-inflammatory and anti-inflammatory cytokines in the cerebrospinal fluid (CSF) or serum of patients with mild cognitive impairment (MCI) or AD.
Although results are inconclusive, there appears to be a trend that pro-inflammatory (IL-1β, IL-6, TNF-α) and anti-inflammatory cytokines (IL-1 receptor antagonist (IL-1ra), IL-10) are both elevated in the CSF and plasma of AD patients \[[@CR8]\]. The alterations of cytokine levels reflect the disturbance of the immune system in AD; however, the evidence from body fluids is insufficient to decide whether these changes are an initiating or secondary event of the disease, thus more approaches should be adopted to paint a more reliable picture of the role of cytokines. Although the established genetic causes, such as mutations in the genes encoding amyloid precursor protein (*APP*), presenilin 1 (*PSEN1*) and *PSEN2*, are dominant in only a minority of familial AD, these risk genes have deepened our understanding of AD mechanisms in many aspects. For instance, rare heterozygous variants in the gene coding triggering receptor expressed on myeloid cells 2 (TREM2) increase the risk of AD with an unfavorable inflammatory condition for Aβ clearance \[[@CR9]\], thus shedding light on the possible initiating role of inflammation in AD pathogenesis. To date, at least 23 cytokine polymorphisms involving 13 types of cytokines have been identified as associated with AD. Based on the following three conditions: (1) having polymorphisms that are significantly associated with AD, (2) having corresponding genotype/phenotype data, and (3) having previous records of changed levels in AD patients, these cytokines can be divided into five groups as follows: (i) Cytokines like IL-1β, IL-6, IL-18 and TNF-α meet all three conditions. (ii) Cytokines like IL-4, IL-12, IL-23 and IFN-γ meet the first two conditions but have no level change or related data in AD, demanding new strategies to measure the cytokine levels in AD patients, especially in those with the polymorphisms. (iii) Cytokines like IL-10 meet conditions 1 and 3, calling for future studies.
(iv) Cytokines like IL-1ra and TGF-β only meet condition 3 but have substantial evidence from both in vivo and in vitro studies, indicating that the genetic factor may not be crucial for their actions in AD or needs further study. (v) Cytokines like IL-16, IL-15 and IL-17, which either only meet condition 1 or lack all three conditions, still need more evidence to confirm their involvement in AD. Although many studies have discussed the polymorphism-related cytokine level changes, the data are mostly referenced from research fields other than AD, such as cancer. The direct evidence for the cytokine levels in different populations is also not convincing enough to draw a definite conclusion. In the subgroup meta-analyses of cytokine polymorphisms, many grouping factors could decrease heterogeneity and improve significance, such as race \[[@CR10]--[@CR14]\], apolipoprotein E (ApoE) *ε4* allele and time of AD onset. As for race, it is rare to find significance in Asian and Caucasian populations, and in more extreme cases, a polymorphism indicates a higher risk of AD in one population while showing a lower risk in another \[[@CR12], [@CR13]\]. This may be a result of the different frequencies of the polymorphism between different races and the interplay of the variant with other unknown race-specific genes, or even with the environment. Of course, the influence of the limited sample size of certain populations cannot be excluded \[[@CR15]\]. The ApoE *ε4* allele, the widely recognized late-onset AD triggering factor, is associated with at least 5 cytokine polymorphisms \[[@CR16]--[@CR20]\], indicating a potential synergistic interaction between them. ApoE *ε4* can modify AD risk in patients with diabetes or cardiovascular disease, which could be attributable to related hyperlipidemia and hypercholesterolemia. ApoE4 could also independently cause neurovascular dysfunction through triggering inflammatory cascades \[[@CR21]\].
Thus, it would be necessary to know whether cytokines play an initiating or a secondary role in the interaction with ApoE. In addition, although few studies have clarified the time of AD onset in their samples, it seems from the available studies that altered cytokine levels have more influence on late-onset AD (LOAD). As many cytokines interact closely with ApoE, whether this potential synergistic effect is the sole reason for the onset-time association deserves further investigation. Although a single polymorphism of a cytokine does not always guarantee significance, haplotypes of one or more types of cytokines may show associations with AD \[[@CR20], [@CR22], [@CR23]\]. There is also a positive, linear relation between the number of pro-inflammatory cytokine polymorphisms and AD risk \[[@CR24]\], which suggests that their corresponding proteins might interact with each other in a cumulative manner \[[@CR25]\]. Compared with widely recognized genetic risk factors such as TREM2 or CD33 \[[@CR26]\], the genetic evidence from cytokines may be insufficient to prove that a cytokine level imbalance alone can trigger AD. However, the association of an IFN-γ polymorphism with fast-progressing AD makes it clear that cytokines can play an active role in exacerbating the AD course \[[@CR27]\]. Together, the cytokine polymorphisms may not markedly assist in predicting AD risk, but they have irreplaceable value in identifying pathways involved in the disorder and potential drug targets. 
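The five-group classification described above can be expressed as a simple decision rule over the three conditions. A minimal sketch, assuming hypothetical boolean condition flags per cytokine (the flags and function are illustrative, not part of the original analysis):

```python
def classify_cytokine(has_ad_polymorphism: bool,
                      has_genotype_phenotype_data: bool,
                      has_level_change_in_ad: bool) -> str:
    """Map the three conditions onto groups (i)-(v) as described in the text."""
    if has_ad_polymorphism and has_genotype_phenotype_data and has_level_change_in_ad:
        return "i"    # e.g. IL-1beta, IL-6, IL-18, TNF-alpha: all three conditions
    if has_ad_polymorphism and has_genotype_phenotype_data:
        return "ii"   # e.g. IL-4, IL-12, IL-23, IFN-gamma: conditions 1 and 2 only
    if has_ad_polymorphism and has_level_change_in_ad:
        return "iii"  # e.g. IL-10: conditions 1 and 3 only
    if has_level_change_in_ad:
        return "iv"   # e.g. IL-1ra, TGF-beta: condition 3 only
    return "v"        # e.g. IL-16, IL-15, IL-17: condition 1 only, or none

# A cytokine with significant polymorphisms, genotype/phenotype data and
# recorded level changes in AD falls into group (i):
print(classify_cytokine(True, True, True))   # → i
print(classify_cytokine(False, False, True)) # → iv
```

The rule makes explicit that groups (i)–(iv) are distinguished by which conditions are missing, while group (v) is the catch-all for cytokines with at most condition 1.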
The relationship between cytokines with races and ApoE and some of the AD-related cytokine polymorphisms are summarized in Table [1](#Tab1){ref-type="table"}.Table 1Cytokine polymorphisms and levels in serum or CSFCytokines^a^Levels^b^Polymorphisms/haplotypesCorresponding cytokine expression^c^Results^d^MethodsRef.MCIADPlasma/serumCSFPlasma/serumCSFIL-1 family \[[@CR144]\]IL-1α=\\= or ↓\\-889 C/T (rs1800587)T: ↑\* ↑ in EOADMeta-analysis\[[@CR145]\]No \* in LOAD.\* ↑ in CaucasiansMeta-analysis\[[@CR14]\]No \* in Asians.IL-1β↑=↑or == or ↑-511 C/T (rs16944)?no\*Meta-analysis\[[@CR15]\]-31 T/C (rs1143627)C: ?\*↓ in ItaliansCase--control\[[@CR25]\]−511C/−31 T/IL1RN2?\*↓ in elderly group of BraziliansCase--control\[[@CR23]\]−511C/−31C/IL1RN1?\* ↑ in BraziliansCase--control+3953 C/T (rs1143634)T: ↑\* ↑ in non-AsiansMeta-analysis\[[@CR15]\], \[[@CR14]\]Exon 5 E1/E2\\No\* in Taiwan populationCase--control\[[@CR146]\]IL-1ra\\\\-↓ \[[@CR139]\]Intron 2 I/II/IV\\IL-18 \[[@CR71]\]=\\= or ↑↑-607 A/C (rs1946518)C: ↑\* ↑ in LOAD of Han Chinese.Case--control\[[@CR20]\]\*↑↑ in ApoE ε4 carrier.-137 C/G (rs187238)G: ↑-607 C or -137 G↑\* ↑ in LOAD of Han Chinese.IL-33 \[[@CR147]\]\\\\\\\\rs11792633 C/TT: ↑\*↓ in non-ApoE ε4 carrier in both Caucasians and Han Chinese.Case--control\[[@CR19]\],\[[@CR148]\]IL-4 \[[@CR149]\]\\\\\\\\-1098 T/G (rs2243248)G: Possibly ↓\*↑ in Han ChineseCase--control\[[@CR150]\]T: Possibly ↓\*↑ in Caucasians\[[@CR151]\]-590 C/T (rs2243250)C: ↓\*↑ in Han Chinese and Caucasians\[[@CR150]\], \[[@CR151]\]IL-6 familyIL-6 \[[@CR152], [@CR153]\]=\\↑or == or ↑-174 G/C (rs1800795)C: ↓\*↓ in Asians, No\* in Caucasians.Meta-analysis\[[@CR10]\].\*↓ in Italians.Case--control\[[@CR25]\]-572 C/G (rs1800796)C: ?\*↑ in ApoE ε4 carriers.Case--control\[[@CR16]\]IL-11 \[[@CR154]\]=\\= or ↑↑\\\\\\\\\\IL-10 \[[@CR155], [@CR156]\]=↑= or ↑=-1082 A/G (rs1800896)A: ?\*↑ in Caucasians, No\* in Asians.Meta- analysis\[[@CR11]\]G: ↑↓ in CaucasiansMeta- analysis\[[@CR22]\]-819 T/C 
(rs1800871)?no \*-592 A/C (rs1800872)-1082G/-819C/-592C?↓IL-12 family \[[@CR157]\]IL-12A=\\==rs2243115 T/GG: ↓\*↓ in LOAD in ApoE ε4 carrier of Northern Han ChineseCase--control\[[@CR17]\]rs568408 G/AA: ↓\*↓ in LOAD of Northern Han ChineseIL-12B=\\==rs3212227 A/CC: ↓\*↓ in LOAD of Northern Han ChineseIL-23\\\\\\\\rs10889677 A/C (L-23R)C: ↓\*↓ risk in Northern Han Chinese.Case--control\[[@CR18]\]rs1884444 T/G (IL-23R)G: ↓\*↑ in ApoE ε4 carrier of Northern Han Chinese.Case--controlIL-15 \[[@CR158]\]\\\\==\[[@CR159]\]\\\\\\\\\\IL-16 \[[@CR160]\]\\\\\\\\rs4072111 C/TT: ?\*↓ in LOAD of Iranians.Case--control\[[@CR161]\]IL-17 \[[@CR162]\]\\\\\\\\\\\\\\\\\\TNF-α \[[@CR163]\]= or ↑↓= or ↑ or ↓= or ↑ or ↓-308 G/A (rs1800629)A: ↑\*↑ in East AsianMeta-analysis\[[@CR12]\], \[[@CR13]\]\*↓ in Northern European population.No\* in ItaliansCase--control\[[@CR25]\]TGF-β \[[@CR164]\]\\↓↑ or ↓ or =↑ or ↓ or =\\\\\\\\\\IFN-γ \[[@CR165]\]\\\\==-874 T/A (rs62559044)A: ↓\*↑ in fast progressing ADCase--control\[[@CR27]\]*Abbreviation*: *IL-1 ra* IL-1 receptor antagonist, *EOAD* early-onset Alzheimer's disease, *LOAD* late-onset Alzheimer's disease^a^Each cytokine or cytokine family was supplemented with a latest review for detailed information of physiological parameters^b^↑: up-regulated, ↓: down-regulated, =: no change, \\: no data. Unless otherwise noted, all the data of cytokine levels is from Brosseron et al. 2014 \[[@CR8]\]^c^↑: enhance the cytokine expression, ↓: attenuate the cytokine expression, ?: unknown yet^d^\*: significant, ↑: higher risk of AD onset, ↓: lower risk of AD onset Cytokines related to AD-like Aβ abnormalities {#Sec3} ============================================= As one of the most well-known hallmarks of AD, Aβ is actively involved in the neuroinflammation. It is believed that Aβ has a predominant role in launching the detrimental self-exaggerated inflammation process that is responsible for the disease progression. 
The Aβ peptide is derived from amyloid precursor protein (APP) by sequential cleavage by two membrane-bound proteases. Aβ peptides of different lengths, especially Aβ~1-42~, then form soluble oligomers and fibrils; the latter are the major component of extracellular amyloid plaques. Soluble Aβ can be degraded by various extracellular proteases, while fibrillar Aβ is phagocytosed by microglia, the resident phagocytes of the CNS, and then enters the endolysosomal pathway \[[@CR28]\]. Astrocytes are also capable of degrading Aβ, primarily cerebrovascular Aβ \[[@CR29]\]. Dysregulation of the Aβ clearance process, resulting from the skewing of microglia or astrocytes toward a pro-inflammatory state characterized by elevated levels of pro-inflammatory cytokines and a compromised ability to clear Aβ, leads to Aβ accumulation and sustained immune activation. Several environmental factors associated with immune disturbance, including diabetes, obesity and aging, could trigger this phenotype transformation of glial cells \[[@CR28]\] through either direct modulation of the relevant mediators \[[@CR30]\] or epigenetic modification \[[@CR31]\]. Then, elicited by a self-propagating cycle of interaction between Aβ and pro-inflammatory cytokines \[[@CR32]--[@CR34]\], the chronic inflammatory state ultimately becomes independent of the primary stimulus, which is a possible explanation for the failure of anti-amyloid treatment strategies in the late stage of AD \[[@CR35]\]. Several anti-TNF-α biologic medications have ameliorated Aβ deposition, behavioral impairment and inflammation in AD animal models \[[@CR36]--[@CR39]\], suggesting that TNF-α is a detrimental factor in the AD course and can serve as a reliable AD target. However, hippocampal expression of TNF-α in APP transgenic mice at an early stage induced robust glial activation that attenuated Aβ plaques without altering APP levels \[[@CR40]\]. 
Although infiltration of peripheral immune cells was only suspected, increased numbers of major histocompatibility complex class II (MHC-II)-positive cells were detected in the TNF-α-expressing mice, indicating enhanced antigen-presenting efficiency and more frequent communication with infiltrating T cells, which may facilitate Aβ removal. Several studies indicate that overexpression of IL-1β in APP/PS1 mice reduces Aβ plaques, accompanied by an activated population of microglia with greater phagocytosis \[[@CR41], [@CR42]\]. It has been proposed that this group of microglia might be of an endogenous Arg-1+ M2a phenotype induced by Th2 cytokines, such as IL-4, secreted by a group of cells recruited to the Aβ plaques during sustained IL-1β neuroinflammation \[[@CR42]\]. Mice deficient in IL-1R showed lower recruitment of microglia to amyloid plaques, implying that IL-1β can mediate microglial chemotaxis \[[@CR43]\]. Moreover, IL-4 can down-regulate TNF-α and up-regulate MHC-II, insulin-like growth factor (IGF)-1 and CD36 in microglia \[[@CR44]\], thereby not only decreasing neurotoxicity but also promoting antigen presentation to T cells \[[@CR45]\]. Similar results were also seen with IL-6 \[[@CR46]\]. These studies indicate that overexpressing pro-inflammatory cytokines in the CNS may promote Aβ clearance with peripheral responses involved. However, it is noteworthy that none of these studies reported relevant behavioral results (see Table [2](#Tab2){ref-type="table"}), so the overall outcome of this type of cytokine modulation cannot be assessed. It has been reported that chronic neuronal TNF-α expression in 3xTg AD mice led to a large amount of neuronal death \[[@CR47]\]. Whether the enhanced local inflammation and direct neurotoxicity or the periphery-mediated Aβ reduction has a larger impact on cognitive performance needs further study. 
Moreover, as the expressions of human APP or tau in AD animal models are driven by various unnatural transgene promoters, the possibility that some anti-cytokine molecules may act through interacting with these regulatory elements cannot be ruled out \[[@CR37]\]. Therefore, a critical verification with alternative AD models is needed.Table 2Methods and results from in vivo studies of cytokinesCytokines^a^AnimalsMain AD-like Pathology and initiating timeCytokines Expression SystemExpression DurationResults^b^Ref.Delivery MethodAdministration RoutesImmuno-histochemistryBehaviors**IL-1β**3xTg AD mice (9 months old)Aβ plaque: 6 mo. Tau: 12 -15 mo.anti-IL-1R blocking antibodyPeritoneal Injectionevery 8-9 days for 6 monthsAβ deposition ↓; Tau phosphorylation ↓Cognition ↑\[[@CR83]\]Rats adult-IL-1β injectionsCerebral ventricles1 dTNF-α, IL-10 ↑No significance\[[@CR34]\]8 dTNF-α, IL-1β ↑ IL-10 ↓; APP mRNA ↑Memory ↓3xTg AD mice (8 months old)Aβ plaque: 6 mo. Tau:12 -15 mo.IL-1β-XAT cassetteSubiculum1 and 3 mo.Aβ deposition ↓; Tau phosphorylation ↑\\\[[@CR41]\]APPswe/PSEN1dE9 mice (8 months old)Aβ plaque: 6 mo.rAAV2-IL-1βHippocampi1 mo.Aβ deposition ↓\\\[[@CR42]\]IL-6TgCRND8 mice (0 -12 h old (P0)/36 -48 h old (P2))Early Aβ plaque: 3mo. 
Dense-cored plaques: 5 mo.rAAV2/1-IL-6Cerebral ventricles5 mo.Aβ deposition ↓\\\[[@CR46]\]TgCRND8 mice (4 mo.)rAAV2/1-IL-6Hippocampi1-1.5 mo.Aβ deposition ↓\\Tg2576 mice (P0)Numerous Aβ plaques:11-13 mo.rAAV2/1-IL-6Hippocampi3 mo.Aβ deposition ↓\\**IL-4**Tg2576 + PS1 mice (3 months old)Aβ plaques: 6 mo.rAAV2/1-IL-4Hippocampi5 mo.Aβ↓; Gliosis ↓; Neurogenesis ↑Spatial learning ↑\[[@CR54]\]TgCRND8 mice (4 months old)Early Aβ plaque: 3 mo.rAAV2/1-IL-4Hippocampi1.5 mo.Aβ↑; Gliosis ↑\\\[[@CR55]\]APPswe/PSEN1dE9 mice (3 months old)Aβ plaque: 6 mo.rAAV2/1-IL-4Frontal cortex, Hippocampi43 d.Aβ↓ with no significance; Enhanced M2a phenotype of microglia\\\[[@CR56]\]**IL-10**APPswe/PSEN1dE9 mice (3 months old)Aβ plaques: 6 mo.rAAV2/1-IL-10Hippocampi5 mo.Aβ =; Gliosis ↓; Neurogenesis ↑.Spatial learning ↑\[[@CR52]\]TgCRND8 mice (P0/P2)Early Aβ plaque: 3mo.rAAV2/1-IL-10Cerebral ventricles6 mo.Aβ deposition ↑Cognition ↓\[[@CR51]\]Tg2576 mice (8 months old)Numerous Aβ plaques: 11-13 mo.rAAV2/1-IL-10Hippocampi5 mo.Aβ deposition ↑Cognition ↓APPswe/PSEN1dE9 miceAβ plaque: 6 mo.Bred with IL-10 KO miceThe whole body12-13 mo.Aβ deposition ↓Cognition ↑\[[@CR53]\]IL-12/IL-23APPswe/PSEN1dE9 miceAβ plaque: 6 mo.Bred with p40 (IL-12 and IL-23 shared) KO, p35 (IL-12) KO or p19 (IL-23) KO miceThe whole body4 mo.Aβ deposition ↓ (especially with p40 KO)Cognition ↑\[[@CR65]\]Senescence accelerated mouse (SAMP8) mice (6 months old)Accelerated aging.siRNA KO of p40Dorsal third ventricle1 mo.Aβ deposition ↓Cognition ↑\[[@CR66]\]**TNF-α**TgCRND8 mice (4 months old)Early Aβ plaque: 3 mo.rAAV2/1-TNF-αHippocampi1.5 mo.Aβ deposition ↓\\\[[@CR40]\]3xTg AD mice (10, 17 months old)Aβ plaque: 6 mo. 
Tau: 12 -15 mo.TNF-α-lowering agent (3,6\'-dithiothalidomide)Peritoneal Injection1.5 mo.APP, Aβ peptide and Aβ deposition ↓; Tau phosphorylation ↓Cognition ↑\[[@CR37]\]3xTg AD mice (6 months old)TNF-α-lowering agent (IDT)Oral administration10 mo.Fibrillar Aβ↓; PHF-tau ↓Cognition ↑\[[@CR39]\]TGF-βhAPP J9 line miceAβ plaques :5-7 mo.Bred with transgenic expressing astrocytes-induced TGF-β1 miceBrain12-15 mo.Aβ deposition ↓; Perivascular Aβ deposition ↑\\\[[@CR57]\]Transgenic mice with inducible neuron-specific expression of TGF-β1 (3 months old)-The heterologous tTA systemNeocortex, hippocampi, striatum54 dPerivascular Aβ deposition ↑\\\[[@CR62]\]24 dDeath of neurons induced by 3-nitropropionic acid ↓\\SD rats with Aβ1-42 injection in bilateral hippocampusAβTGF-β1 injection 7 d after Aβ injectionLeft cerebral ventricles3dAPP ↓Cognition ↑\[[@CR166]\]SD rats with Aβ1-42 injection in bilateral hippocampusAβTGF-β1 administration1h prior to Aβ injectionCerebral ventricles7 dAPP ↓; PP2A ↑; TNF-α, IL-1β, iNOS, IFN-γ, IL-2, IL-17 and IL-22 ↓.Cognition ↑\[[@CR58]\]TGF-β1 administration7 d after Aβ injectionNares7 dPP2A ↑; IL-1β, iNOS, IFN-γ, IL-2 and IL-17 ↓.Cognition ↑**IFN-γ**APP Tg J20 miceAβ plaques : 5-7 mo.Bred with Tg SJL mice expressing IFN-γThe whole body9 mo.Oligodendrogenesis ↓\\\[[@CR167]\]3xTg AD mice (2 months old)Aβ plaque: 6 mo. Tau:12 -15 mo.rAAV2/1- IFN-γHippocampi10 mo.Aβ deposition ↑; Tau phosphorylation ↓\\\[[@CR168]\]TgCRND8 mice P2Early Aβ plaque: 3mo. 
Dense-cored plaques: 5 mo.rAAV2/1- IFN-γCerebral ventricles5 mo.Aβ deposition ↓; Gliosis ↑; Complement expression ↑; Peripheral monocytes infiltration ↑\\\[[@CR63]\]TgCRND8 mice (4 months old)Hippocampi1.5 mo.JNPL3 mice (P2), rTg4510 mice (P2)Tau:4 mo.rAAV2/1- IFN-γCerebral ventricles3 mo.Soluble tau phosphorylation ↑\\\[[@CR87]\]*Abbreviation*: *PHF-tau* Paired helical filament tau, *KO* knockout^a^Cytokines with controversial results are in bold.^b^↑: increase or improve, ↓: decrease or exacerbate, =: no change, \\: no dataFor more detailed information on the model animals mentioned above, please refer to <http://www.alzforum.org/research-models> \[[@CR169]\] On the other hand, typical anti-inflammatory cytokines such as IL-4 and IL-10 suppress inflammation by inhibiting microglial secretion of IL-1β, IL-6 and TNF-α \[[@CR48]--[@CR50]\] in vitro. In contrast to IL-4, which triggers an M2a activation state associated with the development of an anti-inflammatory environment and enhanced phagocytosis, IL-10 drives M2c polarization, which is associated with microglial deactivation. Overexpressing IL-10 in several AD animal models weakened the phagocytosis of soluble Aβ by microglia and exacerbated Aβ deposits and cognitive impairment \[[@CR51]--[@CR53]\]. Although inconsistent outcomes do exist, a recent study using IL-10 knockout mice supports the benefit of IL-10 removal. Considering that the IL-10 level is increased in AD patients \[[@CR53]\], it appears that an imbalance of pro- and anti-inflammatory activity co-exists in AD. Whether there is a corresponding, sequential transition of microglia from M1 to M2c, or a mixed phenotype, is unclear. It would also be interesting to know whether this transformation indicates exacerbation of the disease and "a point of no return". 
As the previous in vivo studies of IL-10 all applied the intervention before the formation of typical AD pathology (see Table [2](#Tab2){ref-type="table"}), more data on the impact of IL-10 at the late stage of AD are required. The in vivo IL-4 studies have generated more controversial results: one showed that overexpression of IL-4 in the pre-deposition phase of AD animal models attenuated Aβ pathology and improved behavior \[[@CR54]\], while another, with short-term IL-4 expression in mice, exacerbated amyloid deposition \[[@CR55]\]. Acute suppression of glial clearance activity due to the relatively short duration of IL-4 exposure is a possible explanation for the inconsistency. The time at which IL-4 expression was initiated is another major difference between the two studies that is worth further investigation. It is worth mentioning that one IL-4 study had to be terminated prematurely because of increased animal death after the intervention \[[@CR56]\]. One possible explanation for the deaths is the multiple cortical injection sites and the resultant higher virus and cytokine load. TGF-β, an immunosuppressive cytokine that protects neurons against damage, has a complex role in modulating Aβ pathology. Long-term overexpression of TGF-β by astrocytes in transgenic mice led to increased clearance of Aβ plaques by activated microglia \[[@CR57]\] and improvement of Aβ-induced behavioral impairment \[[@CR58]\]. However, TGF-β can also promote astrocyte aggregation around brain microvessels and Aβ deposition on the vascular basement membranes \[[@CR59]--[@CR62]\]. Therefore, TGF-β can reduce Aβ pathology in the brain parenchyma while at the same time causing impaired blood perfusion in the associated regions. IFN-γ is a pleiotropic cytokine with a function similar to, but weaker than, that of IL-4 in upregulating glial MHC class II \[[@CR44]\], implying an immunosuppressive feature of the cytokine. 
The level change of IFN-γ in AD patients has not been reported; however, overexpressing IFN-γ results in a significant decrease of Aβ deposits along with infiltration of peripheral monocytes \[[@CR63]\], which is consistent with the observations that IFN-γ increases Aβ uptake by microglia and activates microglia to facilitate T cell motility and synapse formation in vitro \[[@CR64]\]. Microglia-derived IL-12 and IL-23 are up-regulated in APP/PS1 transgenic mice, and blocking these cytokines reverses the Aβ burden and the cognitive impairment \[[@CR65]\]. Another study using senescence-accelerated mice (SAMP8) reproduced these results \[[@CR66]\]. In addition, a linear correlation between cognitive performance and CSF levels of p40, the common subunit of IL-12 and IL-23, in AD subjects further supports the role of IL-12 and IL-23 in AD pathogenesis. IL-18, a member of the IL-1 family, is elevated in LPS-stimulated blood mononuclear cells and in the brains of AD patients, and a significant correlation between IL-18 production and cognitive decline has been observed \[[@CR67], [@CR68]\]. IL-18 promotes APP processing \[[@CR69]\] and tau phosphorylation \[[@CR70]\], and can modulate the production of other cytokines \[[@CR71]\]. Similarly, another IL-1 family member, IL-33, and its receptor ST2 are strongly expressed in AD brains, and incubation with Aβ increased astrocytic IL-33 expression \[[@CR72]\]. In vivo evidence for IL-18 and IL-33 in AD pathogenesis is currently lacking, and further studies may also explore whether these cytokines are detectable in the CSF or serum of AD patients. Cytokines related to AD-like tau abnormalities {#Sec4} ============================================== Abnormal post-translational modification of tau proteins plays a crucial role in AD neurodegeneration, and hyperphosphorylation is the most extensively studied of these modifications \[[@CR73], [@CR74]\]. 
Accumulating studies suggest that targeting the down-regulated protein phosphatase-2A (PP2A) \[[@CR75], [@CR76]\] or the up-regulated glycogen synthase kinase-3β (GSK-3β) \[[@CR77]--[@CR80]\], or modulating the upstream membrane receptors, may attenuate tau hyperphosphorylation \[[@CR81], [@CR82]\]. Currently, the role of tau in the neuroinflammation process of AD remains poorly understood and is far less studied than that of Aβ. However, the interplay between tau and cytokines has shed light on the relevant mechanisms. Pro-inflammatory cytokines have shown a consistent impact on tau pathology. Overexpression of IL-1β in 3xTg AD mice exacerbated tau hyperphosphorylation within one month \[[@CR41]\], while blocking IL-1β signaling via IL-1 receptor antagonist (IL-1ra) or an anti-IL-1β antibody reversed the cognitive impairment with diminished tau pathology \[[@CR83], [@CR84]\]. The decreased activity of IL-1β-dependent tau kinases, such as cyclin-dependent kinase-5 (CDK5)/p25, GSK-3β and p38-mitogen-activated protein kinase (MAPK), contributed to the reduction of phosphorylated tau \[[@CR41], [@CR83]\]. Additionally, a recent study showed that microglia can drive tau pathology, pathological tau spreading and memory impairment in human tau40 mice through an IL-1β-dependent pathway, since the inclusion of IL-1ra significantly reduced microglia-induced tau pathology \[[@CR85]\]. 3,6\'-Dithiothalidomide, a TNF-α-lowering agent, had no effect on total tau levels but reduced phosphorylated tau in 3xTg AD mice \[[@CR37]\]. Another study using a different TNF-α modulator, IDT, in the same animal model also reduced paired helical filament tau (PHF-tau) and improved cognition \[[@CR39]\]. Treating hippocampal neurons with a physiological dose of IL-6 increased the amount of AD-type hyperphosphorylated tau, which may be attributed to increased activity of the CDK5/p35 complex \[[@CR86]\]. 
In primary glial cultures, recombinant adeno-associated virus (rAAV)-mediated expression of IFN-γ did not alter endogenous tau production or phosphorylation. However, IFN-γ increased hyperphosphorylation and conformational changes of soluble tau in two animal models of tauopathy \[[@CR87]\]. In turn, overexpressing tau40 increased the secretion of TNF-α, IL-1β, IL-6, IL-10 and NO in rat microglia, which showed greater phagocytosis of microspheres \[[@CR88]\]. However, the phenotype of these microglia and how it would influence Aβ pathology need further study. Moreover, upregulating PP2A in astrocytes stimulates astrocyte migration by inhibiting p38-MAPK in Tg2576 mice \[[@CR89]\], indicating that tau-associated pathology may be involved in impaired Aβ clearance. It seems that tau pathology can be a consequence of deregulated inflammation, or can serve, like Aβ, as an inflammation promoter that exacerbates it. Nevertheless, to what extent tau may influence inflammation, and what the sum effects of tau, Aβ and inflammation will be, are mostly unknown. Besides, no studies so far have examined the influence of anti-inflammatory cytokines on tau pathology. Cytokines also have an important influence on neuronal survival \[[@CR90]--[@CR93]\], blood-brain barrier (BBB) integrity \[[@CR94]\] and other normal physiological events in the CNS \[[@CR3], [@CR4]\], which cannot be reflected in animal models of a single type of pathology. Thus, a more careful examination of the current animal models \[[@CR95]\] and the development of novel models closer to the real pathology of AD are needed \[[@CR28]\]. The adaptive immune system in AD {#Sec5} ================================ The most recent evidence has shown the presence of a classical lymphatic system in the CNS \[[@CR96]\], suggesting regular communication of immune activities between the periphery and the CNS. 
Over 80% of the T cells in the CSF are CD4+ and can be classified into four subsets: type 1 helper-inducer T (Th1) cells and Th17 cells, defined as pro-inflammatory; and Th2 cells and regulatory T (Treg) cells, defined as anti-inflammatory. The activation state and subtypes of T cells in the circulation, CSF and parenchyma are modified in AD patients \[[@CR97], [@CR98]\]. An immune-deficient AD mouse model lacking T, B and natural killer cells exhibits increased Aβ with decreased phagocytic efficiency of microglia and significant elevation of several key pro-inflammatory cytokines, including IL-1β, IL-6 and TNF-α \[[@CR99]\]. These findings strongly suggest the active involvement of the adaptive immune system in AD pathogenesis. Previous studies have highlighted the importance of cytokines in mediating the activity of peripheral immune cells in AD. Cytokines can facilitate the infiltration of peripheral immune cells into the brain, resulting in direct Aβ phagocytosis by the recruited immune cells or the induction of phagocytic activity in other cell types, such as microglia. The choroid plexus (CP) stroma is enriched with CD4+ T cells that are able to produce IL-4 and IFN-γ \[[@CR98]\], and IFN-γ plays an essential role in assisting leukocyte trafficking \[[@CR100]\]. Decreased IFN-γ levels in both 5XFAD and APP/PS1 mice were reversed by transient depletion of Treg cells at an intermediate stage of AD, which at the same time led to increased leukocyte infiltration and recruitment to Aβ plaques and attenuation of the AD pathology \[[@CR101]\]. However, amplification of Treg cells at early disease stages through peripheral low-dose IL-2 treatment increased the numbers of plaque-associated microglia and restored cognitive functions in APP/PS1 mice \[[@CR102]\]. Therefore, a more careful examination of Treg cells at different stages of the disease may help determine the proper therapeutic strategies. 
Furthermore, when co-cultured with Aβ-treated microglia, Th1 and Th17 cells increase their secretion, which then up-regulates MHC II, co-stimulatory molecules and pro-inflammatory cytokines in microglia \[[@CR103], [@CR104]\], thus improving the efficiency of microglial antigen presentation to T cells and enhancing Aβ clearance by both cell types. However, IL-17 and IL-22, which are exclusively produced by Th17 cells, can also cause BBB disruption and infiltration of Th17 cells, leading to direct injury of neurons by Th17 cells via the Fas/FasL pathway in Aβ-induced AD model rats \[[@CR105]\]. In addition, respiratory infection of APP/PS1 mice increased the infiltration of IFN-γ+ and IL-17+ T cells into the brains of older mice, and this was correlated with an increased Aβ level \[[@CR106]\]. Together, these studies indicate that future work should consider the complex interplay among the many participants present in the real situation of AD. The basal level of anti-inflammatory cytokines in CSF may help skew infiltrating T cells toward the Th2 or Treg phenotype under physiological conditions \[[@CR98]\]. In AD patients, the level of pro-inflammatory cytokines in CSF increases, which induces more Th1 or Th17 cells that can be detrimental. For the several in vivo studies that examined the impact of cytokines or the relevant antibodies on AD pathology via intracerebroventricular or systemic administration (see Table [2](#Tab2){ref-type="table"}), the concomitant influence on the transformation of T cell phenotypes and its downstream effects should be taken into consideration for a more reasonable interpretation of the outcomes. Cytokines as potential biomarkers for AD diagnosis {#Sec6} ================================================== So far, a CSF signature of low Aβ~1-42~ and high tau concentrations, together with significant retention in PET imaging with amyloid tracers, is suggested as the standard diagnostic criterion, offering the highest specificity and accuracy \[[@CR107]\]. 
However, the lumbar puncture required to obtain CSF has limited its application. Thus, novel biomarkers based on more accessible materials, such as plasma, are attractive for improving AD diagnosis. Several cytokines change in a disease progression-dependent manner, suggesting that they may serve as potential disease predictors. For instance, data collected from a 20-year cohort study demonstrate a greater possibility of cognitive impairment in individuals with increased IL-6 \[[@CR108]\]. A review of 118 research articles comparing 66 cytokines in plasma or CSF obtained from MCI and AD patients found that several cytokines, such as IL-1β, IL-6, TNF-α, IL-18, monocyte chemotactic protein (MCP)-1 and IL-10, increased steadily or peaked upon the transformation from MCI to AD, which may help predict the risk of developing AD and recognize AD subgroups \[[@CR8]\]. However, in the latest meta-analysis, no significant differences in cytokines such as IL-1β, IL-6, IL-8, IL-10 or TNF-α were found between subjects with MCI and healthy controls, while significant heterogeneity was observed in some comparisons \[[@CR109]\]. Considering the unstable outcome of single cytokine levels, the combinational use of multiple proteins is a more reasonable approach. However, since the first AD-predicting model, composed of 18 plasma biomarkers including multiple cytokines, was proposed \[[@CR110]\], few biomarker sets have shown stable performance and good reproducibility \[[@CR111], [@CR112]\]. Nevertheless, by using multiplex assays, two research groups have recently and independently established panels of plasma proteins. These two panels have high reproducibility and diagnostic accuracy and are strongly associated with the severity and progression of AD \[[@CR113], [@CR114]\]. Although no cytokines were included in either panel, one of the studies found positive correlations between the biomarkers and some cytokines altered in AD \[[@CR114]\]. 
In addition, after screening 120 inflammatory molecules in the CSF and serum of AD, MCI and healthy controls through protein-array analysis, a combination of soluble IL-6 receptor (sIL-6R), tissue inhibitor of metalloproteinases-1 (TIMP-1) and soluble TNF-α receptor I (sTNFR-I) in CSF was found to provide the best prediction of AD among the molecules tested \[[@CR115]\]. Certainly, these results still need verification by other research groups, and the heterogeneity in BBB integrity, physical state and disease stage of patients should be taken into consideration at the same time \[[@CR8], [@CR116]\]. Besides, the lack of standardization of sample collection and detection remains the dominant cause of failure in developing serum-based AD biomarkers. To address this problem, many organizations have issued guidelines for the standardization of blood-based biomarker studies in AD, covering the pre-blood-draw phase, blood collection, processing and storage \[[@CR117]\]. Furthermore, longitudinal sampling over years \[[@CR8]\] is a better approach to eliminate heterogeneity, but its feasibility needs optimization. Although no evidence supports a direct association of systemic infections with AD \[[@CR118]--[@CR120]\], some specific pathogens have been identified as potential risks for AD, such as Herpes simplex virus type 1, *Chlamydophila pneumoniae*, *Helicobacter pylori* and periodontal bacteria \[[@CR121]\]. A recent study shows that the infection burden (IB) consisting of common pathogens is associated with AD after adjustment for ApoE genotype and various comorbidities. AD patients or healthy controls with more seropositivities have significantly higher serum levels of IFN-γ, TNF-α and IL-6 \[[@CR122]\]. As IB is a relatively stable indicator of systemic inflammation burden, the practical value of the combinational use of IB with other biomarkers is worth further investigation. 
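The combinational-panel idea discussed above, in which several analyte levels are merged into one risk score, can be caricatured with a logistic scoring function. A minimal sketch: the analyte names echo the CSF combination mentioned in the text, but the weights, intercept and input values are entirely hypothetical and carry no clinical meaning.

```python
import math

# Hypothetical log-odds weights per z-scored analyte (illustrative only).
PANEL_WEIGHTS = {
    "sIL-6R": 0.8,
    "TIMP-1": 0.5,
    "sTNFR-I": 0.7,
}
INTERCEPT = -1.2  # hypothetical intercept of the toy model

def panel_risk(measurements: dict) -> float:
    """Combine z-scored analyte measurements into a 0-1 risk score."""
    z = INTERCEPT + sum(PANEL_WEIGHTS[name] * value
                        for name, value in measurements.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

score = panel_risk({"sIL-6R": 1.5, "TIMP-1": 0.2, "sTNFR-I": 1.0})
print(round(score, 3))  # → 0.69
```

The point of such a composite is that no single analyte decides the call; each contributes a weighted amount of evidence, which is why panel approaches can be more stable than any single cytokine level.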
Overall, a single type of biomarker is far from sufficient to classify all phenotypes and stages of AD; combining plasma cytokines with other factors is the most realistic and promising approach to developing convenient and practical plasma biomarkers for AD. Cytokines as potential targets for AD therapy {#Sec7} ============================================= Anti-inflammatory therapies using non-steroidal anti-inflammatory drugs (NSAIDs) were once considered promising. However, after the positive reports from the pioneering randomized trial of indomethacin \[[@CR123], [@CR124]\], subsequent trials have not reached a definitive conclusion \[[@CR28]\]. Lately, two meta-analyses have been conducted to reevaluate the role of NSAIDs in AD. Although they supported the use of NSAIDs for the prevention of AD, there were no positive results from the randomized controlled trials (RCTs) \[[@CR125], [@CR126]\]. Moreover, in the randomized AD anti-inflammatory prevention trial (ADAPT) and its follow-up study (ADAPT-FS), treatment for 1 to 3 years with naproxen, a nonselective cyclooxygenase (COX) inhibitor, or celecoxib, a selective COX-2 inhibitor, neither prevented the onset of dementia nor attenuated cognitive decline in older adults with a family history of AD \[[@CR127]\]. Many reasons for the failure have been proposed, including the duration of treatment \[[@CR127]\], the ApoE *ε4* allele \[[@CR128], [@CR129]\], age \[[@CR127]\], disease stage \[[@CR130]\] and disease progression speed \[[@CR131]\]. Therefore, long-term and large-scale RCTs based on more tolerable novel NSAIDs are needed to reconcile the positive findings from molecular and epidemiologic studies. In the absence of such RCTs, indirect treatment comparisons or mixed treatment comparisons may also help to reach more robust conclusions \[[@CR125]\]. 
As broad anti-inflammatory medications have not proven promising, more specific immune pathways or molecules that are not affected by NSAIDs may be targeted. Etanercept is a TNF-α inhibitor originally used in the treatment of rheumatoid arthritis (RA). A noticeable clinical improvement was observed in AD patients minutes after perispinal administration of etanercept \[[@CR132]\]. To explain this rapid effect, the authors proposed that the vertebral venous system may provide an anatomical route that bypasses the BBB and delivers high-molecular-weight drugs to the CNS \[[@CR133]\]. However, a recent study has challenged this claim: three radioiodinated drugs, including etanercept, were injected perispinally, but PET failed to visualize the drug in the brain in all but one of the rats \[[@CR134]\]. Recent studies indicate that intravenously administered etanercept has no apparent clinical benefit to AD patients, although good tolerability of subcutaneous etanercept over a 24-week period was observed \[[@CR133], [@CR135]\], suggesting that perispinal administration may be more effective than peripheral routes. Together, these studies support the pathogenic role of TNF-α in AD and show the potential of anti-TNF-α therapies through various administration routes. Although targeting cytokines is a relatively new approach compared with other anti-inflammatory therapies in AD, it is noteworthy that a great number of cytokine inhibitors have already been used successfully in the treatment of autoimmune diseases and cancers \[[@CR136], [@CR137]\], and more biologics are under development \[[@CR138]\]. Repurposing these drugs for AD treatment could be a reasonable approach. For instance, IL-1ra is decreased in the CSF of AD patients \[[@CR139]\] and its protective effect against AD has been confirmed in animal models \[[@CR84], [@CR85]\]. 
Although there is still no clinical evidence supporting the use of IL-1ra in AD patients, its success in treating RA and cortical infarcts \[[@CR140]\] makes it a very promising target for AD treatment. Similarly, p40-neutralizing antibodies, which block the IL-12/IL-23 signaling pathway, have been approved by the Food and Drug Administration (FDA) for the treatment of psoriasis and thus may be ideal candidates for clinical trials \[[@CR65]\]. In addition, indirect approaches such as targeting upstream regulators of cytokine expression also seem attractive. For instance, the Aβ-dependent induction of IL-1β requires two sequential signals. The first signal is triggered by Aβ binding to toll-like receptors (TLRs) and leads to the production of the IL-1β precursor. The second signal occurs via activation of the NLRP3 (NACHT, LRR and PYD domains-containing protein 3) inflammasome, which requires cathepsin B leakage from phagolysosomes or mitochondrial damage, and the subsequent production of reactive oxygen species (ROS). The NLRP3 inflammasome then activates caspase-1, which processes pro-IL-1β into its bioactive form \[[@CR141]\]. Although there are no FDA-approved drugs that exclusively and specifically target NLRP3, a small-molecule inhibitor of NLRP3 has been identified \[[@CR142]\]. Therefore, further attempts to repurpose anti-cytokine drugs for AD treatment, together with careful assessment of the results, may lead to encouraging outcomes.

Conclusions {#Sec8}
===========

Cytokines are involved in various physiological and pathological pathways, which may explain the inconsistent results observed in studies of AD pathology and treatment. The present evidence strongly indicates that dysregulation of cytokines drives the pathogenic process primarily by influencing the phenotype of microglia, and that the co-existence of pro-inflammatory cytokines with a suppressed state of microglia may represent an irreversible point of the disease. 
Future studies on AD should extend beyond Aβ to other pathogenic factors and investigate the interplay between cytokines and other participants. Genome-wide association studies and online database analyses will provide continuously updated polymorphism information associated with AD, while the development of brain banks is critical for the identification of new genes and proteins \[[@CR143]\]. Given that increasing numbers of studies have demonstrated a role of the adaptive immune system in AD, the impact of peripheral T cells and relevant cytokines cannot be ignored in future studies. As immune events may change during the disease course and AD is heterogeneous, not all individuals with AD necessarily exhibit neuroinflammation, nor at all time points in the course of the disease. Learning from the existing therapeutic strategies for other inflammatory diseases, or developing novel cytokine inhibitors, could be reasonable approaches to making progress in AD anti-inflammatory therapy.

AD: Alzheimer's disease
ADAPT: AD anti-inflammatory prevention trial
ADAPT-FS: AD anti-inflammatory prevention trial, follow-up study
ApoE: apolipoprotein E
APP: amyloid precursor protein
Aβ: beta amyloid
BBB: blood-brain barrier
CDK5: cyclin-dependent kinase-5
CNS: central nervous system
COX: cyclooxygenase
CP: choroid plexus
CSF: cerebrospinal fluid
FDA: Food and Drug Administration
GSK: glycogen synthase kinase
htau40: human tau40 protein
IDE: insulin-degrading enzyme
IFNs: interferons
IGF-1: insulin-like growth factor-1
IL-1ra: IL-1 receptor antagonist
ILs: interleukins
MAPK: mitogen-activated protein kinase
MCI: mild cognitive impairment
MCP-1: monocyte chemotactic protein-1
MHC-II: major histocompatibility complex class II
NFTs: neurofibrillary tangles
NSAIDs: non-steroidal anti-inflammatory drugs
PHF-tau: paired helical filament tau
PP2A: protein phosphatase-2A
PSEN: presenilin
RA: rheumatoid arthritis
rAAV: recombinant adeno-associated virus
ROS: reactive oxygen species
SAMP8: senescence-accelerated mouse prone 8
sIL-6R: soluble IL-6 receptor
sTNFR-I: soluble TNF-α receptor I
TGFs: transforming growth factors
Th1 cells: type 1 helper-inducer T cells
TIMP-1: tissue inhibitor of metalloproteinases-1
TLRs: toll-like receptors
TNFs: tumor necrosis factors
Treg cells: regulatory T cells
TREM2: triggering receptor expressed on myeloid cells 2
α1-ACT: α1-antichymotrypsin

**Competing interests** The authors declare that they have no competing interests.

**Authors' contributions** All authors read and approved the final manuscript.

This work was supported in part by grants from the Natural Science Foundation of China (91132305, 81261120570, 81528007 and 81171195) and the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2013DFG32670, 2012BAI10B03).
Q: JPA/Hibernate cascade remove not working

I have these entities:

@Entity
public class Item extends Unit {
    // @Id is in superclass

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<ItemRelation> lowerItemRelations = new LinkedHashSet<>();

    @OneToMany(mappedBy = "child", cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<ItemRelation> higherItemRelations = new LinkedHashSet<>();

    // this too is in superclass
    @OneToMany(mappedBy = "unit", cascade = CascadeType.REMOVE, orphanRemoval = true)
    @OrderBy("date")
    protected Set<UnitRegistration> registrations = new LinkedHashSet<>();

    ...
}

@Entity
@Table(name = "ITEM_RELATION", indexes = @Index(columnList = "PARENT_ID, CHILD_ID", unique = true))
public class ItemRelation extends AbstractEntity {
    // @Id is in superclass

    @ManyToOne(optional = false)
    @JoinColumn(name = "PARENT_ID")
    private Item parent;

    @ManyToOne(optional = false)
    @JoinColumn(name = "CHILD_ID")
    private Item child;

    @NotNull
    @Min(0)
    @Column(nullable = false, columnDefinition = "INT DEFAULT 1 NOT NULL")
    private int quantity = 1;

    ...
}

Now I just want to perform a simple em.remove(item), but Hibernate does not issue the related DELETE statements for lowerItemRelations/higherItemRelations. Conversely, for all other fields annotated with @OneToMany(mappedBy = "...", cascade = CascadeType.ALL/REMOVE, orphanRemoval=true) it issues the statements. Here is a little MySQL log snippet:

2016-09-28T08:47:52.090453Z 13 Query update UNIT set CODE='CE13000003167', ... where ID=132241 and version=1
2016-09-28T08:47:52.094971Z 13 Query delete from UNIT_ACTION where PARENT_ID=132241
2016-09-28T08:47:52.134999Z 13 Query update AUTHORIZATION set UNIT_ID=null where UNIT_ID=132241
2016-09-28T08:47:52.158014Z 13 Query delete from UNIT_DOCUMENT where PARENT_ID=132241
2016-09-28T08:47:52.248074Z 13 Query delete from UNIT_PRODUCT where UNIT_ID=132241
2016-09-28T08:47:52.315641Z 13 Query delete from UNIT_PROJECT where UNIT_ID=132241
2016-09-28T08:47:52.586008Z 13 Query delete from ITEM_ALTERNATIVE where ITEM_ID=132241
2016-09-28T08:47:52.853350Z 13 Query delete from AUTHORIZATION where ID=714491
2016-09-28T08:47:52.910835Z 13 Query delete from UNIT_REGISTRATION where ID=173505
2016-09-28T08:47:52.980887Z 13 Query delete from UNIT where ID=132241 and version=1
2016-09-28T08:47:53.133290Z 13 Query rollback

As you can see, there's no line for deleting from ITEM_RELATION, and I'm expecting something like:

0000-00-00T00:00:00.000000Z 13 Query delete from ITEM_RELATION where PARENT_ID=132241
0000-00-00T00:00:00.000000Z 13 Query delete from ITEM_RELATION where CHILD_ID=132241

Obviously the transaction is rolled back, because of:

com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot delete or update a parent row: a foreign key constraint fails (`edea2`.`item_relation`, CONSTRAINT `FK_ITEM_RELATION_CHILD_ID` FOREIGN KEY (`CHILD_ID`) REFERENCES `unit` (`ID`))

Another strange thing is that Hibernate performs an (unnecessary?) UPDATE as the first statement. However, is this different behavior related to the fact that lowerItemRelations/higherItemRelations reference the same entity type (although on different fields/columns AND different rows)? Is this a bug or is there a reason for such behavior?

What I tried:

- initialize the collections
- initialize and clear the collections (to trigger orphanRemoval)
- em.remove() each collection element prior to em.remove(item)

without success. 
The only working way I found till now is to issue:

CriteriaBuilder builder = em.getCriteriaBuilder();
CriteriaDelete<ItemRelation> delete = builder.createCriteriaDelete(ItemRelation.class);
Root<ItemRelation> rel = delete.from(ItemRelation.class);
delete.where(builder.or(
    builder.equal(rel.get(ItemRelation_.parent), managedItem),
    builder.equal(rel.get(ItemRelation_.child), managedItem)));
em.createQuery(delete).executeUpdate();
em.flush();
em.remove(managedItem);

I'm using Hibernate 5.2.2.Final on Wildfly 10.1.0.Final.

Thanks

As requested, here is where em.remove() is called:

@Stateless
@Local
public class PersistenceService implements Serializable {

    @PersistenceContext
    private EntityManager em;

    ...

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public <T> void delete(T entity) {
        T managed;
        if (!em.contains(entity)) {
            Class<T> entityClass = EntityUtils.getClass(entity);
            Object entityId = EntityUtils.getIdentifier(entity);
            managed = em.find(entityClass, entityId);
        } else {
            managed = entity;
        }
        em.remove(managed);
        // em.flush(); // just for debugging
    }
}

A: Ok, it's a bug. I found this behavior is coupled with lazy initialization of both collections, so I submitted the issue HHH-11144 and produced a simple test case (also available on GitHub). In short, this happens when:

EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();
tx.begin();

Item item = em.createQuery("select x from Item x where x.code = 'first'", Item.class).getSingleResult();

Set<ItemRelation> lowerItemRelations = item.getLowerItemRelations();
Hibernate.initialize(lowerItemRelations);

// initializing 'higherItemRelations' prevents orphanRemoval from working on 'lowerItemRelations'
Set<ItemRelation> higherItemRelations = item.getHigherItemRelations();
Hibernate.initialize(higherItemRelations);

lowerItemRelations.clear();

tx.commit();
em.close();
Much of this interest has been spurred by the Obama administration announcing that it has requested China’s help in fighting ISIS in Iraq. Even among the Chinese analysts supportive of Beijing playing a direct role in the war on ISIS, many have suggested that China should do it at least in part to earn goodwill from the United States. This is preposterous. Not only does ISIS pose a greater threat to China and Chinese interests, but Beijing has had a large role in ISIS’s rise as well. As I’ve noted before, ISIS did not directly threaten the United States before America began conducting airstrikes against it last month. The same cannot be said of ISIS’s stance toward China. For instance, in a speech he made back in July, ISIS leader Abu Bakr Al-Baghdadi noted to his followers that “Muslim rights are forcibly seized in China, India, Palestine” and elsewhere around the world. A five-year expansion map released at the same time showed ISIS’s aspirations to swallow up Xinjiang province. Then, after returning from a trip to the region later that same month, Wu Sike — then China’s Middle East envoy — revealed that at least 100 Chinese citizens were training with ISIS in the Middle East. He said most were members of Uyghur separatist groups who have stepped up their own terrorist attacks against the Chinese state over the past year. “After being immersed in extremist ideas, when they return to their home country they will pose a severe challenge and security risk to those countries,” Wu said at the time. The past week has seemed to offer confirmation of this, as the Iraqi Ministry of Defense announced the capture of a Chinese national fighting with ISIS. This is a serious threat to China given that, largely unlike the United States, China actually experiences frequent terrorist attacks from its disenfranchised Muslim population. Besides the security threat ISIS poses to China, the group also threatens Beijing’s energy security. It’s no secret that while the U.S. 
fought the Iraq War, it was China and Iran who won it. Since the 2003 U.S. invasion, Chinese energy companies have invested some $10 billion in Iraq’s nascent oil industry. In recent years, China has been the destination for around half of Iraq’s oil exports. This is not insignificant from China’s perspective either. China’s oil imports from Iraq have doubled since 2011 and grew by 50 percent in 2013 alone, the largest growth of any country last year. This made Iraq China’s fifth largest oil supplier after Russia, with Iraqi oil accounting for roughly 10 percent of China’s imports (the U.S. imports far less of its oil and only 4 percent of its imports came from Iraq last year). Moreover, China is the largest importer of Middle Eastern crude, with over 50 percent of its imports coming from the region last year. Although China clearly has more at stake in countering ISIS, some may charge that the U.S. and its allies should bear the full burden since they helped fuel ISIS’s rise by invading Iraq in 2003. These same observers might also rightly point out that China opposed this invasion. You won’t find an argument with me about this. I’ve already written that the U.S. did more than any other outside power to fuel ISIS’s rise. This point seems indisputable to me. That being said, China is also directly culpable for ISIS’s rise. Although America’s 2003 invasion directly contributed to the creation of al-Qaeda in Iraq (AQI) — the predecessor of ISIS — the group was largely a spent force after the U.S. surge and the Anbar Awakening. Even after it refashioned itself as the Islamic State of Iraq, the group remained a marginal force at best. It was the Syrian civil war that fueled ISIS’s revival. The sectarian nature of the Assad regime and its brutal crackdown, which played into ISIS’s own strategy, was what helped the group revive itself. That war, as well as the Nouri al-Maliki government’s sectarian nature, gave ISIS the chance to rise from the ashes of history. 
Of course, it was China that joined Russia and Iran in propping up the Assad regime over America's strident objections. That policy has now backfired, as China itself has implicitly admitted. Had China, Russia, and Iran listened to the U.S. and stopped backing Assad, it's unlikely his regime could have stayed in power. And without the outside threat presented by the Alawites and Shia, Syrian and Iraqi Sunnis would never have gotten behind ISIS.
In his heyday, Georgia’s former president, Mikheil Saakashvili, made a hostile statement aimed at Armenia during an official visit to Baku, the Azeri capital. He said, “Azerbaijan’s enemy is Georgia’s enemy.” No official retraction has ever been issued by any Georgian representative, at least not publicly. And yet that antagonism continues towards Armenia, if not in word, certainly in deed. Rocky relations between Armenia and Georgia have continued throughout the independence years, mainly because of latent jealousy of Georgians towards Armenians, but also because of Georgian hostility towards Moscow, Armenia’s primary strategic ally. After Armenia’s Velvet Revolution, Yerevan’s overtures towards Tbilisi yielded nothing but cosmetic changes. In 2018, the ministers of defense of three countries — Turkey, Azerbaijan and Georgia — signed a military pact as a prelude to Georgia’s ambitions to join NATO. That pact has placed Georgia squarely in the enemy camp. Thus far, the military pact has been kept on the back burner. However, economic cooperation and treaties between the three countries in the Caucasus are equally threatening to Armenia, as Ankara and Baku intend to isolate Armenia in the region. Turkey has not yet abandoned its pan-Turanic ambitions, extending from Ankara to Central Asia. There are two Christian nations in that virtual empire’s path: Armenia and Georgia. The latter has willingly given up its historic mission in the region, leaving Armenia to bear the brunt of Azeri President Ilham Aliyev’s and Turkish President Recep Tayyip Erdogan’s fury. Aliyev’s recent outburst against Garegin Nzhdeh’s memory stems from the fact that the Armenian hero fought tooth and nail in 1921 to keep Zangezur an integral part of Armenian territory, blocking the Turkic drive towards Central Asia. The same resentment was expressed by President Erdogan at a recent conference of Turkish-speaking nations in Baku. 
Another milestone was marked in the economic cooperation between the three nations when the foreign ministers of the three countries met in Tbilisi on December 23 to sign agreements on trade and transportation. The agreements also have a political component concerning the settlement of outstanding disputes in the region.
Hydrogen is an important feedstock in the manufacture of ammonia, methanol, and a variety of other chemicals; but its largest market is the crude oil processing industry. In crude oil refineries, hydrogen is used in a number of processes including hydrodesulfurization, where hydrogen is reacted with sulfur-containing compounds over a catalyst to form hydrogen sulfide. Hydrogen sulfide itself is already produced in great quantities during the drilling and processing of natural gas and oil. A process that can economically extract hydrogen from low-value feedstocks or wastes such as hydrogen sulfide would bring tremendous benefits to the petroleum sector, as this sector consumes large amounts of hydrogen. Many processes exist for the production of hydrogen. The production of hydrogen is currently dominated by the steam reforming process, where a relatively light hydrocarbon is reacted with steam inside a bed of reforming catalyst, usually nickel. Since steam reforming of hydrocarbons is endothermic, the energy to drive the reactions must be provided from an external source. In the steam reforming process, the hydrocarbon-containing stream must be free of sulfur or other contaminants such as carbon particles that can poison and deactivate the catalyst. Another hydrogen production method is partial oxidation. In a partial oxidation reaction, a hydrogen-containing feed is reacted with an oxidizer, such as oxygen or air, in substoichiometric proportion, normally referred to as a rich mixture, where the equivalence ratio spans from 1 to the upper flammability limit of the fuel being utilized as the feedstock. The equivalence ratio, defined as the stoichiometric oxidizer-to-fuel ratio divided by the actual oxidizer-to-fuel ratio, is shown in equation R1. 
Equivalence Ratio = (fuel/oxidizer)_actual / (fuel/oxidizer)_stoichiometric    (R1)

An equivalence ratio less than unity is considered lean, also referred to as fuel-lean, since a portion of the oxidizer is left over after all of the fuel is consumed by the oxidation reaction. Where the fuel content of the mixture lies below the lower flammability limit of the fuel used as the feedstock, the fuel and oxidizer mixture is considered ultra-lean. Conversely, fuel and oxidizer mixtures of equivalence ratio greater than unity are considered rich, also referred to as fuel-rich, since a portion of the fuel is left over after the oxidation reaction is complete. Mixtures of equivalence ratios greater than rich mixtures, normally taken to be higher than the upper flammability limit of the fuel being utilized as the feedstock, are considered ultra-rich. Ultra-rich mixtures do not normally produce self-sustained flames without the aid of external energy sources or preheating of the mixture. Although the partial oxidation process does not need an external source of heat since it is exothermic, it is still less common than steam reforming since it is generally less efficient, particularly at large scale. As a normally non-catalytic process, partial oxidation can utilize any hydrocarbon feed. The steam reforming and partial oxidation processes can be combined into a single process normally referred to as an autothermal process. In the autothermal process, the energy for the reforming reactions is provided by oxidizing a small portion of the fuel inside the bed of a reforming catalyst. Due to its catalytic nature, the autothermal process falls under the same constraints as the steam reforming process in that the catalyst bed is susceptible to poisoning and deactivation by sulfur, carbon, and other poisons in the feed stream. The hydrocarbon stream must be desulfurized in a first step prior to entering the autothermal reactor. 
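The regimes described above can be sketched numerically. The following is a minimal Python sketch of equation R1 and the lean/rich classification; the flammability limits passed in are hypothetical, fuel-specific inputs supplied by the caller, not values taken from this text:

```python
def equivalence_ratio(fuel_actual, oxidizer_actual, fuel_stoich, oxidizer_stoich):
    """Equation R1: (fuel/oxidizer)_actual divided by (fuel/oxidizer)_stoichiometric."""
    return (fuel_actual / oxidizer_actual) / (fuel_stoich / oxidizer_stoich)

def classify_mixture(phi, lower_limit_phi, upper_limit_phi):
    """Map an equivalence ratio phi onto the regimes named in the text.

    lower_limit_phi / upper_limit_phi are the lower and upper flammability
    limits of the particular fuel, expressed as equivalence ratios
    (hypothetical caller-supplied values)."""
    if phi < lower_limit_phi:
        return "ultra-lean"
    if phi < 1.0:
        return "lean"
    if phi == 1.0:
        return "stoichiometric"
    if phi <= upper_limit_phi:
        return "rich"
    return "ultra-rich"

# Example: stoichiometric CH4 oxidation consumes 2 mol O2 per mol CH4,
# so a 1:4 CH4/O2 mixture gives phi = (1/4)/(1/2) = 0.5, a lean mixture.
phi = equivalence_ratio(1.0, 4.0, 1.0, 2.0)
```

With illustrative methane limits of roughly phi = 0.5 and phi = 1.6, the example mixture sits at the lean boundary, while any phi above 1.6 would fall in the ultra-rich regime that, as noted above, does not normally sustain a flame without external energy or preheating.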
During reforming, whether by the steam reforming or autothermal process, water must be provided in excess of the stoichiometric quantity to prevent carbon formation. Additionally, excessive temperature must be prevented in the reactions to avoid sintering the reforming catalyst. Steam reforming, partial oxidation, and the autothermal process are well-known methods in the industry that are practiced on industrial scales. The invention disclosed herein can be an economical process for producing hydrogen from hydrocarbons and various other hydrogen-containing fuels. U.S. Pat. No. 6,517,771 to Li, incorporated herein by reference, disclosed a reverse flow inert porous media reactor for the purpose of heat-treating metals. Li limited the reactant stream to methane and oxygen or air, and the preheater to initiate the process is located inside the porous bed. Drayton et al., 27th International Symposium on Combustion, 27, pp. 1361-1367, 1998, incorporated herein by reference, disclosed an application of the reverse flow reactor for fuel reforming, producing synthetic gas from methane in a reactor similar to Li's. None of the disclosed references above include an external energy source for the reverse flow reactor or are applied to the reformation of hydrogen sulfide. A number of studies in reverse flow inert porous media reactors have been carried out in applications not intended for hydrogen production from hydrocarbons. Hoffman et al., Combustion and Flame, 111, pp. 32-46, 1997, incorporated herein by reference, operated a reverse flow reactor with ultra-lean air and methane mixtures for the purpose of heating fluids. Barcellos et al., Clean Air 2003, Seventh International Conference on Energy for a Clean Environment; Lisbon, Portugal, Jul. 
7-10, 2003, incorporated herein by reference, tested a reactor similar to Hoffman's for the production of saturated steam through heat exchangers protruding directly through the inert porous media and fitted at the extremities of the reactor. Production of hydrogen from both light and heavy hydrocarbons as well as other hydrogen-containing wastes such as hydrogen sulfide is not addressed in the prior art. Hydrogen is a much more valuable commodity than sulfur. A process that can economically recover the hydrogen as well as other compounds could have significant impact on the petroleum and other industries. The reformation of hydrogen sulfide (H2S) to hydrogen and sulfur presents certain challenges not encountered in hydrocarbon reformation. For example, the low heat content of H2S precludes obtaining very high temperature in the partial oxidation regime. More importantly, H2S reforming requires the reaction to reach near-equilibrium conditions at high temperature to obtain high yield. In the current invention, the intrinsic heat-recuperating mechanism of the inert porous media matrix and the reactor's ability to create an isothermal high-temperature volume render it a cost-effective option for the reformation of H2S as well as hydrocarbons by providing the necessary residence time and temperature without requiring an external energy source to be used continuously throughout the reactions. Specifically, all of the reforming reactions in the above-mentioned prior art references occur inside a hollow chamber. None of these references disclose an apparatus and process where the reaction zone may be located in any portion of a reactor chamber, where the reaction zone is allowed to freely propagate through the reactor chamber filled with a porous media matrix, and where the reforming reactions occur directly in a heated inert porous media matrix, or packed bed. 
Therefore, a need has developed for a reactor that can efficiently reform both hydrocarbon and hydrogen sulfide fuels to pure hydrogen while not requiring continuous external energy to produce a viable hydrogen yield.
Q: Body represents a Section when related to child footer or header elements?

A very simple, quick good-practice question. As the W3C states, the header and footer tags should be associated with their parent section; for every header or footer element there should be only one parent section, so the browser understands those specific header and footer tags as the header and footer of that specific section of the page. My question is, with this in mind, can the body be considered a section, allowing something like this to be done correctly:

<body>
  <header></header>
  <footer></footer>
  <section>
    <header></header>
    <footer></footer>
  </section>
</body>

Or should a section always be an actual section, where the correct coding practice would be like this:

<body>
  <section>
    <header></header>
    <footer></footer>
  </section>
  <section>
    <header></header>
    <footer></footer>
  </section>
</body>

Any suggestion?

A: The documentation on sectioning content is slightly unclear about this. The only elements that are sectioning content (content that defines the scope of headings and footers) are article, aside, nav, and section. However, blockquote, body, details, fieldset, figure, and td are sectioning roots, which can have their own outlines. There is also an example on the page:

<body>
  <header> ... </header>
  <footer> ... </footer>
</body>

...so all of this indicates that you are good to go with <header> and <footer> in <body>, and in point of fact this is different than if you had another section, because those on the body would be higher up in the outline than sibling sections. Another thing to keep in mind based on the spec is that those "sectioning root" elements do not affect the outline of their ancestors (although they will be lower than ancestor roots). For example:

<section>
  <header>head</header>
  <fieldset>
    <header>head2</header>
  </fieldset>
</section>

In this case "head" and "head2" are on the same level in the document outline because <fieldset> is a sectioning root. 
If it were <section> instead, it would be nested in the "head" node. You can confirm this with this handy web utility.
Characterization of the Rhodobacter sphaeroides 5-aminolaevulinic acid synthase isoenzymes, HemA and HemT, isolated from recombinant Escherichia coli. The hemA and hemT genes encoding 5-aminolaevulinic acid synthase (ALAS) from the photosynthetic bacterium Rhodobacter sphaeroides were cloned to allow high expression in Escherichia coli. Both HemA and HemT appeared to be active in vivo, as plasmids carrying the respective genes complemented an E. coli hemA strain (glutamyl-tRNA reductase deficient). The over-expressed isoenzymes were isolated and purified to homogeneity. Isolated HemA was soluble and catalytically active whereas HemT was largely insoluble and failed to show any activity ex vivo. Pure HemA was recovered in yields of 5-7 mg x L-1 of starting bacterial culture and pure HemT at 10 mg x L-1. HemA has a final specific activity of 13 U x mg-1, with 1 unit defined as 1 micromol of 5-aminolaevulinic acid formed per hour at 37 degrees C. The Km values for HemA are 1.9 mM for glycine and 17 microM for succinyl-CoA, with the enzyme showing a turnover number of 430 h-1. In common with other ALASs, the recombinant R. sphaeroides HemA requires pyridoxal 5'-phosphate (PLP) as a cofactor for catalysis. Removal of this cofactor resulted in inactive apo-ALAS. Similarly, reduction of the HemA-PLP complex using sodium borohydride led to > 90% inactivation of the enzyme. Ultraviolet-visible spectroscopy with HemA suggested the presence of an aldimine linkage between the enzyme and pyridoxal 5'-phosphate that was not observed when HemT was incubated with the cofactor. HemA was found to be sensitive to reagents that modify histidine, arginine and cysteine residues, and the enzyme was also highly sensitive to tryptic cleavage between Arg151 and Ser152 in the presence or absence of PLP and substrates. 
Antibodies were raised to both HemA and HemT but the respective antisera were not only found to bind both enzymes but also to cross-react with mouse ALAS, indicating that all of the proteins have conserved epitopes.
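The Km values reported above fit the standard Michaelis-Menten rate law, v = Vmax[S]/(Km + [S]). The following is a minimal Python sketch; pairing the 13 U x mg-1 specific activity with Vmax is an illustrative assumption for the example, not a statement from the abstract:

```python
def michaelis_menten_rate(vmax, km, substrate):
    """Michaelis-Menten rate law: v = vmax * [S] / (km + [S]).

    km and substrate must share the same concentration units; vmax sets
    the units of the returned rate (here, illustratively, U/mg).
    """
    return vmax * substrate / (km + substrate)

# With Km(glycine) = 1.9 mM, the rate reaches exactly half of Vmax when
# the glycine concentration equals Km (a defining property of Km).
half_rate = michaelis_menten_rate(13.0, 1.9, 1.9)
```

A useful sanity check of the sketch is that at [S] = Km the computed rate is Vmax/2, and at [S] = 0 it is zero.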
1/09/2015 Visiting Aichi Prefectural Ceramic Museum. A couple of months ago, I went to the Aichi Prefectural Ceramic Museum, located in Seto, Aichi Prefecture. It’s been quite a long time since my last visit. Growing up near the museum, my parents sometimes brought me there to see and experience how ceramics are made. But for a child like me, there was no fun in being there. Hundreds of similar-looking ceramic crafts are displayed all the way through, and I hardly knew how to enjoy them. I had no interest in it at all until I became familiar with Japanese tea. This tea culture includes not only tea itself, but also the way of taking tea, the sweets, and the cups and other things. Gradually, it reminded me of a famous museum just 30 minutes away from my hometown. So I headed my car toward it on a cold day in November 2014. Surprisingly, we can take pictures of most of their collection. In addition, there were few people inside. Several, I remember, no more than 10 people. So I began to take a close look at them one by one, and easily found that “no two things are alike”. Starting with pots made thousands of years ago, there are many pots collected from around the globe. Not to mention the famous ceramics from around Japan, I found it particularly nice to see pots from various parts of the world. Korean, Chinese, and Khmer :) A half day isn’t enough to see them all. When you feel tired, there is a cool tea room where you just pay $5 to enjoy a drink. You can choose your favorite cup displayed in the room to drink from. They also have quite valuable ones. (Of course you can have a drink with one if you want :) 2 comments: Wow, that museum looks great! It looks amazing! For as long as I can remember I have appreciated this kind of art. Drinking tea from such nice cups, or eating from the most beautiful ceramic plates. 
It takes the experience of having a drink or something to eat to a whole new level. Let's have tea together next time (: Suzanne-chan, absolutely, it's so nice to have dishes served on beautiful plates. After experiencing it, you can't go back! And my collection is gradually growing. It will take years, or even my whole lifetime, to complete, but living with something beautiful is so important to me! Yes, let's definitely have tea together!! ヽ(=´▽`=)ノ
import Foundation  // NumberFormatter lives in Foundation

public struct PrimeAlert: Equatable, Identifiable {
  public let n: Int
  public let prime: Int

  public var id: Int { self.prime }

  public init(n: Int, prime: Int) {
    self.n = n
    self.prime = prime
  }

  public var title: String {
    return "The \(ordinal(self.n)) prime is \(self.prime)"
  }
}

public func ordinal(_ n: Int) -> String {
  let formatter = NumberFormatter()
  formatter.numberStyle = .ordinal
  return formatter.string(for: n) ?? ""
}
Evaluation of the reproducibility of the World Health Organization classification of common ovarian cancers. With emphasis on methodology. Seven pathologists independently classified 50 slides of ovarian tumors using category I of the World Health Organization classification (WHO I), each case being seen twice under different random code numbers. Intraobserver reproducibility and interobserver reproducibility, based on consistent interpretations, were both suboptimal. However, scrutiny suggested that no pathologist was a source of excessive variability, nor was suboptimal interobserver reproducibility simply due to intraobserver variability. Neither could excessive variability be attributed to skewing of results by a subgroup of unclassifiable cases. However, clear-cut sources of variability were identified among the categories of WHO I, namely, mixed epithelial, unclassified epithelial, and undifferentiated carcinoma. There was also considerable variability in distinguishing serous and endometrioid neoplasms, and in identifying tumors of low malignant potential. These findings should not be misconstrued as implying that pathologists in routine practice cannot diagnose common ovarian cancers reproducibly for patient care purposes. Availability of clinical and macroscopic data, extensive sampling, histochemistry, and consultation combine, in an uncontrolled and highly individualistic fashion, to render routine service work very different from this highly controlled formal exercise. Furthermore, at the current state of the therapeutic art, many of the taxonomic problems identified in this study may have little clinical significance. Nonetheless, this study has strengthened the evidence that there may be important problems in classifying common ovarian cancers reproducibly using WHO I, and that WHO I may require greater clarity to enhance reproducibility. 
Current emphasis on quality assurance dictates reconsideration of the literature on reproducibility of histopathologic taxonomy, which has tended to inculpate pathologists as sources of variability. Virtually all of this literature is subject to some degree of skepticism due to deficiencies in methodology. Consideration of the question of how to measure reproducibility in anatomic pathology leads us to suggest that the community of pathologists should address the need to decrease ambiguity in classification systems as an important step toward optimizing reproducibility.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html lang="en"> <head> <title>Source code</title> <link rel="stylesheet" type="text/css" href="../../../../stylesheet.css" title="Style"> </head> <body> <div class="sourceContainer"> <pre><span class="sourceLineNo">001</span>/*<a name="line.1"></a> <span class="sourceLineNo">002</span> * Copyright (c) 2016-2017 Daniel Ennis (Aikar) - MIT License<a name="line.2"></a> <span class="sourceLineNo">003</span> *<a name="line.3"></a> <span class="sourceLineNo">004</span> * Permission is hereby granted, free of charge, to any person obtaining<a name="line.4"></a> <span class="sourceLineNo">005</span> * a copy of this software and associated documentation files (the<a name="line.5"></a> <span class="sourceLineNo">006</span> * "Software"), to deal in the Software without restriction, including<a name="line.6"></a> <span class="sourceLineNo">007</span> * without limitation the rights to use, copy, modify, merge, publish,<a name="line.7"></a> <span class="sourceLineNo">008</span> * distribute, sublicense, and/or sell copies of the Software, and to<a name="line.8"></a> <span class="sourceLineNo">009</span> * permit persons to whom the Software is furnished to do so, subject to<a name="line.9"></a> <span class="sourceLineNo">010</span> * the following conditions:<a name="line.10"></a> <span class="sourceLineNo">011</span> *<a name="line.11"></a> <span class="sourceLineNo">012</span> * The above copyright notice and this permission notice shall be<a name="line.12"></a> <span class="sourceLineNo">013</span> * included in all copies or substantial portions of the Software.<a name="line.13"></a> <span class="sourceLineNo">014</span> *<a name="line.14"></a> <span class="sourceLineNo">015</span> * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,<a name="line.15"></a> <span class="sourceLineNo">016</span> * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 
OF<a name="line.16"></a> <span class="sourceLineNo">017</span> * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND<a name="line.17"></a> <span class="sourceLineNo">018</span> * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE<a name="line.18"></a> <span class="sourceLineNo">019</span> * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION<a name="line.19"></a> <span class="sourceLineNo">020</span> * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION<a name="line.20"></a> <span class="sourceLineNo">021</span> * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.<a name="line.21"></a> <span class="sourceLineNo">022</span> */<a name="line.22"></a> <span class="sourceLineNo">023</span><a name="line.23"></a> <span class="sourceLineNo">024</span>package co.aikar.commands;<a name="line.24"></a> <span class="sourceLineNo">025</span><a name="line.25"></a> <span class="sourceLineNo">026</span>import co.aikar.commands.annotation.Conditions;<a name="line.26"></a> <span class="sourceLineNo">027</span><a name="line.27"></a> <span class="sourceLineNo">028</span>public class BukkitParameterConditionContext &lt;P&gt; extends ParameterConditionContext&lt;P, BukkitCommandExecutionContext, BukkitCommandIssuer&gt; {<a name="line.28"></a> <span class="sourceLineNo">029</span> BukkitParameterConditionContext(RegisteredCommand cmd, BukkitCommandIssuer issuer, BukkitCommandExecutionContext execContext, Conditions conditions) {<a name="line.29"></a> <span class="sourceLineNo">030</span> super(cmd, issuer, execContext, conditions);<a name="line.30"></a> <span class="sourceLineNo">031</span> }<a name="line.31"></a> <span class="sourceLineNo">032</span>}<a name="line.32"></a> </pre> </div> </body> </html>
op { name: "ConcatenateDataset" input_arg { name: "input_dataset" type: DT_VARIANT } input_arg { name: "another_dataset" type: DT_VARIANT } output_arg { name: "handle" type: DT_VARIANT } attr { name: "output_types" type: "list(type)" has_minimum: true minimum: 1 } attr { name: "output_shapes" type: "list(shape)" has_minimum: true minimum: 1 } is_stateful: true } op { name: "ConcatenateDataset" input_arg { name: "input_dataset" type: DT_VARIANT } input_arg { name: "another_dataset" type: DT_VARIANT } output_arg { name: "handle" type: DT_VARIANT } attr { name: "output_types" type: "list(type)" has_minimum: true minimum: 1 } attr { name: "output_shapes" type: "list(shape)" has_minimum: true minimum: 1 } }
Adidas to shift some production from Asia in quest for speed By Emma Thomasson 5 Min Read The Adidas logo is pictured in a pop-up store in Berlin December 2, 2014.Hannibal Hanschke HERZOGENAURACH, Germany (Reuters) - Adidas (ADSGn.DE) plans to speed up production and allow shoppers to customize more shoes and clothes, aiming to accelerate sales and profit growth over the next five years. The German sportswear firm, which has been losing ground for years to fast-growing rival Nike (NKE.N), said it was testing automated production units that would allow it to shift manufacturing from Asia closer to consumers. "We will bring production back to where the main markets are," said Chief Executive Herbert Hainer, adding the current six weeks it took to ship from Asia to Europe was too long. "Robots can be everywhere." Manufacturing closer to consumers should allow it to react more quickly to fast changing trends -- like floral prints this spring -- as it seeks to challenge market leader Nike as well as fashion retailers like H&M (HMb.ST), which are already much more responsive and are moving into sportswear. Adidas said it would extend innovations pioneered by its NEO teen fashion brand which gets new products into store in 45 days, compared with a sports industry standard of 12-18 months. Global brand chief Eric Liedtke, seen as the strongest internal candidate to succeed under-pressure Hainer, said Adidas was seeking to shake up the market in the same way that Nike did when it moved production to Asia in the 1980s. Liedtke, a former American football player, said the German company's priorities included reversing falling sales in North America, retaining its global leadership in soccer, doubling sales in running and doing more to appeal to female consumers. Nike has been taking market share from Adidas and is seen as doing a better job at setting trends. 
Adidas said it expects sales to grow by almost half to above 22 billion euros ($24 billion) by 2020 and net income to rise around 15 percent per year on average, which Hainer conceded would not translate into a double-digit operating margin. Hainer faced calls to step down last year after he was forced to abandon the company's previous five-year targets -- including a 2015 goal for an 11 percent operating margin. "Their main problem is weak profitability and we are hearing very little about that," said Ingo Speich, fund manager for Union Investment which has a 1.3 percent stake in Adidas, who has repeatedly criticized Hainer in the last year. "The strategy will only be credible under new management as Hainer could not deliver on what he promised." CEO SEARCH The company said last month the board had launched a formal search to replace Hainer, in the job since 2001, although the CEO has said that did not mean his departure was imminent. Adidas shares, which have risen 20 percent so far this year in part due to high expectations for the strategy presentation, were down 0.15 percent at 1030 EDT but still outperformed a 1.3 percent weaker German blue-chip index .GDAXI. Adidas, founded by German shoe maker Adi Dassler in 1949, laid out its new strategy in its innovation center, where it demonstrated machines which allow shoppers to print their own names and logos on to its popular "Superstar" sneakers, tapping into a broader trend for personalized products. It said it was working with German companies and the government on innovations in robotics and machines which can "knit" sneakers rather than having them sewn by hand, which could allow it to move production away from hubs in China, Cambodia, Laos and Vietnam. To help it get closer to consumers, Adidas, which until a decade ago only sold its products wholesale, plans to open another 500-600 stores so that its own retail business accounts for above 60 percent of sales, from about half in 2014. 
Adidas expects its cash flow to grow at a faster rate than operating profit in the next five years, allowing it to raise its range for future dividend payments to 30-50 percent of net income from a previous 20-40 percent.
"In terms of positives, it's what we're doing right now. It's you and I having this conversation. It's saying 'my gosh where does this come from,' trying to understand that. And saying that we can't let it get to this level," Smith said.
Effect of CPS1 4217C>A genotype on valproic-acid-induced hyperammonemia. In order to clarify the factors causing hyperammonemia and to predict occurrences during treatment with valproic acid (VPA), we investigated the effect of the genetic polymorphism of carbamoyl-phosphate synthase 1 (CPS1 4217C>A) on susceptibility to hyperammonemia, together with the effect of coadministration of other anticonvulsants. Seventy-nine patients with epilepsy were enrolled, and five of them had hyperammonemia. Univariate and multivariate logistic regression analyses were performed. The aspartate aminotransferase level in the patients with hyperammonemia was significantly higher than that in those without hyperammonemia. The risk of hyperammonemia was significantly influenced by the number of anticonvulsants concomitantly administered with VPA. Also, the distribution of the CPS1 4217C>A genotype differed depending on whether the patients had hyperammonemia or not. No significant effects of CPS1 4217 genotypes and the number of anticonvulsants coadministered with VPA on the serum concentrations of VPA were observed. The multivariate logistic regression analysis showed that the concomitant administration of two or more anticonvulsants with VPA and the heterozygous or homozygous carrier state of the A allele of the CPS1 4217C>A polymorphism were independent risk factors for developing hyperammonemia. These findings suggested that in epileptic patients undergoing VPA therapy, the CPS1 4217A polymorphism and the number of coadministered anticonvulsants should be considered as risk factors for hyperammonemia, even if the serum VPA concentrations were controlled.
South Florida Aquatic Club South Florida Aquatic Club is a swim club based in Pembroke Pines, Florida, United States. Founded in 2010, it offers training for beginner and elite swimmers. The aquatics club is best known for developing a number of Olympic swimmers. Notable swimmers Alia Atkinson Claire Donahue Marc Rojas References Category:Swimming clubs Category:Sports clubs established in 2010 Category:2010 establishments in Florida Category:Pembroke Pines, Florida Category:Sports teams in Florida
Q: 2D bubble sort java program that have string and integer in array I don't know where the problem is. When compiling, it doesn't sort the numbers. What should I do? Badly needed in output in our school. Thanks in advance! public class Qwerty { static void bubbleSort(String arr[][]) { int n = arr.length; int temp = 0; int []myAge = new int[n]; for(int i = 0; i < n; i++) { for(int j=1; j < n-i-1; j++) { myAge[j] = Integer.parseInt(arr[j][1]); if(myAge[j-1] > myAge[j]) { temp = myAge[j-1]; myAge[j-1] = myAge[j]; myAge[j] = temp; } } } } public static void main(String[] args) { Qwerty bs = new Qwerty(); String arr [][] = {{"Ace","10"}, {"Ben","8"}, {"Cid","20"}, {"Dan","5"}, {"Eve","12"}}; bs.bubbleSort(arr); for(int i = 0; i < arr.length; i++) { System.out.println(arr[i][0]+arr[i][1]); } } A: This should work. I'll go through the few mistakes you had in your code. public class Qwerty{ static int[] bubbleSort(String arr[][]) { int n = arr.length; int temp = 0; int []myAge = new int[n]; for (int i = 0; i < n; i++) { myAge[i] = Integer.parseInt(arr[i][1]); } boolean sorted = false; while (!sorted){ sorted = true; for(int i = 1; i < n; i++) { if(myAge[i-1] > myAge[i]) { temp = myAge[i-1]; myAge[i-1] = myAge[i]; myAge[i] = temp; sorted = false; } } } return myAge; } public static void main(String[] args) { Qwerty bs = new Qwerty(); String arr [][] = {{"Ace","10"}, {"Ben","8"}, {"Cid","20"}, {"Dan","5"}, {"Eve","12"}}; int myAge[] = bs.bubbleSort(arr); for(int i = 0; i < arr.length; i++) { System.out.println(arr[i][0]+ myAge[i]); } } Firstly, when you did Integer.parseInt, you were only doing it for j, and not j-1, which means that in the first iteration you would be comparing 8 with 0 (which is the default value for an int array element) when you did if(myAge[j-1] > myAge[j]). I made this more clear by just parsing the array in a separate loop beforehand. 
Secondly, in your bubble sort, the way you know that the sorting is finished is when you are able to go through the array without making any swaps. That's what I did with the sorted boolean. Thirdly, in your main method, when you just do bs.bubbleSort(arr), nothing will happen because the method returns void. So, I changed the method to return an int array and saved it to the myAge variable in the main method. I hope this helps. EDIT: OOPS, after reading your comment I see that you also want the names to correspond with each number, my bad. Here's the new code, and I'll explain what I changed. public class Qwerty{ static String[][] bubbleSort(String arr[][]) { int n = arr.length; int tempAge = 0; String tempName = ""; int []myAge = new int[n]; String myName[] = new String [n]; for (int i = 0; i < n; i++) { myAge[i] = Integer.parseInt(arr[i][1]); myName[i] = arr[i][0]; } boolean sorted = false; while (!sorted){ sorted = true; for(int i = 1; i < n; i++) { if(myAge[i-1] > myAge[i]) { tempAge = myAge[i-1]; myAge[i-1] = myAge[i]; myAge[i] = tempAge; tempName = myName[i-1]; myName[i-1] = myName[i]; myName[i] = tempName; sorted = false; } } } for (int i = 0; i < arr.length; i++) { arr[i][0] = myName[i]; arr[i][1] = Integer.toString(myAge[i]); } return arr; } public static void main(String[] args) { Qwerty bs = new Qwerty(); String arr [][] = {{"Ace","10"}, {"Ben","8"}, {"Cid","20"}, {"Dan","5"}, {"Eve","12"}}; arr = bs.bubbleSort(arr); for(int i = 0; i < arr.length; i++) { System.out.println(arr[i][0]+ arr[i][1]); } } I made the bubbleSort method return the sorted String[][] instead of just an int array like before. I created a separate array for the names which is the myName array. This is so that in the bubble sort, whenever a swap is made between 2 ages, the names are swapped as well, using the tempName variable. After all swaps are made to the myName and myAge arrays, I use a loop to put them back into the String arr[][], and then return it. 
Sorry about that, I should have read your question more carefully. Thanks.
La Guajira Desert The La Guajira Desert is a desert located in northern Colombia and Venezuela, approximately north of Bogotá, covering most of the La Guajira Peninsula at the northernmost tip of South America. Most of the region is within Colombia's La Guajira Department, though a small portion is in Venezuelan territory. The area holds immense coal reserves which are exploited in a zone known as El Cerrejón. It is also home to the indigenous Wayuu people. The Wayuu are mostly herders but also master deep-sea divers, known for collecting pearls from the Caribbean Sea. The peninsula is populated chiefly by xeric scrubland, which is home to a large variety of flora and fauna. The National Natural Park of Macuira, established in 1977, is a tropical oasis located in the La Guajira Desert. The park covers in La Guajira’s only mountain chain and ranges in altitude from sea level to . It has a warm climate that averages about . References External links Category:Deserts of Colombia Category:Deserts of Venezuela Category:Geography of La Guajira Department
Melanie Beddie Melanie Beddie is a professional actor, director, dramaturg and acting teacher. She has been working in Australia for over thirty years. She completed a BA in English and Philosophy at Sydney University from 1981 to 1984. It was here, through an association with the Sydney University Dramatic Society (SUDS), that she acted in and directed numerous productions with the society. She was President of SUDS in 1983. Melanie moved from Sydney to Melbourne to train as an actor at the VCA School of Drama from 1985 to 1987. Since graduating she has worked nationally as an actor in film and TV, and across a wide range of professional companies such as Melbourne Theatre Company, Playbox, Arena, Going Through Stages, Hothouse, and Playworks, and at the Australian National Playwrights Conferences. In addition to working as an actor, Melanie has established two independent theatre companies: she was an artistic director of the $5 Theatre Company and also of the Branch Theatre Company. Both companies have commissioned a large number of new works with Australian writers. Melanie has directed productions at the Melbourne Theatre Company, Playbox, Hothouse, La Mama, Playworks, and Hit Productions, as well as in smaller companies and in drama schools. She is the recipient of industry awards including the Director's Choice Award for Infectious, a cross-art-form cabaret, and a Green Room Award for Best Director for Traitors by Stephen Sewell, which also received four other Green Room nominations. In 2001, she directed an award-winning production of Raindancers at MTC and in 2009, she co-directed, with Rachael Maza, the highly acclaimed Sisters of Gelam, which was nominated for a Deadly Award, the National Indigenous Arts and Community Awards. Melanie works widely as a dramaturg in professional theatre and was the resident dramaturg at the Melbourne Theatre Company in 1998-1999. 
She received the Inaugural Dramaturgy Fellowship from the Australia Council and a Gloria Award from NIDA. References Category:Australian stage actresses Category:Australian theatre directors Category:Living people Category:Year of birth missing (living people)
Research in context

Evidence before this study: The incidence and mortality of cholangiocarcinoma, a primary hepatic malignancy, are rising worldwide. The absence of an accurate diagnostic marker restricts early detection and treatment choices. Elevated levels of serum carbohydrate antigen 19--9, a widely used biomarker for cholangiocarcinoma, are also known to occur with other forms of tumors and with benign liver disease, which restricts the sensitivity and specificity of diagnosis. We found that the expression of peptidase inhibitor 15 (PI15), a secretory trypsin inhibitor, was significantly upregulated in cholangiocarcinoma by analyzing the microarray and TCGA databases. A previous study indicated that PI15 could not be detected in a range of healthy human tissues (heart, brain, placenta, lung, liver, skeletal muscle, kidney, and pancreas), suggesting it may be a potential diagnostic marker for cholangiocarcinoma with high specificity.

Added value of this study: We demonstrated that PI15 was highly expressed in cholangiocarcinoma tumor tissues and could not be detected in normal liver tissues. We detected high levels of plasma PI15 in cholangiocarcinoma patients, but low levels in patients with hepatocellular carcinoma, benign liver disease, or chronic hepatitis B, and in healthy individuals. Moreover, plasma PI15 levels in cholangiocarcinoma patients were markedly reduced after surgery. Altogether, PI15 holds potential diagnostic and follow-up value for patients with cholangiocarcinoma.

Implications of all the available evidence: This study suggests that plasma PI15 holds significant value for the diagnosis of cholangiocarcinoma, and that the combination of PI15 and carbohydrate antigen 19--9 improves diagnostic performance.

1. Introduction {#s0025} =============== Cholangiocarcinoma (CCA) is the second most common liver cancer (10%--15%) and is associated with high levels of invasiveness and a poor prognosis \[[@bb0005]\]. 
In recent years, the incidence of CCA has been increasing worldwide \[[@bb0010],[@bb0015]\]. Unfortunately, CCA is frequently diagnosed at an advanced stage, which restricts the treatment options to only radical surgery or liver transplantation \[[@bb0020],[@bb0025]\]. Serum carbohydrate antigen 19--9 (CA19--9) is the most widely used biomarker for CCA \[[@bb0030]\]. However, 10% of the general population are negative for Lewis-antigen, meaning that CA19--9 levels are undetectable in the serum; furthermore, elevated levels of serum CA19--9 are also known to occur with other forms of tumors and benign liver disease \[[@bb0035],[@bb0040]\]. Therefore, a sensitive and specific biomarker is urgently required to facilitate the detection of CCA. Secretory proteins may serve as diagnostic markers for a variety of tumors \[[@bb0045], [@bb0050], [@bb0055], [@bb0060], [@bb0065]\]. Peptidase inhibitor 15 (PI15), a secretory trypsin inhibitor, was originally identified and purified from the serum-free conditioned medium of human glioblastoma T98G cells as a novel 25-kDa trypsin-binding protein \[[@bb0070]\]. PI15 belongs to the cysteine-rich secretory proteins, antigen 5, and pathogenesis-related 1 proteins (CAP) superfamily. CAP superfamily proteins are frequently secreted with an extracellular endocrine or paracrine function \[[@bb0075]\]. Northern blotting analysis previously indicated that PI15 could not be detected in a range of healthy human tissues (heart, brain, placenta, lung, liver, skeletal muscle, kidney, and pancreas), and could only be found in glioblastoma and neuroblastoma cell lines among multiple cancer cell lines (5 glioblastoma lines, 7 neuroblastoma lines, 8 gastric carcinoma lines, 6 squamous cell carcinoma lines, 5 hepatocellular carcinoma lines, 2 bladder carcinoma lines, and 1 fibrosarcoma line) \[[@bb0080]\]. 
In the present study, we demonstrated that PI15, a secretory trypsin inhibitor, was highly expressed in CCA tumor tissue compared to matched normal tissue. In addition, plasma PI15 levels in CCA patients were higher than that in patients with hepatocellular carcinoma (HCC), benign liver disease, chronic hepatitis B (CHB), and healthy individuals, thus indicating its potential diagnostic value for CCA. Receiver operating characteristic (ROC) curve analysis suggested that PI15 had a high diagnostic value for CCA, especially for iCCA patients, an important subtype of CCA. Furthermore, the combination of plasma PI15 and serum CA19--9 improved diagnostic performance. Additionally, plasma PI15 level was significantly reduced after surgery in CCA patients, which further illustrated that the origin of the elevated plasma PI15 concentration was the CCA tumor. Collectively, these results suggest that PI15 is a potential diagnostic and follow-up biomarker for CCA patients. 2. Materials and methods {#s0030} ======================== 2.1. Patient samples {#s0035} -------------------- Fresh samples of CCA patients (n = 67), HCC patients (n = 83), benign liver disease patients (n = 33; 13 as hepatic hemangioma; 20 as intrahepatic stones), CHB patients (n = 45), and healthy individuals (n = 45) were collected from the First Affiliated Hospital of Anhui Medical University. The pre- and postoperative plasma and serum of CCA, HCC, and benign liver disease patients were included. In addition, fresh normal liver tissues, tumor tissues, and matched normal tissues were collected. Healthy controls were matched to the CCA patients by age. CCA and HCC patients were diagnosed as primary cases by histological and clinical examination. Plasma and tissue samples were stored at −80 °C until they were used. HCC and CCA patients did not receive radiotherapy, chemotherapy or targeted therapy prior to surgery. 
The clinical characteristics of CCA, HCC, benign liver disease, CHB, and healthy individuals are shown in Supplementary Tables S1--4, respectively. This research was approved by the Ethics Committee of the First Affiliated Hospital of Anhui Medical University (Quick-PJ 2018-07-22), and all patients provided signed informed consent for the use of their samples for biomedical research. 2.2. Gene expression profile assay {#s0040} ---------------------------------- The gene expression profiles of tumor tissues and matched normal tissues were analyzed using a whole human genome oligo microarray (G4112F; Agilent). Agilent\'s Feature-Extraction software (version 9.1.3; Agilent Technologies) was used for microarray image analysis. The gene expression values were log2-transformed, and the following analysis was performed using online SAS statistical software (<http://sas.ebioservice.com/>). Cluster 3.0 (Complete Linkage Clustering) was used to accomplish hierarchical clustering. Heat maps and green-red scale schemes were constructed using MultiExperiment Viewer (MEV). The microarray data were deposited into the National Center for Biotechnology Information Gene Expression Omnibus (GEO) repository under accession number GSE117361. 2.3. 
Analysis of The Cancer Genome Atlas (TCGA) datasets {#s0045} -------------------------------------------------------- RNA-seq data from multiple tumors were obtained from The Cancer Genome Atlas (TCGA, <http://cancergenome.nih.gov>/), including data for cholangiocarcinoma (CHOL/CCA, 36 cancer and 9 normal), liver hepatocellular carcinoma (LIHC/HCC, 374 cancer and 50 normal), pancreatic adenocarcinoma (PAAD, 178 cancer and 4 normal), stomach adenocarcinoma (STAD, 375 cancer and 32 normal), colon adenocarcinoma (COAD, 480 cancer and 41 normal), rectum adenocarcinoma (READ, 167 cancer and 10 normal), lung squamous cell carcinoma (LUSC, 502 cancer and 49 normal), lung adenocarcinoma (LUAD, 535 cancer and 59 normal), kidney renal papillary cell carcinoma (KIRP, 289 cancer and 32 normal), kidney renal clear cell carcinoma (KIRC, 539 cancer and 72 normal), kidney chromophobe (KICH, 65 cancer and 24 normal), breast invasive carcinoma (BRCA, 312 cancer and 36 normal), and prostate adenocarcinoma (PRAD, 499 cancer and 52 normal). The analysis of differentially expressed genes (DEGs) between tumor tissue and normal tissue was conducted using the edgeR package in R \[[@bb0085]\]. The criteria for defining DEGs were as follows: false discovery rate (FDR) \< 0.05 and \|log2(FC)\| \> 1, where FC represents the fold change. 2.4. Quantitative polymerase chain reaction {#s0050} ------------------------------------------- Total RNA was isolated from tissue specimens using TRIzol (Invitrogen), and reverse transcribed into cDNA with Moloney Murine Leukemia Virus (M-MLV; Invitrogen). Quantitative polymerase chain reaction (qPCR) analysis was then performed on a Roche LightCycler 96 using SYBR Premix Ex Taq II (Takara). The data analysis involved the ΔΔ*C*t method. All primers were synthesized by Sangon (Shanghai, China). The tumor markers were AFP, CEA, CA125, PSA, and GH, and the corresponding genes were *AFP*, *CEACAM5*, *MUC16*, *KLK3*, and *GH1*, respectively. 
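The relative-quantification step mentioned above (the ΔΔ*C*t method) reduces to a small calculation. The sketch below is a generic illustration of that method, not the authors' analysis code; the Ct values in the example are hypothetical, and the reference gene is an assumed housekeeping control.

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene)
    ddCt = dCt(sample) - dCt(calibrator)
    fold change = 2 ** -ddCt (assumes ~100% amplification efficiency).
    """
    dct_sample = ct_target_sample - ct_ref_sample
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_sample - dct_control)

# Hypothetical Ct values: target gene in a tumor sample vs. a matched
# normal sample, each normalized to a housekeeping reference gene.
fold = ddct_fold_change(24.0, 18.0, 27.0, 18.5)
# dCt(tumor) = 6.0, dCt(normal) = 8.5, ddCt = -2.5, so fold = 2**2.5
```

Because each PCR cycle roughly doubles the product, a ddCt of -2.5 corresponds to about a 5.7-fold higher relative expression in the tumor sample under the efficiency assumption stated in the docstring.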
We designed two *PI15* primers for PCR, which were designated *PI15*--1 and *PI15*--2, respectively. Supplementary Table S5 shows detailed information relating to the PCR primers used for *PI15*, *AFP*, *CEACAM5* \[[@bb0090]\], *MUC16*, *KLK3* \[[@bb0095]\], and *GH1* \[[@bb0100]\]. 2.5. Enzyme-linked immunosorbent assay {#s0055} -------------------------------------- Plasma PI15 level was measured using a Human-PI15 ELISA kit (QY-E01315; China) in accordance with the manufacturer\'s instructions. We first prepared the reagents, samples, and standards. We then incubated each prepared sample and standard with HRP-Conjugate Reagent for 60 min at 37 °C. Each plate was then washed five times, chromogen solution A and B were added, and the mixture was incubated for 5 min at room temperature away from light. Finally, the stop solution was added and the optical density (OD) at 450 nm was measured within 15 min. A standard curve linear regression equation was then estimated based on standard concentrations and the corresponding OD values. The OD value for each sample was then added into the regression equation to calculate the sample\'s concentration. Each sample was analyzed in duplicate. 2.6. Electrochemiluminescence {#s0060} ----------------------------- The concentration of serum carbohydrate antigen 19--9 (CA19--9) was measured using Electrochemical luminescence kit (Roche; Switzerland) in accordance with the manufacturer\'s instructions. The Roche Cobas e601 was used to analyze the detection data. 2.7. Statistical analysis {#s0065} ------------------------- Data were summarized and represented as mean ± standard error of the mean (SEM). Statistical analysis was conducted using SPSS (version 22.0) and GraphPad Prism (version 6.0) software programs. The student\'s *t*-test and paired *t*-test were used to analyze the statistical significance between independent groups and paired data, respectively. 
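The standard-curve step in the ELISA protocol above (fit a linear regression of OD against the standard concentrations, then invert it for each sample) can be sketched as follows. This is a minimal illustration under the linearity assumption stated in the protocol; the standard concentrations and OD values are invented for the example, not taken from the kit.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, mean_y - a * mean_x

# Hypothetical standards: known concentration (ng/mL) vs. measured OD450.
standard_conc = [0.0, 2.5, 5.0, 10.0, 20.0]
standard_od = [0.05, 0.30, 0.55, 1.05, 2.05]
slope, intercept = fit_line(standard_conc, standard_od)

def od_to_conc(od_value):
    """Invert the fitted standard curve to estimate a sample's concentration."""
    return (od_value - intercept) / slope

sample_conc = od_to_conc(0.80)  # 7.5 ng/mL for this invented curve
```

In practice ELISA curves are often fitted with four-parameter logistic models rather than a straight line; the linear fit here simply mirrors the regression equation described in the kit protocol.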
ROC curve analysis was used to evaluate the diagnostic value of the different markers, and the area under the ROC curve (AUC) was used to assess the accuracy of each marker. Univariate and multivariate logistic regression models were used to assess the diagnostic value of PI15 alone and of PI15 combined with CA19--9 \[[@bb0105]\]. *P* \< .05 was considered statistically significant. 3. Results {#s0070} ========== 3.1. Identification of a potential marker for CCA {#s0075} ------------------------------------------------- To identify a potential diagnostic marker for CCA, we analyzed the gene expression profiles of CCA and HCC tissue samples using gene expression profiling assays. The detection of secretory protein biomarkers in plasma is a non-invasive diagnostic approach that is attractive in the clinic because it permits repeat testing at low cost. We therefore screened a range of secretory proteins and found that *PI15*, a secretory trypsin inhibitor, was overexpressed in CCA tumor tissues compared with matched normal tissues ([Fig. 1](#f0005){ref-type="fig"}a, Left). To corroborate this finding, we analyzed CCA (CHOL, n = 36) RNA-seq mRNA expression data from TCGA using edgeR to identify differentially expressed genes (DEGs). We found that *PI15* expression was elevated 3.8-fold in CCA tumor tissues relative to normal tissues, consistent with our initial finding.Fig. 1Discovery of a candidate marker for CCA.(a) Gene expression profile assay of CCA tumor tissues and matched normal tissues for screening candidate diagnostic markers. The heat map shows differentially expressed secretory proteins and tumor markers in CCA. The left heat map was based on the gene expression profile of CCA tumor tissues and matched normal tissues. The right heat map was based on CCA (CHOL, n = 36) RNA-seq data from the TCGA database. (b) Gene expression profile assay of HCC tumor tissues and matched normal tissues. 
The heat map shows selected secretory proteins and tumor markers in HCC. The left heat map was based on microarray data of HCC tumor tissues and matched normal tissues. The right heat map was based on HCC (LIHC, n = 374) RNA-seq data from the TCGA database. Each column depicts an individual sample. Blue squares represent normal tissues; yellow squares represent tumor tissues. RNA-seq data were normalized with MultiExperiment Viewer. (c) The positive rate of PI15 expression in various tumors. (d) The fold change of PI15 expression in various tumors. CCA (CHOL), cholangiocarcinoma (n = 36); HCC (LIHC), hepatocellular carcinoma (n = 374); PAAD, pancreatic adenocarcinoma (n = 178); STAD, stomach adenocarcinoma (n = 375); COAD, colon adenocarcinoma (n = 480); READ, rectum adenocarcinoma (n = 167); LUSC, lung squamous cell carcinoma (n = 502); LUAD, lung adenocarcinoma (n = 535); KIRP, kidney renal papillary cell carcinoma (n = 289); KIRC, kidney renal clear cell carcinoma (n = 539); KICH, kidney chromophobe (n = 65); BRCA, breast invasive carcinoma (n = 312); and PRAD, prostate adenocarcinoma (n = 499). Tumor samples are from the TCGA database. Red columns indicate positive expression of PI15; blue columns indicate negative expression of PI15.Fig. 1 To further assess whether *PI15* has potential as a diagnostic marker for CCA, we analyzed the sensitivity and specificity of the expression of *PI15* and other tumor markers in CCA. We selected tumor markers used in the clinic: AFP, CEA, CA125, GH, and PSA \[[@bb0110], [@bb0115], [@bb0120], [@bb0125], [@bb0130]\]. The corresponding genes were *AFP*, *CEACAM5*, *MUC16*, *GH1*, and *KLK3*, respectively. We characterized the expression of *PI15*, other secretory proteins, and tumor markers in CCA ([Fig. 1](#f0005){ref-type="fig"}a). We found that the sensitivity and specificity of *PI15* expression were superior to those of other tumor markers. 
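The positive-rate tallies shown in Fig. 1c follow a simple rule: a tumor sample counts as positive when its expression exceeds the highest value observed in any normal-tissue sample. A minimal sketch; the expression values below are made up for illustration:

```python
def positive_rate(tumor_values, normal_values):
    """Fraction of tumor samples whose expression exceeds the highest
    expression observed in any normal-tissue sample."""
    upper_limit = max(normal_values)  # upper limit of normal expression
    positives = sum(1 for v in tumor_values if v > upper_limit)
    return positives / len(tumor_values)

# Hypothetical values: 3 of 4 tumor samples exceed the normal maximum.
print(positive_rate([4.0, 5.0, 2.0, 6.0], [1.0, 2.0, 3.0]))  # 0.75
```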
Furthermore, we performed the same analysis in HCC samples: we analyzed the expression of *PI15*, other secretory proteins, and the 5 tumor markers in HCC samples ([Fig. 1](#f0005){ref-type="fig"}b), and found that *PI15* was upregulated in a fraction of HCC cases. Thus, PI15 might serve as a novel diagnostic marker for CCA, with better sensitivity and specificity than other markers. After analyzing PI15 expression in CCA and HCC cases, we focused on the positive rate and fold change of PI15 expression in various human tumors ([Fig. 1](#f0005){ref-type="fig"}c and d). The highest PI15 expression in normal tissue was regarded as the upper limit of normal expression; when PI15 expression in tumor tissue exceeded this upper limit, we defined the case as positive. Positive-rate and fold-change analyses were performed across various human tumors using RNA-seq data deposited in TCGA. We observed that the positive rate of PI15 expression was highest (83.3%) in CCA (CHOL, n = 36) among the tumors examined ([Fig. 1](#f0005){ref-type="fig"}c), and that the fold change of PI15 was 3.8 in CCA (CHOL, n = 36) and 4.1 in HCC (LIHC, n = 374), higher than those in the other tumors ([Fig. 1](#f0005){ref-type="fig"}d). Therefore, PI15 expression showed higher specificity in CCA, indicating utility as a diagnostic marker. 3.2. Increased expression of the secretory protein PI15 in CCA patients {#s0080} ----------------------------------------------------------------------- To further investigate the sensitivity and specificity of PI15 expression in CCA, we measured the expression of PI15 and tumor markers (AFP, CEA, CA125, PSA, and GH) in 10 pairs of CCA tumor tissues and matched tumor-adjacent normal tissues, 11 pairs of HCC tumor tissues and matched tumor-adjacent normal tissues, and 5 normal liver tissues. As shown in [Fig. 
2](#f0010){ref-type="fig"}a and b, the positive rate of PI15 expression was higher in CCA cases (70%, 7/10) than in HCC cases (9.1%, 1/11), and PI15 could not be detected in tumor-adjacent normal tissues or normal liver tissues. Additionally, the positive rates of tumor marker expression in CCA were lower than that of PI15, including AFP (40%, 4/10), CEA (50%, 5/10), and CA125 (20%, 2/10); GH and PSA could not be detected in CCA. Therefore, the sensitivity and specificity of PI15 expression were higher than those of other tumor markers in CCA. Moreover, we evaluated the relative expression levels of PI15 and tumor markers by qPCR in CCA and HCC, and found that only PI15 expression was higher in CCA tumor tissues compared with normal tissues (*p* \< .05, paired *t*-test); this was not the case in HCC, and no significant differences in the expression of other tumor markers were observed in CCA or HCC ([Supplementary Fig. 1](#ec0005){ref-type="supplementary-material"}a and b). Next, we focused on the positive rates of *PI15* and tumor marker expression in CCA (CHOL, n = 36) and HCC (LIHC, n = 374) ([Supplementary Fig. 1](#ec0005){ref-type="supplementary-material"}c), and observed that the positive rates in CCA (CHOL, n = 36) were 83.3% (*PI15*), 69.4% (*MUC16*), 69.4% (*CEACAM5*), 44.4% (*KLK3*), 11.1% (*GH1*), and 2.8% (*AFP*). In HCC (LIHC, n = 374), the positive rates were 41.7% (*PI15*), 36.4% (*AFP*), 5.4% (*MUC16*), 6.7% (*KLK3*), 1.3% (*GH1*), and 0.5% (*CEACAM5*). Thus, PI15 expression in CCA showed a higher positive rate than other tumor markers, and the positive rate of PI15 expression was much higher in CCA than in HCC. ROC curve analysis was then conducted to determine the diagnostic value of PI15 and tumor markers at the mRNA level in CCA (CHOL, n = 36) and HCC (LIHC, n = 374). 
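The AUC values from this ROC analysis have a simple nonparametric reading: the AUC equals the probability that a randomly chosen tumor sample scores higher than a randomly chosen normal sample (the Mann-Whitney statistic). A self-contained sketch with toy scores:

```python
def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a case outranks a control;
    ties count as half a win."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Toy scores: perfect separation gives AUC 1.0; overlap lowers it.
print(auc([3, 4, 5], [1, 2, 3]))  # 8.5/9 ≈ 0.944
```

This quadratic-time version is fine for small cohorts; production ROC tools use a rank-based formulation that scales to large sample sizes.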
We found that *PI15* showed an AUC of 0.981 (95% confidence interval \[CI\], 0.943 to 1.000) for discriminating CCA tumor tissue from normal tissue, which was superior to the diagnostic performance of the other tumor markers ([Fig. 2](#f0010){ref-type="fig"}c). In addition, *PI15* exhibited an AUC of 0.806 (95% CI, 0.760 to 0.853) for discriminating HCC tumor tissue from normal tissue, which was higher than that of *AFP* (AUC 0.705; 95% CI, 0.653 to 0.758) ([Fig. 2](#f0010){ref-type="fig"}d). Thus, PI15 expression was significantly upregulated in CCA, and the diagnostic sensitivity and specificity of PI15 at the mRNA level were superior to those of other tumor markers in CCA.Fig. 2Expression of secretory protein PI15 was significantly upregulated in CCA.(a) The expression of PI15 and tumor markers (AFP, CEA, CA125, PSA, and GH) was determined by reverse transcription PCR (RT-PCR) in CCA tumor tissues (n = 10) and matched normal tissues (n = 10). Each band represents a different patient sample. (b) The expression of PI15 and tumor markers (AFP, CEA, CA125, PSA, and GH) was determined by RT-PCR in normal liver tissues (n = 5), HCC tumor tissues (n = 11), and matched normal tissues (n = 11). Each band represents a different patient sample. (c-d) ROC curve analysis of the expression of PI15 and tumor markers (assessed by RNA-seq) for discriminating tumor tissue from normal tissue in CCA (CHOL, n = 36) and HCC (LIHC, n = 374). The genes that encode the tumor markers (AFP, CEA, CA125, PSA, and GH) were *AFP*, *CEACAM5*, *MUC16*, *KLK3*, and *GH1*, respectively. PI15--1 and PI15--2 were the two primers used to amplify *PI15*. CCA/CHOL, cholangiocarcinoma; HCC/LIHC, hepatocellular carcinoma; Tumor adjacent, matched normal tissue; Normal liver, normal liver tissue.Fig. 2 3.3. 
PI15 as a potential diagnostic blood marker for CCA patients {#s0085} ----------------------------------------------------------------- To determine the potential diagnostic value of PI15 in CCA, we examined the plasma PI15 level in CCA patients (n = 61), HCC patients (n = 72), benign liver disease patients (n = 28), CHB patients (n = 45), and healthy individuals (n = 45) using a quantitative ELISA. We found that the plasma PI15 concentration was elevated in CCA patients ([Fig. 3](#f0015){ref-type="fig"}a). Specifically, the plasma PI15 concentration was significantly increased in HBV negative CCA patients (60.64 ± 20.78 ng/ml) but not in HBV positive CCA patients (2.34 ± 0.39 ng/ml) ([Fig. 3](#f0015){ref-type="fig"}a). In addition, the mean plasma PI15 concentration was only 4.91 ± 0.50 ng/ml in HCC patients, which was significantly lower than in CCA patients (*p* \< .0001, unpaired *t*-test); the mean plasma PI15 concentration was 20.26 ± 9.13 ng/ml in benign liver disease patients, 10.81 ± 3.84 ng/ml in healthy individuals, and 1.83 ± 0.24 ng/ml in CHB patients ([Fig. 3](#f0015){ref-type="fig"}a).Fig. 3PI15 as a potential diagnostic marker for CCA.(a) Plasma PI15 levels were measured by quantitative ELISA in CCA (HBV-) patients (n = 51), CCA (HBV+) patients (n = 10), HCC patients (n = 72), benign liver disease patients (n = 28), CHB patients (n = 45), and healthy individuals (n = 45). (b) Plasma PI15 levels in iCCA (n = 26), pCCA (n = 12), and dCCA (n = 13) patients. (c) ROC curves for PI15 levels in plasma samples from CCA (HBV-) patients (n = 51) *versus* HCC patients (n = 72), benign liver disease patients (n = 28), CHB patients (n = 45), and healthy individuals (n = 45). HBV-, HBV negative; HBV+, HBV positive; Benign, benign liver disease; CHB, chronic hepatitis B; Normal, healthy individuals; iCCA, intrahepatic cholangiocarcinoma; pCCA, perihilar cholangiocarcinoma; dCCA, distal cholangiocarcinoma. 
Unpaired *t*-test; Data are presented as mean ± SEM.Fig. 3 We further compared the plasma PI15 level in different CCA subtypes categorized by anatomical location as intrahepatic cholangiocarcinoma (iCCA), perihilar cholangiocarcinoma (pCCA), and distal cholangiocarcinoma (dCCA). The mean plasma PI15 concentration was 78.5 ng/ml, 53.48 ng/ml, and 31.51 ng/ml in the iCCA (n = 26), pCCA (n = 12), and dCCA (n = 13) patients, respectively ([Fig. 3](#f0015){ref-type="fig"}b). The plasma PI15 level in iCCA patients was thus higher than in pCCA patients and dCCA patients, although the differences did not reach statistical significance (*p* = .6631 and *p* = .4441, respectively; unpaired *t*-test) ([Fig. 3](#f0015){ref-type="fig"}b). ROC curve analysis was performed to further illustrate the diagnostic value of plasma PI15 for CCA patients. PI15 exhibited an AUC of 0.735 (95% CI, 0.632 to 0.838) for CCA samples compared to HCC controls ([Fig. 3](#f0015){ref-type="fig"}c; [Table 1](#t0005){ref-type="table"}). Additionally, the AUC of PI15 was 0.678, 0.692, and 0.875 for discriminating CCA patients from benign liver disease patients, healthy individuals, and CHB patients, respectively ([Fig. 3](#f0015){ref-type="fig"}c; [Table 1](#t0005){ref-type="table"}). In conclusion, plasma PI15 was able to discriminate effectively between CCA cases and the control groups, suggesting considerable potential as a diagnostic marker.Table 1AUC calculations of ROC analysis for patients with CCA and iCCA *versus* HCC, benign liver disease, CHB, and healthy individuals.Table 1

| Comparison | n | PI15 AUC (95% CI) | CA19--9 AUC (95% CI) | PI15 + CA19--9 AUC (95% CI) |
|---|---|---|---|---|
| CCA *versus* HCC | 51/72 | 0.735 (0.632 to 0.838) | 0.875 (0.805 to 0.946) | 0.908 (0.846 to 0.97) |
| CCA *versus* Benign | 51/28 | 0.678 (0.555 to 0.8) | 0.737 (0.624 to 0.85) | 0.75 (0.642 to 0.858) |
| CCA *versus* CHB | 51/45 | 0.875 (0.793 to 0.957) | 0.888 (0.813 to 0.964) | 0.962 (0.915 to 1.000) |
| CCA *versus* Normal | 51/45 | 0.692 (0.58 to 0.804) | 0.881 (0.803 to 0.96) | 0.878 (0.799 to 0.958) |
| iCCA *versus* HCC | 26/72 | 0.75 (0.614 to 0.886) | 0.849 (0.735 to 0.963) | 0.921 (0.841 to 1.000) |
| iCCA *versus* Benign | 26/28 | 0.699 (0.558 to 0.84) | 0.712 (0.573 to 0.85) | 0.761 (0.634 to 0.888) |
| iCCA *versus* CHB | 26/45 | 0.899 (0.8 to 0.998) | 0.855 (0.729 to 0.98) | 0.963 (0.897 to 1.000) |
| iCCA *versus* Normal | 26/45 | 0.709 (0.564 to 0.855) | 0.854 (0.728 to 0.98) | 0.85 (0.724 to 0.977) |

3.4. Use of the PI15/CA19--9 marker panel improved diagnostic performance for CCA {#s0090} ------------------------------------------------------------------------------------- To investigate whether a combination of plasma PI15 and serum CA19--9 could constitute a diagnostic panel with higher discriminatory ability than either marker alone, we performed logistic regression to evaluate the diagnostic capacity of the combination of PI15 and CA19--9. A combination of PI15 and CA19--9 for CCA cases *versus* HCC controls yielded an AUC of 0.908 (95% CI, 0.846 to 0.97), outperforming either marker alone ([Fig. 4](#f0020){ref-type="fig"}a; [Table 1](#t0005){ref-type="table"}). In addition, the AUC of the two-marker panel for discriminating CCA from benign liver disease was 0.750 (95% CI, 0.642 to 0.858), which was higher than 0.678 (PI15 alone) and 0.737 (CA19--9 alone) ([Fig. 4](#f0020){ref-type="fig"}a; [Table 1](#t0005){ref-type="table"}). The PI15/CA19--9 panel was able to discriminate CCA cases from CHB patients with an AUC of 0.962 ([Fig. 4](#f0020){ref-type="fig"}a; [Table 1](#t0005){ref-type="table"}). Moreover, the PI15/CA19--9 panel for CCA *versus* healthy individuals exhibited an AUC of 0.878, close to 0.881, the AUC of CA19--9 alone ([Fig. 4](#f0020){ref-type="fig"}a; [Table 1](#t0005){ref-type="table"}). Thus, the PI15/CA19--9 panel was able to distinguish CCA from CHB patients and, compared with CA19--9 alone, helped to distinguish CCA from HCC and benign liver disease.Fig. 
4The combination of PI15 and CA19--9 improves diagnostic performance for CCA.(a) ROC curves for PI15, CA19--9, and PI15 + CA19--9 levels in CCA (HBV-) patients (n = 51) *versus* HCC patients (n = 72), benign liver disease patients (n = 28), CHB patients (n = 45), and healthy individuals (n = 45). (b) ROC curves for PI15, CA19--9, and PI15 + CA19--9 levels in iCCA (HBV-) patients (n = 26) *versus* HCC patients (n = 72), benign liver disease patients (n = 28), CHB patients (n = 45), and healthy individuals (n = 45). HBV-, HBV negative; CHB, chronic hepatitis B; Normal, healthy individuals; iCCA, intrahepatic cholangiocarcinoma.Fig. 4 In the clinic, iCCA usually presents as a hepatic mass whose imaging appearance is frequently similar to that of HCC with cirrhosis; thus, the differential diagnosis of HCC and iCCA can be difficult. In our present study, the plasma PI15 level in iCCA patients was the highest among the CCA subtypes tested, suggesting better diagnostic value for iCCA. Therefore, we further investigated the diagnostic performance of PI15 alone and of the PI15/CA19--9 panel for iCCA. PI15 exhibited an AUC of 0.750 (95% CI, 0.614 to 0.886) for iCCA samples compared to HCC controls ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). In the same sample set, CA19--9 had an AUC of 0.849 for iCCA samples compared to HCC controls ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). Furthermore, the PI15/CA19--9 panel displayed an AUC of 0.921 (95% CI, 0.841 to 1.000), indicating the superiority of the two-marker panel ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). When considering iCCA samples *versus* benign liver disease, the AUC increased from 0.712 (CA19--9 alone) to 0.761 (CA19--9 with PI15) ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). 
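The two-marker panel is built by logistic regression: one coefficient is learned per marker, and the model's linear predictor serves as the combined score on which the ROC analysis is run. A minimal gradient-ascent sketch; the marker values below are toy numbers, not patient data, and the study itself used SPSS rather than hand-rolled code:

```python
import math

def fit_logistic(samples, labels, lr=0.5, epochs=3000):
    """Fit p(case) = sigmoid(w0 + w1*marker1 + w2*marker2) by plain
    gradient ascent on the log-likelihood."""
    w = [0.0, 0.0, 0.0]  # intercept, one coefficient per marker
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
            err = y - p  # gradient factor of the log-likelihood
            w[0] += lr * err
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

def combined_score(w, x1, x2):
    """Linear predictor used as the two-marker panel score for ROC analysis."""
    return w[0] + w[1] * x1 + w[2] * x2

# Toy data: cases (label 1) have high values of both markers.
data = [(0.1, 0.3), (0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.7), (0.7, 0.8)]
labels = [0, 0, 0, 1, 1, 1]
w = fit_logistic(data, labels)
print(combined_score(w, 0.9, 0.9) > combined_score(w, 0.1, 0.1))  # True
```

Because the sigmoid is monotone in the linear predictor, ranking samples by `combined_score` and ranking them by predicted probability give the same ROC curve.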
The PI15/CA19--9 panel yielded an AUC of 0.963 for discriminating iCCA samples from CHB controls ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). However, the combination of PI15 and CA19--9 did not improve the discrimination between iCCA samples and healthy individuals ([Fig. 4](#f0020){ref-type="fig"}b; [Table 1](#t0005){ref-type="table"}). In conclusion, our results indicated that the PI15/CA19--9 marker panel exhibited better performance in diagnosing iCCA patients. 3.5. Establishing a cutoff concentration for plasma PI15 in iCCA patients {#s0095} ------------------------------------------------------------------------- To determine a plasma PI15 concentration that could act as a diagnostic cut-off value for distinguishing iCCA from HCC, we first analyzed the concentration distribution of plasma PI15 in HCC patients to obtain cut-off values corresponding to false-positive rates (FPRs) of 0%, 3%, and 5%. These cut-off values were then evaluated for their sensitivity and specificity in diagnosing iCCA patients. As seen in [Table 2](#t0010){ref-type="table"}, plasma PI15 detected approximately 57.7% of iCCA patients (sensitivity) with 94.4% specificity when the cut-off value was set to 13 ng/ml. 
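The cut-off construction described above can be sketched directly: take a specificity quantile of the HCC control distribution as the threshold (FPR = 1 - specificity), then read off sensitivity and specificity against it. The control values below are simulated, and the nearest-rank percentile convention is an assumption; statistical packages interpolate quantiles differently:

```python
import math

def cutoff_at_specificity(control_values, specificity):
    """Cut-off = nearest-rank specificity-quantile of the control values, so
    roughly (1 - specificity) of controls fall above it (the FPR)."""
    ranked = sorted(control_values)
    k = max(0, math.ceil(specificity * len(ranked)) - 1)
    return ranked[k]

def sensitivity_specificity(case_values, control_values, cutoff):
    """Call a sample positive when its value exceeds the cut-off."""
    sens = sum(v > cutoff for v in case_values) / len(case_values)
    spec = sum(v <= cutoff for v in control_values) / len(control_values)
    return sens, spec

# Simulated control distribution (values 1..100) and a few case values.
controls = list(range(1, 101))
cut = cutoff_at_specificity(controls, 0.95)   # 95th-percentile control value
print(cut)                                     # 95
print(sensitivity_specificity([96, 97, 50, 98], controls, cut))  # (0.75, 0.95)
```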
Furthermore, when we used CA19--9 \> 98.5 U/ml together with a plasma PI15 cut-off value of 13 ng/ml, the two-marker panel yielded 84.62% sensitivity and 94.44% specificity ([Table 2](#t0010){ref-type="table"}).Table 2PI15 concentration cut-off values for iCCA and CCA based on percentiles of distribution in HCC plasma controls.Table 2

| Marker | Cutoff | iCCA *vs* HCC sensitivity | iCCA *vs* HCC specificity | CCA *vs* HCC sensitivity | CCA *vs* HCC specificity |
|---|---|---|---|---|---|
| CA19-9 (\>98.5) | -- | 69.20 | 98.60 | 66.70 | 98.60 |
| PI15 (ng/ml), 95% | 11 | 57.70 | 93.10 | 54.90 | 93.10 |
| PI15 (ng/ml), 97% | 13 | 57.70 | 94.40 | 54.90 | 95.80 |
| PI15 (ng/ml), 100% | 25.2 | 38.50 | 98.60 | 33.30 | 98.60 |
| CA19-9 (\>98.5) and PI15, 95% | 11 | 84.62 | 93.06 | 80.39 | 93.06 |
| CA19-9 (\>98.5) and PI15, 97% | 13 | 84.62 | 94.44 | 80.39 | 94.44 |
| CA19-9 (\>98.5) and PI15, 100% | 25.2 | 80.77 | 98.61 | 74.51 | 98.61 |

3.6. Evaluation of the postoperative recovery of CCA patients using plasma PI15 level {#s0100} ------------------------------------------------------------------------------------- Because our results suggested that plasma PI15 could be used as a diagnostic marker for CCA patients, we further investigated whether the plasma PI15 level could be used to evaluate the postoperative recovery of CCA patients. We measured the pre- and postoperative plasma PI15 levels in CCA patients (n = 27), HCC patients (n = 30), and benign liver disease patients (n = 20); meanwhile, we also measured the pre- and postoperative serum CA19--9 levels in CCA patients (n = 12). All of these patients underwent curative hepatectomy. We observed that PI15 and CA19--9 were significantly reduced after surgery in CCA patients ([Fig. 5](#f0025){ref-type="fig"}a). The pre-operative mean plasma PI15 concentration in CCA patients reached 80.42 ng/ml. However, on the 4th and 7th days after surgery, the mean plasma PI15 concentration decreased to 53.17 ng/ml and 59.75 ng/ml, respectively ([Fig. 5](#f0025){ref-type="fig"}a). In the pre-operative plasma of HCC patients, the mean PI15 concentration was only 3.98 ng/ml, and the mean concentration was 4.12 ng/ml and 4.05 ng/ml on the postoperative 4th and 7th days, respectively ([Fig. 5](#f0025){ref-type="fig"}b). 
For benign liver disease patients, the pre- and postoperative mean plasma PI15 concentrations were 27.02 ng/ml and 26.79 ng/ml, respectively ([Fig. 5](#f0025){ref-type="fig"}c). Thus, the plasma PI15 level was markedly reduced after surgery in CCA patients ([Fig. 5](#f0025){ref-type="fig"}a), whereas there was no significant change in HCC and benign liver disease patients ([Fig. 5](#f0025){ref-type="fig"}b and c). The dynamic change of the plasma PI15 level in CCA patients confirmed that the elevated plasma PI15 concentration originated from the CCA tumor, demonstrating outstanding potential diagnostic and follow-up value for CCA patients. Consequently, the detection of plasma PI15 can be used to evaluate the outcomes of surgical treatment, and PI15 could serve as a potential follow-up marker for CCA patients after surgery.Fig. 5Determination of plasma PI15 level for the prediction of postoperative recovery in CCA patients.(a) The levels of plasma PI15 and serum CA19--9 in preoperative (day "-1") and postoperative (days "4" and "7") CCA patients (n = 27) were measured by quantitative ELISA and electrochemiluminescence, respectively. The PI15 and CA19--9 concentrations of representative CCA cases are shown. (b) Plasma PI15 levels in preoperative (day "-1") and postoperative (days "4" and "7") HCC patients (n = 30) were measured by quantitative ELISA. The PI15 concentrations of representative HCC cases are shown. (c) Plasma PI15 levels in preoperative (day "-1") and postoperative (days "4" and "7") benign liver disease patients (n = 20) were measured by quantitative ELISA. The PI15 concentrations of representative benign liver disease cases are shown. The abscissa is the number of days after surgery (postoperative days), and "-1" refers to the preoperative day. Before, before surgery; After, after surgery. Paired *t*-test; Data are presented as mean ± SEM.Fig. 5 4. 
Discussion {#s0105} ============= In this study, our results indicated that PI15 was highly expressed in CCA tumor tissues and could not be detected in normal liver tissues. PI15 was also increased in the plasma of CCA patients, demonstrating that PI15 represents a potential diagnostic marker for CCA. A differentially methylated CpG of *PI15* was previously found to be a potential novel prognostic marker capable of distinguishing prostate cancer patients with metastatic-lethal tumors from those with nonrecurrent tumors \[[@bb0135]\]. The *PI15* gene has also been identified as a candidate oncogene in colorectal cancer \[[@bb0140]\]. However, no previous study has investigated the expression of PI15 in CCA. The human *PI15* gene is situated on chromosome 8q21.11, adjacent to Cysteine Rich Secretory Protein LCCL Domain Containing 1 (*CRISPLD1*), another mammalian CAP superfamily gene. Most CAP superfamily proteins are structurally conserved, so members with a CAP domain exhibit similar fundamental functions. CAP superfamily proteins are frequently secreted, with an extracellular endocrine or paracrine function. N-terminal sequencing of PI15 derived from the serum-free conditioned medium of glioblastoma cells indicated that the predicted secretory signal peptide was active \[[@bb0075]\], suggesting that the secretory protein PI15 has the potential to be detected in peripheral blood. In our present study, plasma PI15 levels in CCA patients were significantly higher than in patients with HCC, benign liver disease, or CHB, or in healthy individuals, indicating that PI15 could be regarded as a diagnostic marker for CCA patients. Importantly, the combination of PI15 and CA19--9 exhibited superior diagnostic performance. 
Moreover, the PI15 concentration in postoperative plasma was significantly decreased compared with that in preoperative plasma in CCA patients, confirming that the elevated plasma PI15 concentration was related to CCA tumor tissue; this may help clinicians judge whether the tumor tissue has been completely removed. In contrast, there was no significant change in the plasma PI15 levels of benign liver disease patients, ruling out surgery, health care interventions, and other factors as contributors to the change in plasma PI15 in CCA patients. Meanwhile, PI15 and CA19--9 showed the same decreasing trend after surgery in CCA patients, which further illustrates the potential diagnostic value of PI15. Thus, we propose that plasma PI15 can be used to evaluate the outcomes of surgical treatment. Further work is needed to investigate how PI15 is released into the plasma of CCA patients. Long-term follow-up data collection and multi-center studies are also necessary to further validate the diagnostic and follow-up value of PI15 for CCA. Moreover, we will determine plasma PI15 levels at different stages of CCA to further investigate its early diagnostic value, thereby potentially enhancing its value for clinical application. Additionally, although the combination of plasma PI15 and serum CA19--9 improved the diagnostic performance for CCA, the diagnostic performance of plasma PI15 needs to be further explored in serum CA19--9 positive patients with liver diseases including cholangitis and duct obstruction \[[@bb0145]\], and in serum CA19--9 negative CCA patients. In conclusion, PI15 has the potential to act as a novel blood diagnostic marker for CCA, and could also be used as an indicator to evaluate postoperative recovery in CCA patients. The combination of plasma PI15 and serum CA19--9 improves the diagnostic performance for CCA. The following are the supplementary data related to this article.Supplementary Fig. 
1Evaluation of the expression of PI15 and other tumor markers in CCA and HCC. (a-b) The relative expression levels of PI15 and other tumor markers assessed by real-time qPCR in CCA and HCC; (c) The positive rates of PI15 and other tumor marker expression in CCA (CHOL, n = 36) and HCC (LIHC, n = 374). The genes that encode the tumor markers (AFP, CEA, CA125, PSA, and GH) were *AFP*, *CEACAM5*, *MUC16*, *KLK3*, and *GH1*, respectively. Red columns indicate positive expression of PI15; blue columns indicate negative expression of PI15. CCA/CHOL, cholangiocarcinoma; HCC/LIHC, hepatocellular carcinoma; Normal, normal tissue; Paired *t*-test; ns, nonsignificant; \*, *p* \< .05.Supplementary Fig. 1Supplementary Table 1-5Image 1 Funding sources {#s0110} =============== This work was supported by the Natural Science Research Foundation of Anhui Province (1508085MH173, KJ2015A137) and by the Natural Science Foundation of China (81602491). Declaration of interests {#s0115} ======================== The authors declare no potential conflicts of interest. Authors\' contributions {#s0120} ======================= Conception and design: Yong Jiang, Xiaohu Zheng, Yeben Qian, Haiming Wei. Acquisition of data: Yong Jiang, Xiaohu Zheng, Defeng Jiao, Peng Chen. Data analysis and interpretation: Yong Jiang, Xiaohu Zheng, Defeng Jiao, Yechuan Xu. Manuscript writing: Yong Jiang, Xiaohu Zheng, Yeben Qian, Haiming Wei. Final approval of manuscript: All authors. We thank Mr. Wang Dongyao, Mr. Chen He and Ms. Ren Chunxia for their assistance in collecting the specimens involved in this study. [^1]: These authors contributed equally to this work.
The new insights from DPP-4 inhibitors: their potential immune modulatory function in autoimmune diabetes. Dipeptidyl peptidase-4 (DPP-4) inhibitors are a new class of anti-diabetic agents that are widely used in clinical practice to improve glycemic control and protect β-cell function in patients with type 2 diabetes. DPP-4 is also known as lymphocyte cell surface protein CD26 and plays an important role in T-cell immunity. Autoimmune diabetes, a T-cell mediated organ-specific disease, is initiated by the imbalance between pathogenic and regulatory T-lymphocytes. DPP-4 inhibitors can suppress pathogenic effects of Th1 and Th17 cells and up-regulate Th2 cells and regulatory T cells, which play a critical role in ameliorating autoimmune diabetes. This provides a basis for the potential use of DPP-4 inhibitors in the treatment of autoimmune diabetes. Recent studies suggest that DPP-4 inhibitors improve β-cell function and attenuate autoimmunity in type 1 diabetic mouse models. However, there are few clinical studies on the treatment of autoimmune diabetes with DPP-4 inhibitors. Further studies are warranted to confirm the therapeutic effects of DPP-4 inhibitors on autoimmune diabetes in humans.
In the next few years, citizens will be able to text photos and videos to public-safety teams. Instant communication with the public will likely make emergency response faster. But response teams also need to prepare for new challenges, including the risk they’ll receive too much content, or inaccurate tips. There are several separate efforts to upgrade emergency communication including the ongoing development of FirstNet, a public safety broadband network, and Next Generation 911, the internet protocol that would let the public send content to emergency responders through the 911 network. But how might the influx of content change public safety officers’ jobs? The impact on training could be significant, Homeland Security Department Director of the Office of Emergency Communications Ronald Hewitt said at Tuesday's APCO Public Broadband Safety Summit 2016 in Washington. When call centers start receiving information from the public, and crowdsourcing information, “It’s going from [being] a communications officer to really being a CIO,” he said. For instance, they’ll need to ensure data collected is passed along to the first responders who need it. They’ll also likely need to prepare to see some upsetting images, he said. “It’s one thing just taking that call of a citizen in distress … but what is it when you suddenly now get a picture of ... a severed arm?” Hewitt said. In the past, a deluge of phone calls might clog phone lines—but soon, it could “be actually just every citizen with a cellphone camera sending that picture in, and they could actually take down your network,” Hewitt said. “Bad actors” could also find new ways to simulate disasters to distract public safety officers, he added. Hewitt's office at DHS is meeting with public safety officials “so they understand the magnitude" of the challenges associated with upgrading communication. 
"Even though it provides significantly more capability, it also provides more vulnerability," he said.
Impact of mental operation instructions. Experiments were conducted to test the impact of embedding mental action verbs within instructions. Experiment 1 examined the instructional effects of these verbs on response time to a visual stimulus. Significant response time differences resulted from instructing participants to engage in different mental actions. Using Multidimensional Scaling, Experiment 2 explored how people understand the relationships amongst mental action verbs, resulting in a single "level of processing" dimension. Experiment 3 was designed to further explore the relationship of these verbs to cognition and behaviour. Signal detection analysis was used to determine if participants were shifting their criterion depending on the level of processing suggested in the instruction. Results showed an effect of instruction on response time, but not on criterion, sensitivity, or accuracy. Response time effects were found that were consistent with differences in word characteristics, including meaning.
New Cellphone App Helps Illegals Find and Win U.S. College Scholarships Alex Wong/Getty Images | 18 Apr 2016 A new app inspired by an illegal immigrant who couldn’t afford college tuition in the United States is intended to help other illegals find and win at least 10,000 college scholarships per year. Sarahi Espinoza Salamanca – who was brought to California illegally from Mexico by her parents when she was four years old – has developed a “scholarship network” in the form of an app called the DREAMer’s Roadmap. “It is going to be the roadmap to the road of the journey that we lead every day of uncertainty,” said Salamanca, who is now 26 and lives in East Palo Alto. “This would be their guide to college. It will give them hope.” While Salamanca qualified for in-state tuition in 2008 under AB540 in California, she was not permitted to work to help pay for college tuition. Four years later, President Barack Obama created the Deferred Action for Childhood Arrivals (DACA) immigration policy that offers illegal immigrants a renewable work permit. DREAMer’s Roadmap gathers scholarships provided by various organizations and will launch this month for iOS and Android, KQED reports. At its launch, users will be able to access the 500 scholarships on the app’s database, and share information via text, email, or social media. Salamanca said DREAMer’s Roadmap will be free to use. “We didn’t want to risk charging for the app and depriving people from obtaining a tool that can potentially help them get money and go to college,” she said. Salamanca’s start-up plans for the app were launched with $100,000 she won from the 2015 Voto Latino Innovators Challenge, whose prizes were awarded to five Latinos in the United States with the best ideas in science, technology, engineering and math. 
Salamanca – who now has her green card and plans to attend a four-year college this year – began to work full time for DREAMer’s Roadmap after obtaining her associate’s degree from Cañada College in Redwood City and winning the competition. She used the initial $100,000 to develop the Android version and then received an additional $25,000 from an anonymous donor to design the iOS version. “The toughest part has been finding additional funding,” Salamanca said. “A lot of foundations have trouble seeing the impact that we’re going to have in the community.” Salamanca was named one of Forbes 30 Under 30: Education in 2016; a Champions of Change recipient by the Obama administration in 2014; and also was a participant in the DREAMer Hackathon hosted by Mark Zuckerberg in 2013. Salamanca and her colleagues have begun promoting the new app through partnerships with schools that cater to illegal immigrants. “It’s much more in tune with what teenagers are doing right now, and that’s doing everything on their phone, including writing essays,” said Jane Slater, the adviser for the Sequoia High School Dream Club. Alicia Carmen Aguirre, who is on the board of directors of DREAMer’s Roadmap and a professor at Cañada College, said illegal immigrants are hesitant to speak up to get help for college tuition. “If students get to this app and find opportunities to go to college, it will make a difference in their lives and it will open up opportunities,” she said. For years undocumented students have struggled more than permanent residents or citizen students in everyday situations. But when senior year of high school comes around, it becomes one of the hardest years for an undocumented student. This is the year when most students find out that they don’t qualify for FAFSA and the majority of scholarships: first, because they don’t have a Social Security number, and second, because they are not “legal” permanent residents or citizens of the United States. 
Many students by this point are discouraged and don’t believe that going to college is a possibility. That is why we are very proud to provide the DREAMer’s Roadmap App for students across the country. Our tool will equip DREAMers to realize their path to college. Once launched, app users will be invited to simply explore the database or create an account. Salamanca says she wanted to give illegal students that option in case some didn’t feel comfortable giving their personal information. In addition, DREAMer’s Roadmap allows users to filter the database according to whether or not they have qualified for DACA. Partners of DREAMer’s Roadmap are: University of California Berkeley Undocumented Student Program; Dream Club DC; Latinos and Tech Initiative; and Undocumedia.
/*
 * Copyright (C) 2015, Broadcom Corporation. All Rights Reserved.
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 *
 * $Id: time.c,v 1.9 2009-07-17 06:23:12 $
 */

#include <linux/version.h>
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,36)
#include <linux/config.h>
#endif
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/serial_reg.h>
#include <linux/interrupt.h>
#include <asm/addrspace.h>
#include <asm/io.h>
#include <asm/time.h>

#include <typedefs.h>
#include <osl.h>
#include <bcmutils.h>
#include <bcmnvram.h>
#include <hndsoc.h>
#include <sbchipc.h>
#include <siutils.h>
#include <hndmips.h>
#include <mipsinc.h>
#include <hndcpu.h>
#include <bcmdevs.h>

/* Global SB handle */
extern si_t *bcm947xx_sih;
extern spinlock_t bcm947xx_sih_lock;

/* Convenience */
#define sih bcm947xx_sih
#define sih_lock bcm947xx_sih_lock

#define WATCHDOG_MIN 3000 /* milliseconds */

extern int panic_timeout;
extern int panic_on_oops;
static int watchdog = 0;
#ifndef CONFIG_HWSIM
static u8 *mcr = NULL;
#endif /* CONFIG_HWSIM */

static void __init
bcm947xx_time_init(void)
{
	unsigned int hz;
	char cn[8];

	/*
	 * Use deterministic values for initial counter interrupt
	 * so that calibrate delay avoids encountering a counter wrap.
	 */
	write_c0_count(0);
	write_c0_compare(0xffff);

	if (!(hz = si_cpu_clock(sih)))
		hz = 100000000;

	bcm_chipname(sih->chip, cn, 8);
	printk(KERN_INFO "CPU: BCM%s rev %d at %d MHz\n", cn, sih->chiprev,
	       (hz + 500000) / 1000000);

	/* Set MIPS counter frequency for fixed_rate_gettimeoffset() */
	mips_hpt_frequency = hz / 2;

	/* Set watchdog interval in ms */
	watchdog = simple_strtoul(nvram_safe_get("watchdog"), NULL, 0);

	/* Ensure at least WATCHDOG_MIN */
	if ((watchdog > 0) && (watchdog < WATCHDOG_MIN))
		watchdog = WATCHDOG_MIN;

	/* Set panic timeout in seconds */
	panic_timeout = watchdog / 1000;
	panic_on_oops = watchdog / 1000;
}

#ifdef CONFIG_HND_BMIPS3300_PROF
extern bool hndprofiling;
#ifdef CONFIG_MIPS64
typedef u_int64_t sbprof_pc;
#else
typedef u_int32_t sbprof_pc;
#endif
extern void sbprof_cpu_intr(sbprof_pc restartpc);
#endif /* CONFIG_HND_BMIPS3300_PROF */

static irqreturn_t
bcm947xx_timer_interrupt(int irq, void *dev_id)
{
#ifdef CONFIG_HND_BMIPS3300_PROF
	/*
	 * Are there any ExcCode or other mean(s) to determine what has caused
	 * the timer interrupt? For now simply stop the normal timer proc if
	 * count register is less than compare register.
	 */
	if (hndprofiling) {
		sbprof_cpu_intr(read_c0_epc() +
		                ((read_c0_cause() >> (CAUSEB_BD - 2)) & 4));
		if (read_c0_count() < read_c0_compare())
			return (IRQ_HANDLED);
	}
#endif /* CONFIG_HND_BMIPS3300_PROF */

#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,36)
	/* Generic MIPS timer code */
	timer_interrupt(irq, dev_id);
#else /* 2.6.36 and up */
	{
		/* There is no more a MIPS generic timer ISR */
		struct clock_event_device *cd = dev_id;
		BUG_ON(!cd);
		cd->event_handler(cd);
		/* Clear Count/Compare Interrupt */
		write_c0_compare(read_c0_count() + mips_hpt_frequency / HZ);
	}
#endif

	/* Set the watchdog timer to reset after the specified number of ms */
	if (watchdog > 0)
		si_watchdog_ms(sih, watchdog);

#ifdef CONFIG_HWSIM
	(*((int *)0xa0000f1c))++;
#else
	/* Blink one of the LEDs in the external UART */
	if (mcr && !(jiffies % (HZ/2)))
		writeb(readb(mcr) ^ UART_MCR_OUT2, mcr);
#endif

	return (IRQ_HANDLED);
}

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,36)
static void
bcm947xx_clockevent_set_mode(enum clock_event_mode mode,
                             struct clock_event_device *cd)
{
	printk(KERN_CRIT "bcm947xx_clockevent_set_mode: %d\n", mode);
	/* Need to add mode switch to support both periodic and one-shot
	 * operation here */
}

#ifdef BRCM_TIMER_ONESHOT
/* This is used in one-shot operation mode */
static int
bcm947xx_clockevent_set_next(unsigned long delta, struct clock_event_device *cd)
{
	unsigned int cnt;
	int res;

	printk(KERN_CRIT "bcm947xx_clockevent_set_next: %#lx\n", delta);

	cnt = read_c0_count();
	cnt += delta;
	write_c0_compare(cnt);
	res = ((int)(read_c0_count() - cnt) >= 0) ? -ETIME : 0;
	return res;
}
#endif

struct clock_event_device bcm947xx_clockevent = {
	.name = "bcm947xx",
	.features = CLOCK_EVT_FEAT_PERIODIC,
	.rating = 300,
	.irq = 7,
	.set_mode = bcm947xx_clockevent_set_mode,
#ifdef BRCM_TIMER_ONESHOT
	.set_next_event = bcm947xx_clockevent_set_next,
#endif
};
#endif

/* named initialization should work on earlier 2.6 too */
static struct irqaction bcm947xx_timer_irqaction = {
	.handler = bcm947xx_timer_interrupt,
	.flags = IRQF_DISABLED | IRQF_TIMER,
	.name = "bcm947xx timer",
	.dev_id = &bcm947xx_clockevent,
};

#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,36)
void __init
plat_timer_setup(struct irqaction *irq)
{
	/* Enable the timer interrupt */
	setup_irq(7, &bcm947xx_timer_irqaction);
}
#else
void __init
plat_time_init(void)
{
	struct clock_event_device *cd = &bcm947xx_clockevent;
	const u8 irq = 7;

	/* Initialize the timer */
	bcm947xx_time_init();

	cd->cpumask = cpumask_of(smp_processor_id());
	clockevent_set_clock(cd, mips_hpt_frequency);
#ifdef BRCM_TIMER_ONESHOT
	/* Calculate the min / max delta */
	cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
	cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
#endif
	clockevents_register_device(cd);

	/* Enable the timer interrupt */
	setup_irq(irq, &bcm947xx_timer_irqaction);
}
#endif
Q: Download Manager not working

I'm trying to develop an app that shows videos and lets you download them. I'm using the DownloadManager class, but it doesn't work, and it doesn't give me any error :( This is my download manager code:

    public void downloadFileFromUrl(String url, String fileName) {
        String filePath = Environment.getExternalStorageDirectory() + File.separator + "BlueNet";
        File folder = new File(filePath);
        if (!folder.exists()) {
            folder.mkdirs();
        }
        try {
            Uri downloadUri = Uri.parse(url);
            DownloadManager.Request request = new DownloadManager.Request(downloadUri);
            request.setAllowedNetworkTypes(DownloadManager.Request.NETWORK_WIFI);
            request.allowScanningByMediaScanner();
            request.setDestinationInExternalPublicDir("/BlueNet/", fileName);
            request.setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE_NOTIFY_COMPLETED);
            request.setVisibleInDownloadsUi(true);
            DownloadManager downloadManager = (DownloadManager) getApplicationContext().getSystemService(DOWNLOAD_SERVICE);
            long id = downloadManager.enqueue(request);
            Toast.makeText(this, fileName, Toast.LENGTH_LONG).show();
            Toast.makeText(this, filePath, Toast.LENGTH_LONG).show();
        } catch (Exception ex) {
            Toast.makeText(this, ex.toString(), Toast.LENGTH_LONG).show();
        }
    }

and this is how I'm calling it:

    downloadFileFromUrl(path, fileName);

where:

    path: "192.168.1.5:8080/BlueNet_NMC/blue_elephant.mp4"
    fileName: "blue_elephant.mp4"

and I have already added these permissions to the manifest:

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

So please, any help?

A: As I said in the comments, DownloadManager only handles requests starting with http:// or https://, as you can see in the docs. 
I don't know exactly what the problem is, because I lack information about your server, but this is a common issue: don't pass a bare IP address and path — include the http:// (or https://) scheme at the start of the URL you hand to DownloadManager.
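To make the fix concrete, here is a minimal sketch; the helper name `ensureHttpScheme` is ours, not part of the DownloadManager API. It normalizes the URL before you call `Uri.parse(url)` inside `downloadFileFromUrl`:

```java
public class UrlFix {
    // DownloadManager only accepts http:// and https:// URIs, so prepend a
    // scheme when the caller passes a bare host/path like the one in the question.
    static String ensureHttpScheme(String url) {
        if (url.startsWith("http://") || url.startsWith("https://")) {
            return url;
        }
        return "http://" + url;
    }

    public static void main(String[] args) {
        // Prints "http://192.168.1.5:8080/BlueNet_NMC/blue_elephant.mp4"
        System.out.println(ensureHttpScheme("192.168.1.5:8080/BlueNet_NMC/blue_elephant.mp4"));
    }
}
```

With the scheme present, `enqueue()` should accept the request instead of silently rejecting the URI.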
Sen. Lindsey Graham (R-S.C.) on Wednesday shot back at late-night host Jimmy Kimmel for his "unfair" criticism of Sen. Bill Cassidy (R-La.) and his involvement in the new GOP health-care bill, saying Kimmel likely read a "liberal talking point" before hastily attacking the lawmaker. "I bet you he never called Sen. Cassidy and said 'would you please set this straight?' I bet he looked at some liberal talking point, bought it hook, line and sinker, and went after Bill Cassidy without talking to him, and I think that's unfair,” Graham said on "FOX & Friends." Graham said that their health-care bill will cover pre-existing conditions, while adding that he sympathized with Kimmel, whose child had been diagnosed with a heart condition shortly after birth. "I understand the emotional nature of having a sick child, and we're all grateful your child is doing well. Bill Cassidy is a doctor who worked in a nonprofit hospital serving the underprivileged. Factually, our bill requires pre-existing illnesses to be covered in the block grant," Graham said in part, adding that the bill would allow "50 states to come up with solutions to help sick people, not just some bureaucrat in Washington." Kimmel took aim at Cassidy on Tuesday, saying the lawmaker lied when he promised an affordable health-care bill that would pass the “Jimmy Kimmel test.” The comedian accused the Louisiana senator of failing to fulfill the promises he had made following Kimmel's emotional appeal in May to keep ObamaCare in place after his son's heart problem had been detected. 
The measure, put forward by Cassidy, Graham and other Republican lawmakers, aims to give more power to states by converting ObamaCare funding for subsidies — which help people afford health-care coverage and pay for Medicaid expansion — into block grants to states. Cassidy similarly insisted Wednesday that Kimmel does not fully understand the protections included in the Graham-Cassidy bill. "I'm sorry he does not understand," Cassidy said Wednesday on CNN's "New Day," adding the bill protects "those with pre-existing conditions."
7871 What is the next term in 40533304, 40533306, 40533308, 40533310, 40533312, 40533314? 40533316 What comes next: 33744, 134934, 303592, 539724, 843336, 1214434, 1653024? 2159112 What is the next term in 36819, 36772, 36699, 36600, 36475? 36324 What is the next term in -370597, -741216, -1111835, -1482454? -1853073 What is next in -288176, -288242, -288332, -288458, -288632, -288866, -289172, -289562? -290048 What comes next: 17142, 17152, 17160, 17166, 17170? 17172 What comes next: 21316, 22047, 24160, 28345, 35292, 45691? 60232 What comes next: 113709, 113290, 112871? 112452 What comes next: -1376, -5711, -13012, -23285, -36536, -52771, -71996, -94217? -119440 What is next in -33400, -67892, -102396, -136918, -171464, -206040, -240652? -275306 What is next in -668195, -668213, -668231, -668249? -668267 What comes next: 317744, 317666, 317602, 317552? 317516 What is the next term in -264353, -528653, -792953, -1057253? -1321553 What is next in -14750, -30069, -45390, -60713? -76038 What is next in -1523, -7044, -16257, -29162, -45759, -66048? -90029 What is next in -114, -443, -1120, -2241, -3902, -6199? -9228 What is next in -33690727, -67381436, -101072145, -134762854? -168453563 What comes next: -3450, -13369, -29742, -52551, -81778, -117405, -159414? -207787 What is the next term in -66241449, -66241450, -66241451, -66241452, -66241453, -66241454? -66241455 What is the next term in 950, 710, 364, -88, -646, -1310? -2080 What is next in -621, -198, 455, 1452, 2907, 4934? 7647 What is next in -10112, -12209, -14306? -16403 What is next in -90, -604, -1556, -2946, -4774, -7040, -9744? -12886 What is the next term in -12330653, -12330655, -12330657, -12330659, -12330661, -12330663? -12330665 What comes next: 13409, 53776, 121031, 215180, 336229, 484184? 659051 What is next in 39647, 158523, 356633, 633977? 990555 What comes next: 1497683, 2995200, 4492717? 5990234 What is next in 72761, 72236, 71359, 70130, 68549, 66616? 
64331 What is next in -48101, -96120, -144139, -192158, -240177, -288196? -336215 What is next in -701, -5686, -13993, -25622? -40573 What is the next term in 15614, 20878, 26144, 31412, 36682, 41954, 47228? 52504 What is the next term in -563981, -563918, -563755, -563444, -562937, -562186, -561143, -559760? -557989 What is the next term in -282422, -282502, -282582? -282662 What comes next: 51351646, 51351647, 51351648? 51351649 What is next in -310909, -621504, -932099, -1242694? -1553289 What is the next term in -127179, -126168, -123419, -118062, -109227? -96044 What comes next: -5330, -21609, -48744, -86735, -135582, -195285? -265844 What is the next term in -62331, -62344, -62363, -62394, -62443, -62516, -62619, -62758? -62939 What is the next term in 2346, 7694, 16610, 29100, 45170, 64826, 88074? 114920 What is the next term in 636, 668, 684, 696, 716, 756, 828, 944? 1116 What is the next term in 8826, 8824, 8818, 8808, 8794, 8776? 8754 What comes next: 8410460, 16820916, 25231374, 33641834, 42052296, 50462760, 58873226? 67283694 What is next in 1596758, 3193313, 4789868? 6386423 What is the next term in 14682, 17762, 22898, 30090, 39338? 50642 What comes next: 619662, 619682, 619728, 619812, 619946, 620142, 620412? 620768 What is next in 31541593, 31541580, 31541559, 31541530, 31541493? 31541448 What is the next term in -1701308, -1701338, -1701366, -1701386, -1701392, -1701378, -1701338, -1701266? -1701156 What comes next: -4883, -4669, -4473, -4295? -4135 What comes next: 152697, 305403, 458109, 610815, 763521? 916227 What comes next: -67058, -268099, -603166, -1072259? -1675378 What is the next term in -9970, -9843, -9710, -9571, -9426? -9275 What is the next term in 539992, 539980, 539968, 539956, 539944, 539932? 539920 What is the next term in -16904, -68371, -154148, -274235, -428632? -617339 What is next in 596174, 595599, 595028, 594461, 593898? 593339 What is next in -36449206, -36449207, -36449208, -36449209, -36449210? 
-36449211 What comes next: 1167, 1369, 1545, 1689, 1795, 1857, 1869, 1825? 1719 What is the next term in 2616372, 10465449, 23547244, 41861757? 65408988 What is the next term in -16258, -37589, -58920, -80251? -101582 What is the next term in -128, -382, -1048, -2336, -4456? -7618 What is next in 2540832, 10163326, 22867484, 40653306, 63520792, 91469942, 124500756? 162613234 What comes next: 10381124, 20762247, 31143370, 41524493, 51905616? 62286739 What is the next term in -720510, -1441014, -2161518, -2882022, -3602526, -4323030? -5043534 What comes next: 725, 2562, 5597, 9812, 15189? 21710 What is next in -34160, -68059, -101936, -135779, -169576, -203315, -236984? -270571 What comes next: 115694, 115691, 115694, 115703? 115718 What is next in 4151, 8788, 14359, 21332, 30175, 41356, 55343? 72604 What is the next term in -4628, 1919, 8466, 15013? 21560 What comes next: -28125, -28116, -28109, -28104, -28101, -28100? -28101 What is the next term in 641, 491, 293, 89, -79, -169, -139, 53? 449 What is next in -948, -1708, -2962, -4716, -6976, -9748? -13038 What is next in 225353674, 901414690, 2028183050, 3605658754, 5633841802? 8112732194 What is the next term in -198232, -198162, -198018, -197776, -197412? -196902 What comes next: -9530375, -9530372, -9530369, -9530366, -9530363, -9530360? -9530357 What comes next: 120331, 962605, 3248815, 7700941, 15040963, 25990861, 41272615? 61608205 What is next in -565285, -565359, -565563, -565963, -566625? -567615 What is the next term in -19748, -19471, -19188, -18893, -18580, -18243, -17876, -17473? -17028 What is next in 2822968, 5645939, 8468914, 11291893, 14114876, 16937863, 19760854? 22583849 What is next in 479757, 479855, 479953? 480051 What is the next term in 1259, 4726, 10463, 18470, 28747, 41294? 56111 What is next in 631369, 2525372, 5682047, 10101394, 15783413, 22728104, 30935467? 40405502 What comes next: -231, -1116, -2581, -4632, -7275? -10516 What is next in -47010, -94028, -141046, -188064, -235082? 
-282100 What is the next term in 2942486, 5884893, 8827300, 11769707? 14712114 What is the next term in 11783472, 11783473, 11783474, 11783475, 11783476? 11783477 What comes next: 9244, 9352, 9538, 9808, 10168, 10624? 11182 What is next in -16890, -33996, -51274, -68724, -86346, -104140, -122106? -140244 What is the next term in 538572, 1077165, 1615758, 2154351? 2692944 What is the next term in -1011, -1346, -1961, -2904, -4223, -5966, -8181, -10916? -14219 What is the next term in -19136, -38358, -57580, -76802? -96024 What comes next: -37931118, -75862238, -113793358? -151724478 What comes next: -1125, -9024, -30391, -71940, -140385? -242440 What comes next: 31002, 30801, 30600? 30399 What is the next term in -10708293, -21416588, -32124883, -42833178, -53541473, -64249768? -74958063 What is next in -68225, -64505, -58319, -49679, -38597? -25085 What is the next term in -8191, -8800, -9409, -10018? -10627 What is next in -28, -29, 76, 365, 916, 1807? 3116 What comes next: 79011, 315409, 709407, 1261005, 1970203, 2837001, 3861399? 5043397 What is next in -57005, -113689, -170373, -227057, -283741? -340425 What is the next term in -20991104, -41982207, -62973308, -83964407, -104955504? -125946599 What comes next: -614, -1248, -1944, -2732, -3642, -4704? -5948 What is the next term in -29115798, -58231596, -87347394, -116463192, -145578990, -174694788? -203810586 What is the next term in -375877, -751848, -1127819, -1503790? -1879761 What is next in 17584, 70304, 158162, 281158, 439292? 632564 What is the next term in 4802, 19361, 43640, 77639, 121358? 174797 What is the next term in 689618, 1379301, 2069006, 2758739, 3448506, 4138313, 4828166, 5518071? 6208034 What is next in -97468, -389748, -876882, -1558870, -2435712, -3507408, -4773958? -6235362 What is the next term in -1674734, -3349471, -5024208? -6698945 What is the next term in -30738, -61433, -92112, -122769, -153398? -183993 What is next in 3452554, 3452414, 3452274? 
3452134 What is next in 109042, 108846, 108654, 108466, 108282? 108102 What is the next term in -341014, -1364050, -3069106, -5456182? -8525278 What is next in -12846, -25022, -37198? -49374 What comes next: 9688, 9020, 8360, 7708, 7064, 6428? 5800 What is the next term in -37765, -75358, -112943, -150520
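Each sequence above is generated by a low-degree polynomial, so its next term can be computed mechanically with a finite-difference table: difference the sequence until a row is constant, then propagate one extra value back up through the rows. A sketch in Java (the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class NextTerm {
    static boolean isConstant(long[] a) {
        for (long x : a) {
            if (x != a[0]) return false;
        }
        return true;
    }

    // Predict the next term of a polynomial-generated integer sequence.
    static long nextTerm(long[] seq) {
        List<long[]> levels = new ArrayList<>();
        long[] cur = seq.clone();
        levels.add(cur);
        // Build the difference table until a row is constant.
        while (!isConstant(cur)) {
            long[] diff = new long[cur.length - 1];
            for (int i = 0; i < diff.length; i++) {
                diff[i] = cur[i + 1] - cur[i];
            }
            levels.add(diff);
            cur = diff;
        }
        // The constant row extends with its own value; each row above
        // extends with its last value plus the extension of the row below.
        long next = cur[cur.length - 1];
        for (int i = levels.size() - 2; i >= 0; i--) {
            long[] row = levels.get(i);
            next = row[row.length - 1] + next;
        }
        return next;
    }

    public static void main(String[] args) {
        // First question above (arithmetic, step +2) -> 40533316
        System.out.println(nextTerm(new long[] {40533304L, 40533306L, 40533308L,
                                                40533310L, 40533312L, 40533314L}));
        // Quadratic example from above -> 36324
        System.out.println(nextTerm(new long[] {36819, 36772, 36699, 36600, 36475}));
    }
}
```

For an arithmetic sequence the table stops after one differencing; for the cubic examples it takes three rows before the differences settle to a constant.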