Which Black Lives Matter To You? Last night on the BET Music Awards, actor Jesse Williams was given the "Humanitarian Award" for his efforts and work with the #BlackLivesMatter campaign. He gave a very moving speech that brought everyone to their feet; however, will the speech continue to move you 3 months later? Too many times we get motivational speeches but are only motivated for the moment. If Jesse's speech inspired you, then practice the following: Make sure you are registered to vote. Vote at election time. Join your local NAACP chapter. Be active in making a change in our community. Black men, treat black women with respect. Treat your girlfriend or wife the same way you would want a man to treat your mom. Stop playing with her emotions; you are causing more harm to black women and their children when you play "Slave Master Games" with them. Black women, treat your black man the way you want someone to treat your son. As black women we have to respect our black men. Motivate him, push him to be the best he can be. Support his dreams, even the ones you don't understand. Stop nagging so much and start listening more **call your homegirl and vent, get your thoughts straight, and then go back and tell him how you feel**. If #BlackLivesMatter, the black lives that are closest to you should matter the most. Pray for each other, our leaders & our country. It doesn't matter who our president is, God is the same every day. Have a good day.
{ "pile_set_name": "Pile-CC" }
ZTE Nubia Z5 Mini coming soon, at least in China ZTE may soon unveil a mini version of the Nubia Z5, which is one of the company’s flagship devices offered in China. The Nubia Z5 is already selling in China and the Nubia Z5 Mini will follow shortly, or so we think. Such a device has been recently leaked and brought to you/us by GSMinsider. We’re not sure what kind of hardware this smaller Nubia hides under the hood, but we’re confident it’s not as powerful as its older, 5-inch brother. Moreover, it’s still not known whether this particular handset will ever be available outside of China. We would like to see more competition in all market segments in the West and are also confident ZTE wants to have its phones selling in many parts of the world. That, however, is easier said than done, with operators usually standing in the handset makers’ way. Guess we’ll have to wait and see whether something like this will happen or not. As soon as we have something new to add, you’ll be the first to know…
{ "pile_set_name": "Pile-CC" }
Q: How do I set up TF weighting of terms in a corpus using the ‘tm’ package in R

I wonder how I can get the term frequency weight in the tm package, i.e. tf = term count / total terms in the document:

MyMatrix <- DocumentTermMatrix(a, control = list(weighting = weightTf))

After I use this weighting it shows the raw frequency of each term, not the TF weight, like this:

Doc(1) 1 0 0 3 0 0 2
Doc(2) 0 0 0 0 0 0 0
Doc(3) 0 5 0 0 0 0 1
Doc(4) 0 0 0 2 2 0 0
Doc(5) 0 4 0 0 0 0 1
Doc(6) 5 0 0 0 1 0 0
Doc(7) 0 5 0 0 0 0 0
Doc(8) 0 0 0 1 0 0 7

A: For example:

library(tm)

corp <- Corpus(VectorSource(c(doc1 = "hello world", doc2 = "hello new world")))

myfun <- WeightFunction(function(m) {
    cs <- slam::col_sums(m)
    m$v <- m$v / cs[m$j]
    return(m)
}, "Term Frequency by Total Document Term Frequency", "termbytot")

dtm <- DocumentTermMatrix(corp, control = list(weighting = myfun))
inspect(dtm)
# <<DocumentTermMatrix (documents: 2, terms: 3)>>
# Non-/sparse entries: 5/1
# Sparsity           : 17%
# Maximal term length: 5
#
#     Terms
# Docs     hello       new     world
#    1 0.5000000 0.0000000 0.5000000
#    2 0.3333333 0.3333333 0.3333333
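For readers outside R, the same within-document normalisation (each term count divided by the total number of terms in its document) can be sketched in plain Python. This is an illustrative standalone function, not part of tm or any other library:

```python
def tf_weights(docs):
    """Return {doc_id: {term: count / total_terms_in_doc}}.

    Mirrors the custom tm WeightFunction above: raw counts are
    divided by the document's total term count.
    """
    weights = {}
    for doc_id, text in docs.items():
        counts = {}
        for term in text.split():
            counts[term] = counts.get(term, 0) + 1
        total = sum(counts.values())
        weights[doc_id] = {t: c / total for t, c in counts.items()}
    return weights

docs = {"doc1": "hello world", "doc2": "hello new world"}
print(tf_weights(docs))
```

With the same two toy documents, "hello" weighs 0.5 in doc1 (one of two terms) and one third in doc2 (one of three terms), matching the inspect(dtm) output shown above.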
{ "pile_set_name": "StackExchange" }
What is the ‘New South,’ Anyway? Hint: Charlotte's a big part of it. According to the Levine Museum in Charlotte, N.C., the “New South” is everything following the Civil War in the Southern part of the U.S. At the museum, the exhibit Cotton Fields to Skyscrapers takes you through the history of the South, highlighting Charlotte’s role in everything from civil rights to today’s economic boom in the banking industry. The New South is still developing, too. The museum houses rotating exhibits upstairs that feature history as it happens. The current one, K(NO)W Justice K(NO)W Peace, will run through October. The museum is open Monday through Saturday 10 a.m. to 5 p.m. and Sunday Noon to 5 p.m.
{ "pile_set_name": "Pile-CC" }
Gallery: Japan’s Super GT series What’s that embarrassingly worn-out cliché about waiting for a bus and so forth? After an agonisingly long tease, not only has the Subaru BRZ road car finally landed, so has a weightlifting, gym-honed sibling. That shiny blue automotive hammer you see above is a BRZ in full Super GT uniform. Why does this excite us? Because Japanese Super GT is the fastest race series in the world for road-going cars. How fast is fast? Put it this way, Maserati tried to enter its MC12 racing car a few years back into Japan’s Super GT500 but it was deemed too slow (by almost a second a lap, entirely down to aerodynamics in corners). Subaru’s BRZ will head for the slightly less mental GT300 class - the B league of Super GT. Cars are limited by air restrictors to 300bhp (GT500s all have 500bhp hand-built 3.4 litre V8s) and Subaru must keep the car as close to its road-going DNA as possible. It is, however, allowed stupendous aerodynamic appendages, which is a good thing, chiefly because they look ruddy fantastic. This is fast becoming the BRZ we want. While Subaru has never won a Super GT championship, it has competed in the GT300 series for many years, running an Impreza WRX STI from 2005 and then a Legacy (yes, you read that right) from 2009 to 2011. But Toyota isn’t fielding its identical GT 86. Oh no. And word on the grapevine is that Toyota is planning to run a full-blooded GT300 racing version of California’s favourite hybrid: a Prius. Thankfully, there are others. Have a flick through the best from both the GT300 and 500 fields and see if you can name them all…
{ "pile_set_name": "Pile-CC" }
The information and contents on our website are provided solely for promotional purposes and are not binding. We accept no liability for the accuracy of the information and content on our website. In particular, we take no responsibility for whether or not the information corresponds with the purpose desired by the user. This applies differently only if we have expressly agreed something different with the user or if we expressly vouch for the accuracy of the information or content. Furthermore, FUN FACTORY GmbH reserves the right to change or supplement the information provided. All rights in the design and contents of this website remain with us and are subject to copyright and other intellectual property rights. Publication on the World Wide Web or other Internet services does not constitute consent for any other use by third parties. Copying and downloading the website or parts of it is not permitted, unless we invite you to do so (e.g. to download specific forms or information). In any case, reproduction or use for commercial purposes is prohibited, unless we have expressly agreed to the reproduction or use in writing or by e-mail. FUN FACTORY GmbH is neither willing nor obligated to take part in dispute resolution proceedings before a consumer arbitration board. There is no national arbitration board for retail trade. Consumers can turn to the General Consumer Conciliation Body:
{ "pile_set_name": "Pile-CC" }
Q: Domain name getting appended with a port number: domain.com:399

While redirecting one page to another page using .htaccess, an extra number gets attached to the domain, e.g. "domain.com:399". I need to remove ":399" from the domain. How do I do this?

A: Something like this might work:

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteCond %{REQUEST_URI} ^/a
RewriteRule (.*) http://[yourdomain]/b/link.php
{ "pile_set_name": "StackExchange" }
Online media Praja Foundation: Delhi Police Department highlights Praja Foundation's White Paper on Crime in Delhi notes an 82% increase in theft in 2017-18 over 2016-17 #urbangovernance @FNFSouthAsia @EU_in_India. Detailed government data has been delayed for over half a year, with the #crimeinIndia report of the NCRB not being released @Prajafoundation @FNFSouthAsia @EU_in_India. Data obtained through RTI by Praja Foundation from the Delhi Police Department highlights that: The highest number of cases registered is of theft, with 75,728 cases in 2017-18, an increase of 82% from the previous year, i.e. 2016-17. Incidences of rape continue to be high in Delhi, with 2,207 cases of rape reported in 2017-18, a 3% increase from the previous year, i.e. 2016-17. In 2017-18, of the total number of kidnapping cases (5,757) registered in Delhi, 65% of the victims were women. Of the total abduction cases (496) registered in Delhi for the year 2017-18, 63% of the victims were women. There has been an increase in cases registered under the Protection of Children from Sexual Offences (POCSO) Act from 2016-17 to 2017-18, i.e. from 991 to 1,137. 52% of total rape cases were reported under the POCSO Act in 2017-18. As of March 2018, positions of Additional Commissioner of Police (-45%), Additional Deputy Commissioner of Police (-43%), Assistant Commissioner of Police (-43%) and Police Sub-Inspector (-31%) had the highest shortfall. In a survey commissioned by Praja from Hansa Research, covering 28,624 households across Delhi, it was found that: 40% of the respondents do not feel secure in Delhi, whereas 50% feel that Delhi is not secure for women, children and senior citizens. 68% of respondents who witnessed crime and informed the police were not satisfied with the response of police officials, while 67% of those who faced crime and informed the police were not satisfied with the police's response.
From the respondents who faced crime in Delhi, 64% used police helpline numbers like 100 to inform the police, while only 5% actually visited the police station and registered an FIR. New Delhi, February 21, 2019: Praja Foundation released its report on the ‘State of Policing and Law & Order’ in Delhi at an event on Thursday, February 21, 2019. Praja’s household survey, commissioned from Hansa Research, states that 40% of the people living in Delhi feel unsafe, whereas 14% of people living in Mumbai feel the same. “The number of rape cases reported in Delhi is rising, with 2,207 cases of rape reported in 2017-18, a 3% increase from the previous year, i.e. 2016-17. Of the total number of kidnapping and abduction cases registered in Delhi in 2017-18, 65% and 63% of the victims respectively were women”, said Milind Mhaske, Director at Praja Foundation. In 2017-18, 52% of the total number of rape cases were registered under the Protection of Children from Sexual Offences (POCSO) Act. The survey data also highlights that 50% of the total respondents in Delhi feel that the city is not secure for women, children and senior citizens. Adding to this, no questions were raised by the Members of Parliament (MPs) on issues related to women in sessions from Monsoon 2017 to Budget 2018. In the last five years, from FY 2014-15 to FY 2017-18, the highest number of cases reported was that of theft: 75,728 cases in 2017-18, an increase of 82% from the previous year, i.e. 2016-17. North West District reported the highest registration of theft (8,641) in the year 2017-18. “Our data shows that apart from increasing crimes, there is a shortage of police personnel in the Delhi Police department. As of March 2018, positions of Additional Commissioner of Police (-45%), Additional Deputy Commissioner of Police (-43%), Assistant Commissioner of Police (-43%) and Police Sub-Inspector (-31%) have the highest shortfall”, added Mhaske.
Emphasising the massive shortage of police personnel, Nitai Mehta, Founder and Managing Trustee of Praja Foundation, said, “Such a shortage directly affects the investigation and law and order of the city.” Our household survey data also indicates the dissatisfaction of respondents with the response of police officials: 68% of respondents who witnessed crime and informed the police were not satisfied with the response of police officials, while 67% of those who faced crime and informed the police were not satisfied with the police’s response. “Moreover, the ‘Crime in India’ report by the National Crime Records Bureau, which maintains and tracks the number of pending cases, acquittals and convictions in court, is not out for this year. This makes it difficult to monitor and track the registered cases”, Mehta added. NCRB’s role is to generate and maintain sharable national databases on crimes and criminals for law enforcement agencies and uphold their use for public service delivery. These crime statistics are imperative for upholding law and order in the country and form a major tool for the police force to prevent and detect crimes in India. But the delay in the reports every year compromises the efficient functioning of police officials and the prevention of crimes in India. “This shows that there is a need to create a strong awareness campaign against sexual abuse of children with all the stakeholders, i.e. police, elected representatives, legal professionals, children, parents, schools and colleges, etc.”, said Mehta. It is imperative to fill the gaps between the sanctioned and available strengths of the police force at the level of Investigating Officers for better investigation and maintenance of law & order. Increased involvement of elected representatives, by questioning the government on law & order and security, is needed. Moreover, the ‘Crime in India’ report should be published on a timely basis for better monitoring, analysis and prevention of crimes in India.
About PRAJA Foundation: PRAJA was founded in 1997 by a group of eight Mumbaiites with a vision to re-establish accountability and transparency in governance. These individuals were fuelled by a concern about a general lack of interest among citizens in local government. Praja aims to create awareness among citizens and thereby empower them through knowledge. PRAJA believes that the availability of information can go a long way towards simplifying people’s lives and evoking participation. A holistic approach to ushering in good governance must have buy-in for our ideas from the elected representatives. At the same time, there should be tools and mechanisms that enable citizens to keep a close watch on the work done by their elected representatives. PRAJA’s goals are simplifying people’s lives, empowering citizens and government with facts, and creating instruments of change to improve the quality of life of citizens in India. PRAJA is committed to creating an accountable and efficient society through people’s participation.
{ "pile_set_name": "Pile-CC" }
Q: How can I put validation in the getter and setter methods in C#?

In C#, I can have a property without needing to declare a private variable. My VB6 code that looked like this:

'local variable(s) to hold property value(s)
Private mvarPhoneNumber As String 'local copy

Public Property Let PhoneNumber(ByVal vData As String)
    'used when assigning a value to the property, on the left side of an assignment.
    'Syntax: X.PhoneNumber = 5
    mvarPhoneNumber = vData
End Property

Public Property Get PhoneNumber() As String
    'used when retrieving value of a property, on the right side of an assignment.
    'Syntax: Debug.Print X.PhoneNumber
    PhoneNumber = mvarPhoneNumber
End Property

can now look like this:

public string PhoneNumber { get; set; }

How can I put validation in the getter and setter methods in C#? I tried adding validation like this:

public string PhoneNumber
{
    get { return PhoneNumber; }
    set
    {
        if (value.Length <= 30)
        {
            PhoneNumber = value;
        }
        else
        {
            PhoneNumber = "EXCEEDS LENGTH";
        }
    }
}

The get part of this code won't compile. Do I need to revert to using a private variable?

A: Yes, you will have to create a backing field:

string _phoneNumber;

public string PhoneNumber
{
    get { return _phoneNumber; }
    set
    {
        if (value.Length <= 30)
        {
            _phoneNumber = value;
        }
        else
        {
            _phoneNumber = "EXCEEDS LENGTH";
        }
    }
}

Keep in mind that this implementation is no different from an automatically implemented property. When you use an automatically implemented property you are simply allowing the compiler to create the backing field for you. If you want to add any custom logic to the get or set you have to create the field yourself, as I have shown above.
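The guard-in-the-setter pattern is not specific to C#. As an illustrative comparison only (not part of the question's code), the same shape in Python uses a property with a hand-written backing field:

```python
class Contact:
    """Phone number capped at 30 characters, mirroring the C# setter above."""

    def __init__(self):
        self._phone_number = None  # backing field, like _phoneNumber in C#

    @property
    def phone_number(self):
        return self._phone_number

    @phone_number.setter
    def phone_number(self, value):
        # Validation lives in the setter, just as in the C# example.
        if len(value) <= 30:
            self._phone_number = value
        else:
            self._phone_number = "EXCEEDS LENGTH"

c = Contact()
c.phone_number = "555-0100"
print(c.phone_number)
```

Note the analogous pitfall: assigning to self.phone_number inside its own setter would recurse endlessly, just as the C# property that references itself in its accessors fails.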
{ "pile_set_name": "StackExchange" }
Mahmoud Zuabi Mahmoud Zuabi (also Zubi or al-Zoubi; 1935 – 21 May 2000) was Prime Minister of Syria from 1 November 1987 to 7 March 2000. Early life Zuabi was born into a Sunni family in 1935 in Khirbet Ghazaleh, a village 75 miles south of Damascus in the Hauran region. Prime Minister of Syria Zuabi was a member of the Ba'ath Party. Under the rule of then-President Hafez Assad, Zuabi was appointed Prime Minister in 1987. He presided over a ramshackle, purportedly socialist governmental and economic system. Military and government officials exercised immense power and continued to do so. Only oil revenues kept the economy going. Even foreign aid programmes struggled to be implemented under the weight of bureaucratic obduracy. Ubiquitous regulations, including price controls, had the effect, most observers say, of stifling legitimate enterprise. Many officials were forced into corruption to supplement meagre salaries. It is said that corruption extended all the way to the top of Syria's government. When President Hafez Assad was showing signs of poor health in the late 1990s, supporters of his son Bashar Assad started positioning him to succeed his father as President. Hafez was also a major player in these manoeuvrings. Syria is a republic whose constitution envisaged no direct transfer of power from father to son. As Hafez Assad grew sick, it became clear both father and son had decided that Zuabi's days were numbered. Tackling corruption is a popular cause among most Syrians, who see the immense wealth created at their expense as a reason why the Syrian economy has struggled to grow. The dragging down of once-swaggering officials, with punishments including jail and the confiscation and auction of their illegally obtained assets, earned Bashar much kudos in the community. On 7 March 2000, Zuabi was replaced as prime minister by Mohammed Mustafa Mero.
Currency crisis During 1985-2000, Zuabi's administration failed to arrest the roughly 90 per cent fall in the value of the Syrian pound, from 3 to 47 to the US dollar. Downfall and the Airbus deal controversy On 10 May 2000, Hafez Assad expelled Zuabi from the Ba'ath Party and decided that Zuabi should be prosecuted over a scandal involving the French aircraft manufacturer Airbus. Zuabi's assets were frozen by the Syrian government. Zuabi and several senior ministers were officially accused of receiving illegal commissions of the order of US$124 million in relation to the purchase of six Airbus 320-200 passenger jets for Syrian Arab Airlines in 1996. The indictment alleged that the normal cost of the planes was US$250 million, but the Government paid US$374 million and Airbus sent on US$124 million to the senior ministers. Three others involved in the transaction, including the former minister for economic affairs and the former minister for transport, were sentenced to prison for ten years. Airbus denied paying off the Syrian officials. The Syrian government in September 2003 announced its intention of purchasing six more Airbus planes for the government airline. The official finding within Syrian courts that Airbus paid over a hundred million dollars in bribes to their officials is apparently not a factor in deciding whether to continue to do business with them, especially with Boeing aircraft and spare parts being difficult to obtain due to unilateral US sanctions. Personal life Zuabi was married and had two sons and a daughter. His sons were Miflih and Hammam Zuabi. Death and burial In May 2000, while under house arrest, Zuabi committed suicide by two gunshots to the head rather than face trial.
The Syrian Arab News Agency's official explanation was that he committed suicide after learning that a Syrian police chief had arrived at his house in Dumer to serve a judicial notice requiring him to appear before an investigating judge to answer allegations of corruption in relation to the Airbus transaction. Zuabi died while he was being taken to a hospital. Three weeks after Zuabi's death, Hafez Al Assad also died. Zuabi's body was buried in his village, Khirbat Ghazalah, after a simple funeral ceremony. References Category:1935 births Category:2000 deaths Category:Arab Socialist Ba'ath Party – Syria Region politicians Category:Ba'athist rulers Category:Syrian politicians who committed suicide Category:Prime Ministers of Syria Category:Speakers of the People's Council of Syria Category:Suicides by firearm in Syria Category:Suicides in Syria Category:Deaths by firearm in Syria Category:Syrian Sunni Muslims
{ "pile_set_name": "Wikipedia (en)" }
Functional involvement of rat organic anion transporter 2 (Slc22a7) in the hepatic uptake of the nonsteroidal anti-inflammatory drug ketoprofen. Rat organic anion transporter 2 (rOat2, Slc22a7) is a sinusoidal multispecific organic anion transporter in the liver. The role of rOat2 in the hepatic uptake of drugs has not been thoroughly investigated yet. rOat2 substrates include nonsteroidal anti-inflammatory drugs, such as ketoprofen, indomethacin, and salicylate. In the present study, the uptake of ketoprofen, indomethacin, and salicylate by freshly isolated rat hepatocytes was characterized. The uptake of ketoprofen, indomethacin, and salicylate by hepatocytes was sodium-independent, and the rank order of their uptake activities was indomethacin > ketoprofen > salicylate. Kinetic analysis based on Akaike's Information Criterion suggested that the uptake of ketoprofen and indomethacin by hepatocytes consists of two saturable components and one nonsaturable one. The K(m) and V(max) values for the high- and low-affinity components for ketoprofen uptake were 0.84 and 97 microM and 35 and 1800 pmol/min/mg protein, respectively, whereas those for indomethacin were 1.1 and 140 microM and 130 and 16,000 pmol/min/mg protein, respectively. The K(m) values of the high-affinity component were similar to those for rOat2 (3.3 and 0.37 microM for ketoprofen and indomethacin, respectively). The uptake of ketoprofen by hepatocytes was significantly inhibited by probenecid and rOat2 inhibitors (indocyanine green, indomethacin, glibenclamide, and salicylate). Other inhibitors of rOatps (taurocholate and pravastatin) and rOat3 (pravastatin and p-aminohippurate) had a slight effect, but digoxin had no effect. These results suggest that rOat2 accounts partly for the hepatic uptake of ketoprofen and, presumably, indomethacin as a high-affinity site and that other transporters, such as rOatps, but not rOatp2, and rOat3, are also involved.
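The kinetic model described above is a sum of two Michaelis–Menten terms plus a linear (nonsaturable) term, v = Vmax1·S/(Km1+S) + Vmax2·S/(Km2+S) + kns·S. As a sketch of how the reported ketoprofen parameters combine (the nonsaturable coefficient below is an illustrative placeholder, not a value from the abstract):

```python
def uptake_rate(s, km_hi, vmax_hi, km_lo, vmax_lo, k_ns):
    """Predicted uptake rate for a two-saturable-plus-nonsaturable model.

    s: substrate concentration (microM)
    km_*: Michaelis constants (microM); vmax_*: pmol/min/mg protein
    k_ns: nonsaturable (linear) clearance coefficient -- placeholder here
    """
    return (vmax_hi * s / (km_hi + s)
            + vmax_lo * s / (km_lo + s)
            + k_ns * s)

# Reported ketoprofen parameters: high-affinity Km 0.84 uM, Vmax 35;
# low-affinity Km 97 uM, Vmax 1800 (pmol/min/mg protein).
v = uptake_rate(s=1.0, km_hi=0.84, vmax_hi=35.0, km_lo=97.0,
                vmax_lo=1800.0, k_ns=0.0)
print(round(v, 1))
```

At concentrations near the high-affinity Km (around 1 microM), the high-affinity site contributes a rate comparable to the low-affinity site despite its far smaller Vmax, which is why it is the component attributed to rOat2.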
{ "pile_set_name": "PubMed Abstracts" }
Slicker Baby Duck - 6'' Duck By Douglas Cuddle Toys Serious Item Code: 6529 | Mfr. Code: 1506.1 This fuzzy duckling loves jumping in rain puddles just as much as you do! You can imagine the big splash his webbed feet make! For over 50 years, Douglas Toys has created soft and cuddly plush toys that are loved by children of all ages. Surface washable.
{ "pile_set_name": "Pile-CC" }
Septet in E flat major, Op 20 Introduction The four or five years that separate Op 81b and the Septet in E flat major, Op 20, was a period of rapid growth for Beethoven in terms of his career as a performer as well as his development as a composer. In that time he wrote his first significant piano sonatas, including the ‘Pathétique’, Op 13 (published in 1799 and a landmark of the genre), a number of string trios, violin sonatas and cello sonatas, and, above all, the six revolutionary string quartets that were eventually published as Op 18 in 1802. The ‘Pathétique’ and the Op 18 quartets are eloquent evidence that Beethoven was becoming dissatisfied with the settled forms, styles and patterns of the Classical style, and was espousing novelty, at times virtually for its own sake. In its own way, the septet was as much a novelty when it first appeared as these overtly revolutionary works. It was written in the winter of 1799/1800, and was given its premiere at the first of Beethoven’s benefit concerts, in the Burgtheater on 2 April 1800. The concert also included the first performance of the Symphony No 1, a piano concerto and an improvisation by the composer, as well as music by Haydn and Mozart. The septet was published by Hoffmeister in Leipzig in 1802 and was an immediate and lasting success.
The work conforms to the serenade/divertimento tradition in its architecture: there are six movements, the standard four of the late-Classical sonata or symphony, a set of variations on a popular tune (the Rhineland song ‘Ach Schiffer, lieber Schiffer’), and a scherzo marked Allegro molto e vivace. The practice of framing a central movement with two minuet-like movements was well established at the time and is found in Mozart’s serenades, as is the idea of making the second faster and more scherzo-like than the first. Beethoven increased the sense of symmetry by giving the first movement a grand and expansive slow introduction – a device that had originated in the symphony and was still uncommon in chamber music – and by matching it with a slow introduction to the final Presto. In matters of scoring, however, Beethoven broke entirely new ground. In eighteenth-century serenades wind instruments usually come in pairs, like animals in the Ark, but the septet only has a single clarinet, horn and bassoon. Indeed, there is only one of each instrument, since the ‘string quartet’ consists of violin, viola, cello and double bass – the latter included to lend weight to the ensemble, and because it had traditionally been a member of serenade ensembles. By this means Beethoven freed himself from using his instruments in their traditional roles: the bassoon rarely plays the bass, just as the cello is free to take a tenor part, or even soar into the treble clef. Also, the relationship between strings and winds is more flexible and varied than before. There is antiphonal writing between the two groups, ‘orchestral’ passages with the wind supporting the strings with held chords, florid wind solos and duets accompanied by the strings, and concerto-like passages for solo violin (written for the virtuoso player Ignaz Schuppanzigh) accompanied by the rest of the ensemble. 
With its mixture of grandeur and intimacy, virtuosity and informality, Beethoven’s septet appealed enormously to his contemporaries. Indeed, its composer eventually came to resent its popularity, believing that it had overshadowed his more mature works. Arrangements of it were quickly made, for piano duet and as Harmoniemusik. And it was imitated to the extent that a new genre of large-scale chamber music developed. The Beethoven combination was later used by, among others, Conradin Kreutzer and Berwald, and (with the addition of a second violin) by Schubert in his octet of 1824. Spohr published an octet in 1814 for clarinet, two horns, violin, two violas, cello and bass, and a nonet in the next year with a flute and oboe added to the Beethoven combination, an ensemble also used by George Onslow. By 1850 the genre had been largely superseded, though Brahms originally conceived his first orchestral serenade of 1857/8 as a nonet, and it has been revived in our own time by such works as Howard Ferguson’s octet.
{ "pile_set_name": "Pile-CC" }
Q: Prevent drag & drop (it's dropping anywhere)

I'm using this AngularJS Drag & Drop library and the documentation is confusing and quite out of date, but the effect is that even if there's nowhere to drop it, it always drops.

A: One thing I didn't understand initially is that event.preventDefault() inside ondragover is the way to allow drop (it's kind of backwards from what you might expect). Hence me searching phrases like "how to prevent drag & drop".

Anyway, the problem was an issue with the library, which seems to have some lines of code to handle an old situation which no longer happens. So technically it's not actually dropping at all, but it is calling the onDropSuccess function no matter what. The issue "onDropSuccess will always trigger in IE and Firefox on Windows" summarizes the problem, and the fix I've used is to remove these lines from function determineEffectAllowed(e):

if (e.dataTransfer && e.dataTransfer.dropEffect === 'none') {
  if (e.dataTransfer.effectAllowed === 'copy' ||
      e.dataTransfer.effectAllowed === 'move') {
    e.dataTransfer.dropEffect = e.dataTransfer.effectAllowed
  } else if (e.dataTransfer.effectAllowed === 'copyMove' ||
             e.dataTransfer.effectAllowed === 'copymove') {
    e.dataTransfer.dropEffect = e.ctrlKey ? 'copy' : 'move'
  }
}

So it'll just look like this:

function determineEffectAllowed (e) {
  if (e.originalEvent) {
    e.dataTransfer = e.originalEvent.dataTransfer
  }
}
{ "pile_set_name": "StackExchange" }
Q: Laravel trying to query multiple tables

Not quite sure how to fix this, or if this even is an issue with my query or my database, so here it goes. There are 3 tables: products, relations and tags.

'products' (
  'ID' bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  'user_id' bigint(20) unsigned NOT NULL,
  'name' varchar(200) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'primary_image' bigint(20) DEFAULT NULL,
  'description' longtext COLLATE utf8mb4_unicode_ci,
  'price' float DEFAULT NULL,
  'sale_price' float DEFAULT NULL,
  'currency' varchar(25) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'primary_color' varchar(7) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'secondary_color' varchar(7) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'status' varchar(15) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'quantity' bigint(20) DEFAULT NULL,
  'origin' varchar(200) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'type' varchar(200) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'size' varchar(200) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'processing_time' varchar(50) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  'date_added' int(11) DEFAULT NULL,
  PRIMARY KEY ('ID')
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

'relations' (
  'ID' bigint(20) unsigned NOT NULL,
  'relation_id' bigint(20) unsigned NOT NULL,
  'type' varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
  'options' text COLLATE utf8mb4_unicode_ci,
  KEY 'ID' ('ID'),
  KEY 'relation_id' ('relation_id')
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

'tags' (
  'ID' bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  'name' varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL,
  'slug' varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL,
  PRIMARY KEY ('ID')
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

The relations between these tables go as follows:

products.ID = relations.ID
relations.relation_id = tags.ID

There are 50k products, 250k relations and 25k tags added to the database for testing purposes.
I'm building a query that searches within the product name, description and tags. Executing this query:

Product::select('ID', 'name')->where(
    'name', 'like', '%'.$search_query.'%'
)->orWhere(
    'description', 'like', '%'.$search_query.'%'
)->orWhereHas('relations', function( $query ) use( $search_query ) {
    $query->where('type', 'tags')->whereHas('tags', function( $query ) use( $search_query ) {
        $query->where( 'name', 'like', '%'.$search_query.'%' );
    });
})->paginate(25);

This query takes around 0.8s to find the data; if I query for a specific tag, it takes as long as 1.8s. I built a similar query with joins and it took even longer, so I stayed with the above query for now. Do any of you have an idea what I might have been doing wrong? The main issue here is the query execution time.

A: There is no way to optimize this query that I know of. If you had 'LIKE', $search_query.'%' (with no leading %) you could index the text columns for a speed boost. If you need this functionality with the wildcard on both ends, you will have to use a full-text indexing search provider. Algolia (https://www.algolia.com/) is the one I know of, and Laravel Scout was created to work with their search. This other question also references http://sphinxsearch.com/ and http://lucene.apache.org/core/, although I don't know what they do.

Edit: also https://www.elastic.co/products/elasticsearch
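To make the index point from the answer concrete, here is a sketch (assuming MySQL; the table and column names come from the schema in the question, and the search term is made up). A plain B-tree index on `name` can serve a prefix-only LIKE, but a LIKE with a leading wildcard cannot use it and falls back to a full scan:

```sql
-- Hypothetical sketch (MySQL). A plain index on `name` helps only
-- prefix searches; the leading-% form cannot use it.
ALTER TABLE products ADD INDEX idx_products_name (name);

-- Can use idx_products_name (index range scan):
SELECT ID, name FROM products WHERE name LIKE 'lamp%';

-- Cannot use the index (full table scan), like the original query:
SELECT ID, name FROM products WHERE name LIKE '%lamp%';
```

Running EXPLAIN on both SELECTs should show `range` access for the first and a full scan for the second, which is where most of the 0.8-1.8s goes.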
Gas-fired gravity floor furnace contact burns. Infants and toddlers are at particular risk for contact burns from the registers of gas-fired floor furnaces. We report an 11-month-old and a 12-month-old boy who sustained the classic grid-like pattern of burns to their skin after contacting the registers of gas-fired floor furnaces. The depth of these burns was judged to be partial thickness, and the burns healed without the application of skin grafts. Manufacturers must devise safe registers for gas-fired furnaces that will not cause contact burn injuries.
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TIMER_MOCK_TIMER_H_
#define BASE_TIMER_MOCK_TIMER_H_

#include "base/test/simple_test_tick_clock.h"
#include "base/timer/timer.h"

namespace base {

class TestSimpleTaskRunner;

// A mock implementation of base::OneShotTimer which requires being explicitly
// Fire()'d.
// Prefer using TaskEnvironment::MOCK_TIME + FastForward*() to this when
// possible.
class MockOneShotTimer : public OneShotTimer {
 public:
  MockOneShotTimer();
  ~MockOneShotTimer() override;

  // Testing method.
  void Fire();

 private:
  // Timer implementation.
  // MockOneShotTimer doesn't support SetTaskRunner. Do not use this.
  void SetTaskRunner(scoped_refptr<SequencedTaskRunner> task_runner) override;

  SimpleTestTickClock clock_;
  scoped_refptr<TestSimpleTaskRunner> test_task_runner_;
};

// See MockOneShotTimer's comment. Prefer using
// TaskEnvironment::MOCK_TIME.
class MockRepeatingTimer : public RepeatingTimer {
 public:
  MockRepeatingTimer();
  ~MockRepeatingTimer() override;

  // Testing method.
  void Fire();

 private:
  // Timer implementation.
  // MockRepeatingTimer doesn't support SetTaskRunner. Do not use this.
  void SetTaskRunner(scoped_refptr<SequencedTaskRunner> task_runner) override;

  SimpleTestTickClock clock_;
  scoped_refptr<TestSimpleTaskRunner> test_task_runner_;
};

// See MockOneShotTimer's comment. Prefer using
// TaskEnvironment::MOCK_TIME.
class MockRetainingOneShotTimer : public RetainingOneShotTimer {
 public:
  MockRetainingOneShotTimer();
  ~MockRetainingOneShotTimer() override;

  // Testing method.
  void Fire();

 private:
  // Timer implementation.
  // MockRetainingOneShotTimer doesn't support SetTaskRunner. Do not use this.
  void SetTaskRunner(scoped_refptr<SequencedTaskRunner> task_runner) override;

  SimpleTestTickClock clock_;
  scoped_refptr<TestSimpleTaskRunner> test_task_runner_;
};

}  // namespace base

#endif  // BASE_TIMER_MOCK_TIMER_H_
The Siberian Husky (originally the Chukchi dog) was developed over a period of around 3,000 years by the Chukchi tribe of Siberia. The breed was developed to fulfill a particular need of Chukchi life and culture. In one of the most hostile climates in the world, with temperatures plummeting to -100°F in winter and with winds up to 100 mph, the Chukchi relied on their dogs for survival. In teams as large as twenty or more, the dogs could travel out over the ice, sometimes covering as much as 100 miles in a single day, to allow a single man to ice-fish and return with his catch. By sled dog standards they were small; the large size of the teams minimized per-dog pulling power, while their smaller frames maximized endurance and minimized energy consumption. Even today, in the long races, Alaskan Husky teams, the Siberians' cousins, require twice the amount of food that Siberian teams consume. The Chukchi economy and religious life were centered around the Huskies. The best dogs were owned by the richest members of the community, and this is what made them the richest members of the community. Many religious ceremonies and much iconography were centered around the Huskies. According to Chukchi belief, two Huskies guard the gates of heaven, turning away anybody who has shown cruelty to a dog in their lifetime. A Chukchi legend tells of a time of famine when both human and dog populations were decimated; the last two remaining pups were nursed at a woman's breast to ensure the survival of the breed. Tribe life revolved around the dogs. The women of the tribe reared the pups and chose which pups to keep, discarding all but the most promising bitches and neutering all but the most promising males. The men's responsibility was sled training; mostly geldings (neutered males) were used. The dogs would also act as companions for the children, and family dogs slept inside. The temperatures at night were even measured in terms of the number of dogs necessary to keep a body warm.
i.e., a "two dog night," a "three dog night," etc. The legendary sweetness of the Siberian's temperament was no accident. Would you want to be 100 miles out on the ice, a single person with twenty dogs? If there were a dog fight, you simply would not get home! (This is also one of the reasons for using neutered males on the sled.) When winter came, all dogs were tied up when not working, but the elite unneutered dogs were allowed to roam and breed at will. This ensured that only the very best would breed. In summer, all dogs were released and allowed to hunt in packs like the wolf, but unlike the wolf they would return to the villages when the snow returned and food grew scarce (hey, they know where the handouts are; they are not dumb by any means). The high prey drive can still be found in the breed today. In the nineteenth century, when Czarist troops were sent on a mission to open the area to the fur trade, the Chukchi faced a peril even deadlier than the Siberian winters: the troops attempted an all-out genocide of the Chukchi people. Again, the dogs would be the key to their survival. The Chukchi were able to evade the Russian reindeer cavalry on their Siberian-pulled sleds for some time. The conflict finally came to a head in a final battle where the Chukchi, armed only with spears and overwhelmingly outnumbered, trapped and defeated the heavily armed Russian troops. This victory led to Czarist Russia signing a treaty with the Chukchi giving them their independence, making them the first tribe to achieve this "honor". The Chukchi people and their dogs existed peacefully for many years after this conflict. By the close of the 19th century the Chukchi's dogs were discovered by Alaskan traders and imported into the Northwest Territory, where they were renamed the Siberian Husky. This importation proved to be a very important event in the survival of this breed. The first team of Siberian Huskies made its appearance in the All Alaska Sweepstakes Race of 1909.
That same year a large number of them were imported to Alaska by Charles Fox Maule Ramsay, and his team, driven by John "Iron Man" Johnson, won the grueling 400-mile race in 1910. For the next decade Siberian Huskies, particularly those bred and raced by Leonhard Seppala, captured most of the racing titles in Alaska, where the rugged terrain was ideally suited to the endurance capabilities of the breed. In the 20th century, the Soviets opened free trade with the Chukchi, then known as the "Apaches of the North," and brought with them smallpox, which almost wiped out the tribe. When the Soviets found out the importance of the dogs to Chukchi cultural coherence, they executed or imprisoned the village leaders, who were of course the dog breeders. The Soviets then set up their own dog-breeding programs designed to obliterate the native gene pool of the Chukchi dogs and replace it with one that would produce a much larger freighting dog, thought to be more effective for the Soviets' own proposed fur-trading practices in the region. The Soviets even went so far, in 1952, as to issue an official proclamation that the breed we now call the Siberian Husky never really existed. Only a small remnant of the Chukchi dog still survives in Siberia today. We are in desperate need of foster homes to help save more Siberians from neglect, abuse, abandonment and illness. We cannot save these precious fur balls without your help. If you can open your heart and home to just one fur ball, you can make a difference! By becoming a foster you are not only saving a life, you are helping give a Siberian a chance at a new home... a new life! Can you look into this fur baby's eyes and not want to help?
Q: REQ/REP & DEALER/ROUTER for two-way asynchronous worker processing

I'm learning ZeroMQ and have just gone through the tutorial and a few examples. I'm using Node.js as my main environment (with Python eventually used to replace my workers). I'm trying to sort out how I can create a fully asynchronous messaging system that will allow my API to push tasks (via a REQ socket) to a router, have a dealer pass the message to a worker, process the message and send its results back up to my client (which is an Express route). I believe the pattern for this would work something like this (I haven't tested or properly implemented the code yet, so please take it as a conceptual outline):

router.js

const zmq = require('zmq');
const frontend = zmq.socket('router');
const backend = zmq.socket('dealer');

frontend.on('message', function() {
  var args = Array.apply(null, arguments);
  backend.send(args);
});

backend.on('message', function() {
  var args = Array.apply(null, arguments);
  frontend.send(args);
});

frontend.bindSync('tcp://*:5559');
backend.bindSync('tcp://*:5560');

client.js

var zmq = require('zmq');
var express = require('express');
var app = express();

app.post('/send', function(req, res) {
  var client = zmq.socket('req');

  // listen for responses from the server
  client.on('message', function(data) {
    console.log(data);
    client.close();
  });

  // connect to the server port
  client.connect('tcp://0.0.0.0:5454');
  client.send('Request from ' + process.pid);
});

app.listen(80);

worker.js

var zmq = require('zmq');
var server = zmq.socket('rep');

server.on('message', function(d) {
  server.send('Response from ' + process.pid);
});

// bind to port 5454
server.bind('tcp://0.0.0.0:5454', function(err) {
  if (err) {
    console.error('something bad happened');
    console.error(err.msg);
    console.error(err.stack);
    process.exit(0);
  }
});

What I'm not fully understanding is whether the ROUTER/DEALER pair will handle sending the worker's response back to the correct client.
Also, in this case the DEALER handles the fair queueing, as I want my work distributed among the workers evenly. My clients could be distributed among many different boxes (load-balanced API servers), my router will be on its own server, and the workers will be distributed among multiple boxes as well.

A: Forget REQ/REP in any production-grade app; it can fall into a mutual deadlock. You might find this subject covered in many other posts on the high risk of mutual FSM-FSM deadlocking in the REQ/REP Formal Scalable Communication Pattern.

Be sure, XREQ/XREP == DEALER/ROUTER (already since 2011). The source code removes all the hidden magic behind this: XREQ == DEALER and XREP == ROUTER.

+++ b/include/zmq.h
...
-#define ZMQ_XREQ 5
-#define ZMQ_XREP 6
+#define ZMQ_DEALER 5
+#define ZMQ_ROUTER 6
...
+#define ZMQ_XREQ ZMQ_DEALER /* Old alias, remove in 3.x */
+#define ZMQ_XREP ZMQ_ROUTER /* Old alias, remove in 3.x */
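On the routing question itself: a ROUTER socket prepends the sending peer's identity frame to each inbound message, and routes an outbound message to the peer named by its leading identity frame, which is how replies find their original client. A minimal stand-alone model of that envelope mechanic (plain Node.js, no zmq dependency; the identities, framing and message bodies here are illustrative, not the real wire protocol):

```javascript
// Toy model of ZeroMQ ROUTER envelope handling (illustrative only, no real
// sockets). ROUTER tags each inbound message with the sender's identity and
// uses the leading identity frame to deliver outbound messages.

function routerReceive(peerId, frames) {
  // Inbound: ROUTER prepends the peer's identity frame.
  return [peerId, ...frames];
}

function routerSend(frames, peers) {
  // Outbound: ROUTER strips the identity frame and delivers the rest
  // to the matching peer's queue; unknown identities would be dropped.
  const [peerId, ...rest] = frames;
  if (peers[peerId]) peers[peerId].push(rest);
  return peerId;
}

// Two clients send requests through the broker.
const peers = { clientA: [], clientB: [] };
const fromA = routerReceive('clientA', ['', 'Request from A']);
const fromB = routerReceive('clientB', ['', 'Request from B']);

// A worker that replies with the envelope intact lets each reply
// land back in the right client's queue.
routerSend(fromA, peers);
routerSend(fromB, peers);

console.log(JSON.stringify(peers.clientA)); // [["","Request from A"]]
console.log(JSON.stringify(peers.clientB)); // [["","Request from B"]]
```

This is also why the broker's `backend.send(args)` must forward all frames unchanged: the leading identity frame is what lets the frontend ROUTER route the reply back to the original REQ client.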
National Lawyers Guild Panel on Political Prisoners & Solitary Confinement

SAN JUAN, Oct 25 2013 – The struggle against the torture of solitary confinement is an urgent necessity in building liberation movements in North America. That was the message conveyed to attendees of a major panel on political prisoners and solitary confinement at the National Lawyers Guild's annual convention held in San Juan, Puerto Rico on October 25, 2013. Unfortunately, Dr. Luis Nieves Falcón could not be on the panel as planned due to health issues, but there was an unexpected panelist when Edwin Cortes, a Puerto Rican political prisoner freed by President Clinton in 1999, grabbed the mic. Bret Grote of the Abolitionist Law Center moderated the panel and also spoke about the campaign to end the nearly 30 years of solitary confinement for Russell Maroon Shoatz.

Systemic and severe violations of international human rights law are an endemic—and suppressed—feature of prison conditions in the United States. During the last thirty years the United States has embarked upon a project of race- and class-based mass incarceration unlike anything the world has ever seen. Emerging in this same period has been the regime of super-maximum security prison units, where people are held in solitary confinement 22 to 24 hours a day, seven days a week, often for years on end. These units are defined by extreme restrictions on visitation, phone calls (which are often prohibited), and incoming and outgoing mail, limits on in-cell legal and personal property, and prohibitions on cell decorations. Medical neglect, physical and psychological abuse, food deprivation, racism, and other human rights violations flourish in these conditions, which are effectively hidden from public scrutiny.
Hundreds of thousands of people cycle in and out of the psychologically toxic and emotionally harmful conditions of solitary confinement every year, with more than 80,000 people held in 23- to 24-hour lockdown on any given day in jails, prisons, and immigrant detention centers.

The speakers presented an inspiring and diverse set of stories, insights, and ideas to the 130-150 people in attendance. Jihad Abdulmumit urged the crowd to recognize the context of struggle when discussing political prisoners in the United States, rather than fixating on the question of guilt or innocence. After all, nobody ever asked whether Nelson Mandela participated in the armed struggle (he did, of course); instead, people recognized that he was fighting for the freedom of his people. The same standards should apply to freedom fighters held in the belly of the imperial beast.

Bridging the struggles of the Black Liberation Movement and the Puerto Rican Liberation Movement was the commentary of Mumia Abu-Jamal. Articulating how the pathology of white supremacy infected early 20th-century U.S. Supreme Court jurisprudence regarding Puerto Rico, Mumia traced the arc of struggle of generation after generation of Puerto Ricans in and out of U.S. prisons, as part of their efforts to free their homeland from the crime of colonialism.

Clarisa López Ramos presented a moving account of the anguish and hardship that solitary confinement imposes on the families of prisoners. She spent many hours of her childhood building a relationship with her father, Puerto Rican political prisoner Oscar López Rivera, through the glass partition of the non-contact visiting booth in the United States Penitentiary at Marion, the prototype for supermax prisons.

Next, Azadeh Zohrabi laid out a brilliant overview and analysis of the prisoner human rights movement in California, which has been led by visionary prisoners held in the Pelican Bay State Prison control units.
Her call to "abolish" solitary confinement elicited a powerful round of applause from the audience. Noting that she views the prisoners she represents in a class action lawsuit more as her colleagues than her clients, Azadeh enlisted dozens of audience members to assist with advocacy on behalf of the health care needs of men still suffering the effects of the most recent California prisoner hunger strike, which included more than 30,000 prisoners at its peak.

Edwin Cortes then joined the panelists after being introduced by Jihad, with whom he served time at the federal prison in Lewisburg, Pennsylvania. Cortes emphasized the importance of recognizing that the Obama presidency represents the "same old racism" with a different face, urging those in attendance to continue the struggle for liberation.

Finally, Bret Grote discussed the case of Russell Maroon Shoatz, a Pennsylvania political prisoner who is represented by the Abolitionist Law Center in his efforts to be free from nearly 30 years of isolation, including the last 22 years consecutively. Grote emphasized the structural role of solitary confinement, observing that solitary is used to terrorize the prison population; the prison population is then used to terrorize poor communities in general and communities of color in particular; socio-economic conditions in these communities are used to keep the middle classes in line; and these classes carry out the social, economic, and political agendas of the powerful few who control society.

If the feedback received by the ALC is any indication, those in attendance at the panel left better prepared and inspired to move the struggle against this system forward.
Dark windows lead to multiple charges

12:27 AM, Sep. 3, 2012
Written by Staff report
news@thenewsstar.com

A person of interest in several armed robberies was stopped by Ouachita Parish deputies Sunday and arrested on drug and other charges. Deputies stopped Romondo F. Long, 24, of 105 Post Oak Drive, after they noticed that the vehicle he was driving had windows tinted beyond what is allowed by law. The vehicle also is believed to have been used in a series of robberies. After stopping the car, deputies discovered a 2-year-old child standing in the back seat without a ...
406 F.2d 399
70 L.R.R.M. (BNA) 2284, 1 Fair Empl.Prac.Cas. 583, 1 Empl. Prac. Dec. P 9946

James C. DENT and United States Equal Employment Opportunity Commission, Appellants, v. ST. LOUIS-SAN FRANCISCO RAILWAY COMPANY et al., Appellees.
John A. HYLER et al., Appellants, v. REYNOLDS METAL COMPANY et al., Appellees.
Alvin C. MULDROW et al., Appellants, v. H. K. PORTER COMPANY, Inc., et al., Appellees.
Worthy PEARSON et al., Appellants, v. ALABAMA BY-PRODUCTS CORPORATION et al., Appellees.
Rush PETTWAY et al., Individually and on behalf of others similarly situated, and United States Equal Employment Opportunity Commission, Appellants, v. AMERICAN CAST IRON PIPE COMPANY, Appellees.

Nos. 24810, 24789, 24811-24813.

United States Court of Appeals Fifth Circuit.

Jan. 8, 1969.

Leroy D. Clark, Jack Greenberg, Robert Belton, New York City, Elihu I. Leifer, D. Robert Owen, Attys., Dept. of Justice, Russell Specter, E.E.O. Comm., Washington, D.C., Sanford Jay Rosen, Baltimore, Md., for appellants. William F. Gardner, Jerome A. Cooper, Birmingham, Ala., Richard R. Lyman, Toledo, Ohio, for appellees.

No. 24789: Robert L. Carter, New York City, Orzell Billingsley, Jr., Birmingham, Ala., Richard F. Bellman, New York City, for appellants. James R. Forman, Jr., Birmingham, Ala., Howell T. Heflin, Tuscumbia, Ala., Clarence F. Rhea, Gadsden, Ala., Jerome Cooper, Birmingham, Ala., for appellees.

No. 24811: Oscar W. Adams, Jr., Birmingham, Ala., Jack Greenberg, Leroy D. Clark, Robert Belton, New York City, Sanford Jay Rosen, Baltimore, Md., for appellants. Lucien D. Gardner, Jr., William F. Gardner, Jerome A. Cooper, Birmingham, Ala., Bernard Kleiman, Gen. Counsel, United Steelworkers of Am., Pittsburgh, Pa., for appellees.

No. 24812: Leroy D. Clark, Jack Greenberg, Robert Belton, New York City, Oscar W. Adams, Jr., Birmingham, Ala., Sanford Jay Rosen, Baltimore, Md., for appellants. Drayton T. Scott, Jerome Cooper, William F. Gardner, Birmingham, Ala., for appellees.

No. 24813: Leroy D.
Clark, Jack Greenberg, New York City, Oscar W. Adams, Jr., Birmingham, Ala., Elihu I. Leifer, Atty., Dept. of Justice, Washington, D.C., Macon L. Weaver, U.S. Atty., Birmingham, Ala., Russell Specter, Equal Emp. Oppor-Comm., Washington, D.C., Sanford Jay Rosen, Baltimore, Md., for appellants. James R. Forman, Jr., Birmingham, Ala., for appellees. John Doar, Asst. Atty. Gen., David L. Norman, Atty., Dept. of Justice, Washington, D.C., for U.S. Equal Employment Opportunity Commission; Kenneth F. Holbert, Acting General Counsel, David R. Cashdan, Atty., Equal Employment Opportunity Commission, of counsel. Albert Rosenthal, New York City, David Shapiro, Cambridge, Mass., James M. Nabrit, III, New York City, for appellants Dent, Muldrow, Pearson, Pettway and others; Hawkins, Rhea & Mitchell, Gadsden, Ala., of counsel. Mulholland, Hickey & Lyman, Toledo, Ohio, for appellees Brotherhood of Railway Carmen of America; Clarence Mann, General Chairman of Brotherhood of Railway Carmen of America; Clyde Vinyard, Chairman of Local 60 of Brotherhood of Railroad Carmen of America, in Case No. 24810. Alfred D. Treherne, General Counsel, Washington, D.C., International Union of District 50, United Mine Workers of America, Cooper, Mitch & Crawford, Birmingham, Ala., for appellee International Union of District 50, United Mine Workers of America, in Case No. 24812. Paul R. Moody, St. Louis, Mo., Paul R. Obert, Pittsburgh, Pa., Drayton Nabers, Jr., Cabaniss, Johnston, Gardner & Clark, Birmingham, Ala., for appellees, St. Louis-San Francisco Ry. Co., H. K. Porter Co., Inc., Alabama By-Products Corp. Before COLEMAN and CLAYTON,1 Circuit Judges, and JOHNSON, District judge. COLEMAN, Circuit Judge: 1 Because they present the same legal issue, with no substantial factual differences, these cases were consolidated for appellate disposition. 
The District Court held that actual conciliation attempts by the Equal Employment Opportunity Commission (proceeding under Title VII of the Civil Rights Act of 1964, 42 U.S.C.A. 2000e et seq.) were jurisdictionally prerequisite to the maintenance of an action in the courts under Title VII, 265 F.Supp. 56 (N.D.Ala., 1967). We reverse. 2 The facts in the Dent case may be taken as illustrative of the group. 3 September 10, 1965, Dent filed with the Equal Employment Opportunity Commission a charge that the St. Louis-San Francisco Railway Company and the Brotherhood of Railroad Carmen of America were violating Title VII of the Civil Rights Act of 1964. The substance of the complaint was that the railway company had, on account of race, terminated the employment of Dent and other Negroes, eliminated the job classifications in which they were employed and excluded them from employment in and training programs for other job classifications; that the railway company maintains racially segregated facilities, and that the Brotherhood of Railroad Carmen maintains racially segregated local unions, with Local No. 60 being all-white and Local No. 750 being all-Negro-- these locals being the exclusive bargaining representatives of the employees of the railway company. 4 October 8, 1965, copies of Dent's charges were served on the company and the Brotherhood. 5 December 8, 1965, the Commission issued a decision, after investigation, to the general effect that there was reasonable cause to believe that the company and the Brotherhood were violating Title VII. 6 December 15, 1965, the company was informed of this decision by letter from the Commission's executive director. This letter also discussed the Commission's desire to engage in conciliation, but advised the company that an action would possibly be filed before conciliation could be undertaken.
On this point, the executive director wrote: 7 'A conciliator appointed by the Commission will contact you to discuss means of correcting this discrimination and avoiding it in the future. 8 'Since the charges in this case were filed in the early phases of the administration of Title VII of the Civil Rights Act of 1964, the Commission has been unable to conciliate the matter during the sixty (60) days period provided in Section 706. The Commission is, accordingly, obligated to advise the charging party of his right to bring a civil action pursuant to Section 706(e). 9 'Nevertheless we believe it may serve the purposes of the law and your interests to meet with our conciliator to see if a just settlement can be agreed upon and a lawsuit avoided. 10 'We are hopeful that you will cooperate with us in achieving the objectives of the Civil Rights Act and that we will be able to resolve the matter quickly and satisfactorily to all concerned.' 11 There was no conciliation. 12 Neither the company nor the Brotherhood made any effort to promote conciliation. Because of the unexpectedly large number of complaints that were filed with the Commission and the extremely small staff available, the Commission made no further effort to promote conciliation. 13 By letter dated January 5, 1966, the Commission advised Dent that 'the conciliatory efforts of the Commission have not achieved voluntary compliance with Title VII of the Civil Rights Act of 1964'. The letter continued: 14 'Since your case was presented to the Commission in the early months of the administration of Title VII of the Civil Rights Act of 1964, the Commission was unable to undertake extensive conciliation activities. Additional conciliation efforts will be continued by the Commission. * * * Under Section 706(e) of the Act, you may within thirty (30) days from the receipt of this letter commence a suit in the Federal district court.' The action was filed in the District Court on February 7, 1966.
As stated, the district court dismissed on the ground that 'conciliation * * * is a jurisdictional prerequisite to the institution of a civil action under Title VII'. 15 Section 706(a), 42 U.S.C. 2000e-5(a), after making reference to the receipt by the Commission of a charge of unlawful employment practice, provides: 16 'The Commission shall * * * make an investigation of such charge * * *. If the Commission shall determine, after such investigation, that there is reasonable cause to believe that the charge is true, the Commission shall endeavor to eliminate any such alleged unlawful employment practice by informal methods of conference, conciliation and persuasion.' 17 Section 706(e), 42 U.S.C. 2000e-5(e), provides: 18 'If, within thirty days after a charge is filed with the Commission * * * (except that * * * such period may be extended to not more than sixty days upon a determination by the Commission that further efforts to secure voluntary compliance are warranted), the Commission has been unable to obtain voluntary compliance with this title, the Commission shall so notify the person aggrieved, and a civil action may, within thirty days thereafter, be brought against the respondent named in the charge * * *.' Section 706(e) further provides: 19 'Upon request, the court may, in its discretion, stay further proceedings for not more than sixty days pending * * * the efforts of the Commission to obtain voluntary compliance.' 20 Thus it is quite apparent that the basic philosophy of these statutory provisions is that voluntary compliance is preferable to court action and that efforts should be made to resolve these employment-rights disputes by conciliation both before and after court action. However, we are of the opinion that a plain reading of the statute does not justify the conclusion that, as a jurisdictional requirement for a civil action by the aggrieved employee under Section 706(e), the Commission must actually attempt and engage in conciliation.
21 The United States Court of Appeals for the Fourth Circuit recently considered and decided this issue in companion cases, Ray Johnson v. Seaboard Coast Line Railroad Company and Charles W. Walker v. Pilot Freight Carriers, Inc., 405 F.2d 645. That Court held: 22 'It seems clear to us that the statute, on its face, does not establish an attempt by the Commission to achieve voluntary compliance as a jurisdictional prerequisite. Quite obviously, 42 U.S.C. 2000e-5(a) does charge the Commission with the duty to make such an attempt if it finds reasonable cause, 'but it does not prohibit a charging party from filing suit when such an attempt fails to materialize'. Mondy v. Crown Zellerbach Corp., 271 F.Supp. 258, 262 (E.D.La.1967). Subsection (e), which contains the authorization for civil actions, provides only that the action may not be brought unless (within 30 days) 'the Commission (had) been unable to obtain voluntary compliance.' 23 'The defendants argue that Section 2000e-5 must be read as a whole and that, so read, the use of the word, 'unable', in subsection (e) implies that the duty imposed by subsection (a) must be fully performed before a civil action is authorized. We do not agree. 'Unable' is not defined by statute to give it a narrow or special meaning. We think 'unable' means simply unable-- and that a commission prevented by lack of appropriations and inadequate staff from attempting persuasion is just as 'unable' to obtain voluntary compliance as a commission frustrated by the recalcitrance of an employer or a union. Contra, Dent v. St. Louis-San Francisco Ry. Co., 265 F.Supp. 56, 61 (N.D.Ala.1967). At most, we think, a reading of the two sections together means only that the Commission must be given an opportunity to persuade before an aggrieved person may resort to court action. See Stebbins v. Nationwide Mut. Ins. Co., 382 F.2d 267 (4th Cir. 1967); Mickel v. South Carolina State Employment Serv., 377 F.2d 239 (4th Cir. 1967).'
24 Similarly, the United States Court of Appeals for the Seventh Circuit considered and decided the same issue in Choate v. Caterpillar Tractor Co., 402 F.2d 357. In the following language, the Seventh Circuit rejected the no-jurisdiction argument: 25 'In the present case, although the complainant makes no allegation concerning the conciliation efforts of the Commission, it is clear from the face of the complaint that the Commission had the opportunity to investigate and conciliate, in that the Commission could have investigated and attempted to conciliate between the filing of the charge on March 14, 1966 and the issuance of its October 5, 1966 letter stating that it had reasonable cause to believe that a violation had occurred. 26 'We believe that these allegations are sufficient to state a claim under section 706. A complainant may have no knowledge when he received the required notification of what conciliation efforts have been exerted by the Commission. And more importantly, even if no efforts were made at all, the complainant should not be made the innocent victim of a dereliction of statutory duty on the part of the Commission.' 27 We particularly agree with the reasoning of the majority in the Fourth Circuit cases and it is upon that reasoning that we reverse the judgment below. See also Oatis v. Crown Zellerbach Corp., 5 Cir., 1968, 398 F.2d 496, and Jenkins v. United Gas Corp., 5 Cir., 1968, 400 F.2d 28. 28 In arriving at the conclusion that actual conciliatory efforts are jurisdictionally prerequisite, the District Court relied heavily on the legislative history of the Act. The majority and the dissenting opinions in Johnson and Walker, 4 Cir., supra, extensively analyze this aspect of the problem, obviating any necessity for prolonged repetition here. As a matter of fact, the Congressional committee reports and floor debates lend great comfort to both sides.
This, we believe, leaves no clearly discernible Congressional intent, certainly not enough to avoid plain statutory language. Section 2000e-5(e), Title 42, U.S.C.A. very clearly sets out only two requirements for an aggrieved party before he can initiate his action in the United States district court: (1) he must file a charge with the Equal Employment Opportunity Commission and (2) he must receive the statutory notice from the Commission that it has been unable to obtain voluntary compliance. It is extremely important in these cases that both the spirit and the letter of Title VII reflect an unequivocal intent on the part of Congress to create a right of action in the aggrieved employee. The dismissal of these cases deprived the aggrieved employee of that right of action, not because of some failure on his part to comply with the requirements of the Title, but for the Commission's failure to conciliate-- a failure that was and will always be beyond the control of the aggrieved party. 29 We do not overlook the fact that in November, 1966, the Commission issued a regulation stating that it 'shall not issue a notice * * * where reasonable cause has been found, prior to efforts at conciliation with respondent,' except, that, after sixty (60) days from the filing of the charge, the Commission will issue a notice upon demand of either the charging party or the respondent, 29 C.F.R. 1601, 25a. 30 It may be that this regulation will generally put an end to cases in the posture of that here decided. 31 In any event, these appeals are decided on the facts and circumstances herein reported. The Court does not have before it, and it is not now passing upon, a situation, if there were to be one, in which the Commission as a matter of routine simply abandons all efforts at actual conciliation. 32 It is not to be doubted that Congress did intend that where possible these controversies should be settled by conciliation rather than by litigation. 
The statute ought to be so administered. 33 For the reasons herein enumerated, the judgment of the District Court will be reversed and remanded for further proceedings not inconsistent herewith. 34 Reversed and remanded. 1 Judge Clayton, the third judge constituting the court, participated in the hearing and the decision of this case. The present opinion is rendered by a quorum of the court pursuant to 28 U.S.C.A. 46, Judge Clayton having taken no part in the final draft of this opinion.
{ "pile_set_name": "FreeLaw" }
'Q1 smartphone sales flat as demand weakens in China, Brazil' NEW DELHI: Global smartphone shipments remained flat at 344 million units in January-March, impacted by weaker demand in China and Brazil as well as parts of Europe, research firm Counterpoint said today. This is the first time ever since the launch of smartphones that the segment has seen no growth, it said. Three out of four mobile phones shipped on the planet now are smartphones, but shipment has "slowed down considerably", Counterpoint said in a report. "The slowdown can be attributed to higher sell-in during the holiday season quarter and weaker demand in markets such as Brazil, China, Indonesia and parts of Europe," the report said. Also, Chinese brands like Huawei, Xiaomi and Oppo have captured 33 per cent of the global smartphone market, it added. "This is the first time ever since the launch of the smartphone that the segment has seen 0 per cent growth, signaling the key global scale players need to invigorate sales with more exciting products and pricing schemes," he said. Samsung led the smartphone market by volume with a market share of 22.8 per cent. However, its smartphone shipments declined six per cent year-on-year. Apple, which saw its iPhone shipments declining, ranked second in the tally with 14.9 per cent share. Others in the list included Huawei (8.3 per cent), Xiaomi (4.2 per cent), LG and Oppo (3.9 per cent each). In terms of revenue share, Apple led the smartphone market with a healthy 40 per cent revenue share. Samsung (23.2 per cent), Huawei (6 per cent), Oppo (3.6 per cent), Xiaomi and LG (2.8 per cent each) followed in the tally.
{ "pile_set_name": "Pile-CC" }
Mini-incision open donor nephrectomy as an alternative to classic lumbotomy: evolution of the open approach. In Europe, the vast majority of transplant centres still performs open donor nephrectomy. This approach can therefore be considered the gold standard. At our institution, classic lumbotomy (CL) was replaced by a mini-incision anterior flank incision (MIDN) thereby preserving the integrity of the muscles. Data of 60 donors who underwent MIDN were compared with 86 historical controls who underwent CL without rib resection. Median incision length measured 10.5 and 20 cm (MIDN versus CL, P < 0.001). Median operation time was 158 and 144 min (P = 0.02). Blood loss was significantly less after MIDN (median 210 vs. 300 ml, P = 0.01). Intra-operatively, 4 (7%) and 1 (1%) bleeding episodes occurred. Postoperatively, complications occurred in 12% in both groups (P = 1.00). Hospital stay was 4 and 6 days (P < 0.001). In one (2%) and 11 (13%) donors (P = 0.02) late complications related to the incision occurred. After correction for baseline differences, recipient serum creatinine values were not significantly different during the first month following transplantation. In conclusion, MIDN is a safe approach, which reduces blood loss, hospital stay and the number of incision related complications when compared with CL with only a modest increase in operation time.
{ "pile_set_name": "PubMed Abstracts" }
![](brmedchirj271384-0072){#sp1 .67} ![](brmedchirj271384-0073){#sp2 .68}
{ "pile_set_name": "PubMed Central" }
I was introduced to Cowboy Cookies several years ago by browsing through the New York Times recipes, and I’m glad I stumbled upon it because I have enjoyed it ever since. The Cowboy Cookie recipe comes from Laura Bush – it was her submission in a Family Circle magazine cookie contest back in 2000, against Tipper Gore’s ginger snaps. Laura Bush’s recipe won and here it is. These cookies freeze well and are great to serve with a couple scoops of ice cream and chocolate sauce. It’s an easy dessert to pull out of the freezer when you have guests over.

Cowboy Cookies

3 cups all-purpose flour
1 tablespoon baking powder
1 tablespoon baking soda
1 tablespoon ground cinnamon
1 teaspoon salt
1 1/2 cups butter (3 sticks), at room temperature
1 1/2 cups granulated sugar
1 1/2 cups packed light brown sugar
3 eggs
1 tablespoon vanilla
3 cups semi sweet chocolate chips
3 cups old fashioned rolled oats
2 cups unsweetened flake coconut
2 cups chopped pecans (8 ounces)

Heat oven to 350 degrees. Mix flour, baking powder, baking soda, cinnamon and salt together in a bowl. In a very large bowl, beat butter with an electric mixer at medium speed until smooth and creamy. Gradually beat in sugars, and combine thoroughly. Add eggs one at a time, beating after each. Beat in vanilla. Stir in flour mixture until just combined. Stir in chocolate chips, oats, coconut and pecans. For each cookie, drop 1/4 cup dough onto ungreased baking sheets, spacing three inches apart. Bake 10-12 minutes until edges are lightly browned. Rotate sheets halfway through. Remove cookies from rack to cool. Makes 3 to 3 1/2 dozen.

Sweet Notes: A few things about this recipe are large, including the amount of dough and cookies it makes, so you may want to cut the recipe in half. Then, using the 1/4 cup for the cookies makes very large cookies. I like them and think they make a nice dessert but Scott prefers a smaller size so we have the Scott size and it’s about 1/8 cup. 
I still use the 1/4 cup but just fill it up about half way. It works too! Bita Arabian, a good friend and fellow blogger, has introduced the California Cranberry Cowboy Cookies, a modified version of Laura Bush’s Cowboy Cookies – feel free to visit her blog, Oven Hug, to take a look.
{ "pile_set_name": "Pile-CC" }
/* * Copyright (c) 2009 Apple Inc. All rights reserved. * * @APPLE_OSREFERENCE_LICENSE_HEADER_START@ * * This file contains Original Code and/or Modifications of Original Code * as defined in and that are subject to the Apple Public Source License * Version 2.0 (the 'License'). You may not use this file except in * compliance with the License. The rights granted to you under the License * may not be used to create, or enable the creation or redistribution of, * unlawful or unlicensed copies of an Apple operating system, or to * circumvent, violate, or enable the circumvention or violation of, any * terms of an Apple operating system software license agreement. * * Please obtain a copy of the License at * http://www.opensource.apple.com/apsl/ and read it before using this file. * * The Original Code and all software distributed under the License are * distributed on an 'AS IS' basis, WITHOUT WARRANTY OF ANY KIND, EITHER * EXPRESS OR IMPLIED, AND APPLE HEREBY DISCLAIMS ALL SUCH WARRANTIES, * INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE, QUIET ENJOYMENT OR NON-INFRINGEMENT. * Please see the License for the specific language governing rights and * limitations under the License. 
* * @APPLE_OSREFERENCE_LICENSE_HEADER_END@ */ #include <stdio.h> #include <CoreFoundation/CoreFoundation.h> #include <kxld.h> #define kCFBundleGetInfoStringKey CFSTR("CFBundleGetInfoString") #define kNSHumanReadableCopyrightKey CFSTR("NSHumanReadableCopyright") const char *gProgname = NULL; static void usage(void); static void printFormat(void); static char *convert_cfstring(CFStringRef the_string); /****************************************************************************** ******************************************************************************/ static void usage(void) { printf("usage: %s [path to kext]\n\n" "This program validates the copyright string in a kext's info " "dictionary.\n\n", gProgname); printFormat(); } /****************************************************************************** ******************************************************************************/ static void printFormat(void) { fprintf(stderr, "The copyright string should be contained in the NSHumanReadableCopyright key.\n" "It should be of the format:\n" "\tCopyright © [year(s) of publication] Apple Inc. All rights reserved.\n\n" "where [year(s) of publication] is a comma-separated list of years and/or\n" "year ranges, e.g., 2004, 2006-2008. Years must be four digits. Year ranges\n" "may not contain spaces and must use four digits for both years.\n\n" "The following are examples of valid copyright strings:\n" "\tCopyright © 2008 Apple Inc. All rights reserved.\n" "\tCopyright © 2004-2008 Apple Inc. All rights reserved.\n" "\tCopyright © 1998,2000-2002,2004,2006-2008 Apple Inc. 
All rights reserved.\n"); } /****************************************************************************** ******************************************************************************/ char * convert_cfstring(CFStringRef the_string) { char *result = NULL; CFDataRef the_data = NULL; const UInt8 *data_bytes = NULL; char *converted_string = NULL; u_long converted_len = 0; u_long bytes_copied = 0; the_data = CFStringCreateExternalRepresentation(kCFAllocatorDefault, the_string, kCFStringEncodingUTF8, 0); if (!the_data) { fprintf(stderr, "Failed to convert string\n"); goto finish; } data_bytes = CFDataGetBytePtr(the_data); if (!data_bytes) { fprintf(stderr, "Failed to get converted string bytes\n"); goto finish; } converted_len = strlen((const char *)data_bytes) + 1; // +1 for nul converted_string = malloc(converted_len); if (!converted_string) { fprintf(stderr, "Failed to allocate memory\n"); goto finish; } bytes_copied = strlcpy(converted_string, (const char *) data_bytes, converted_len) + 1; // +1 for nul if (bytes_copied != converted_len) { fprintf(stderr, "Failed to copy converted string\n"); goto finish; } result = converted_string; finish: if (the_data) CFRelease(the_data); return result; } /****************************************************************************** ******************************************************************************/ int main(int argc, const char *argv[]) { int result = 1; boolean_t infoCopyrightIsValid = false; boolean_t readableCopyrightIsValid = false; CFURLRef anURL = NULL; // must release CFBundleRef aBundle = NULL; // must release CFDictionaryRef aDict = NULL; // do not release CFStringRef infoCopyrightString = NULL; // do not release CFStringRef readableCopyrightString = NULL; // do not release char *infoStr = NULL; // must free char *readableStr = NULL; // must free gProgname = argv[0]; if (argc != 2) { usage(); goto finish; } anURL = CFURLCreateFromFileSystemRepresentation(kCFAllocatorDefault, (const UInt8 *) argv[1], 
strlen(argv[1]), /* isDirectory */ FALSE); if (!anURL) { fprintf(stderr, "Can't create path from %s\n", argv[1]); goto finish; } aBundle = CFBundleCreate(kCFAllocatorDefault, anURL); if (!aBundle) { fprintf(stderr, "Can't create bundle at path %s\n", argv[1]); goto finish; } aDict = CFBundleGetInfoDictionary(aBundle); if (!aDict) { fprintf(stderr, "Can't get info dictionary from bundle\n"); goto finish; } infoCopyrightString = CFDictionaryGetValue(aDict, kCFBundleGetInfoStringKey); readableCopyrightString = CFDictionaryGetValue(aDict, kNSHumanReadableCopyrightKey); if (!infoCopyrightString && !readableCopyrightString) { fprintf(stderr, "This kext does not have a value for NSHumanReadableCopyright"); goto finish; } if (infoCopyrightString) { fprintf(stderr, "Warning: This kext has a value for CFBundleGetInfoString.\n" "This key is obsolete, and may be removed from the kext's Info.plist.\n" "It has been replaced by CFBundleVersion and NSHumanReadableCopyright.\n\n"); infoStr = convert_cfstring(infoCopyrightString); if (!infoStr) goto finish; infoCopyrightIsValid = kxld_validate_copyright_string(infoStr); } if (readableCopyrightString) { readableStr = convert_cfstring(readableCopyrightString); if (!readableStr) goto finish; readableCopyrightIsValid = kxld_validate_copyright_string(readableStr); } if (!readableCopyrightIsValid) { if (infoCopyrightIsValid) { fprintf(stderr, "Warning: The copyright string in NSHumanReadableCopyright is invalid,\n" "but the string in CFBundleGetInfoString is valid. CFBundleGetInfoString is\n" "obsolete. Please migrate your copyright string to NSHumanReadableCopyright.\n\n"); } else { fprintf(stderr, "Error: There is no valid copyright string for this kext.\n\n"); printFormat(); goto finish; } } result = 0; finish: if (anURL) CFRelease(anURL); if (aBundle) CFRelease(aBundle); if (infoStr) free(infoStr); if (readableStr) free(readableStr); return result; }
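The format that printFormat() documents lends itself to a regular-expression check. The sketch below is a hypothetical Python re-implementation of those rules, for illustration only — the real validation is performed by kxld_validate_copyright_string() in libkxld, whose exact logic may be stricter or looser:

```python
import re

# Years must be four digits; ranges use four digits on both sides and no spaces.
YEAR = r"\d{4}"
YEAR_OR_RANGE = rf"{YEAR}(?:-{YEAR})?"
# Comma-separated list of years and/or ranges, e.g. 1998,2000-2002,2004,2006-2008.
# An optional space after each comma is tolerated here, matching the prose example.
YEARS = rf"{YEAR_OR_RANGE}(?:,\s?{YEAR_OR_RANGE})*"

PATTERN = re.compile(rf"Copyright © {YEARS} Apple Inc\. All rights reserved\.")

def looks_like_valid_copyright(s: str) -> bool:
    """Approximate check of a string against the documented copyright format.

    Uses search() rather than fullmatch() because the notice may be embedded
    in a longer info string.
    """
    return PATTERN.search(s) is not None
```

All three example strings from printFormat() pass this check, while a two-digit year such as "Copyright © 98 Apple Inc. All rights reserved." is rejected.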
{ "pile_set_name": "Github" }
1810 Massachusetts's 10th congressional district special election

A special election was held in Massachusetts's 10th congressional district on October 8, 1810, to fill a vacancy left by the resignation of Jabez Upham (F).

Election results

Allen took his seat December 13, 1810.

See also
List of special elections to the United States House of Representatives

References
{ "pile_set_name": "Wikipedia (en)" }
Q: jQuery input event triggered after page reload only

Basically I have two divs,
<div id="firstPage"></div>
and
<div id="details">
    <div id='searchBox' class='detail--searchBox'>
        <input id='txtSearchTerm' name='txtSearchTerm' type='text'>
    </div>
</div>

On page load the textbox with id txtSearchTerm is hidden, and it is shown on the click event of the div with id firstPage. I have the following function which must be triggered on textbox change. But this input event is only triggered after the page is reloaded.

onSearchBoxOnDetailsChange: function () {
    $("#searchBox").on("input", "#txtSearchTerm", function () {
        var val = $(this).val().trim();
        val = val.replace(/\s+/g, '');
        if (val.length > 1) {
            console.log(val);
        } else {
        }
    });
},

A: Write your code like below:

onSearchBoxOnDetailsChange: function () {
    $(document).on("input", "#searchBox #txtSearchTerm", function () {
        var val = $(this).val().trim();
        val = val.replace(/\s+/g, '');
        if (val.length > 1) {
            console.log(val);
        } else {
        }
    });
},

This works because $(document) always exists when the binding runs, and a delegated handler evaluates the selector "#searchBox #txtSearchTerm" at event time. If #searchBox is not yet in the DOM when onSearchBoxOnDetailsChange runs, $("#searchBox") matches nothing and no handler is ever attached — the likely cause of the behavior described in the question.
{ "pile_set_name": "StackExchange" }
Incidence of fever of unknown origin and subsequent antitubercular medications in hemodialysis patients: a two-year prospective study. Hemodialysis (HD) patients are susceptible to atypical tuberculosis (TB), especially among patients presenting with fever of unknown origin (FUO), because of their impaired cellular immunity. Diagnostic trials of anti-TB drugs are therefore recommended in some TB endemic countries, including Japan, though clinical evidence for this therapy is scarce. We prospectively collected data for incident cases of clinical FUO for two years in 78 of 169 dialysis facilities in Aichi prefecture, located in central Japan. Clinical FUO was defined as sustained fever without any localizing signs and no infiltration on chest x-rays after a one-week antibiotic trial. The baseline characteristics, subsequent body temperatures on the days of HD therapy, and names of antibiotics including anti-TB drugs with the durations of medication were reported until fever alleviation or fever sustainment for over eight weeks. We identified 15 newly developed clinical FUO patients among 8,125 HD patients. The incidence rate was estimated to be 92 (95% CI, 26-158) per 100,000 person-years. This corresponds to 244 cases per year among 264,473 HD patients in Japan. Anti-TB drugs were secondarily prescribed in 8 of 15 clinical FUO patients (53%). No improved fever alleviation was observed when anti-TB drugs were secondarily prescribed compared with cases in which other antibiotics were preferred. We investigated the incidence of FUO in HD patients and found that the rate was not very high, whereas anti-TB drugs were frequently used for FUO cases. The efficacy of this diagnostic therapy should be elucidated in large-scale studies.
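The headline rate in this abstract can be reproduced directly from the reported counts. A back-of-the-envelope check (assuming, as the abstract implies but does not state exactly, that all 8,125 patients contributed the full two years of follow-up; the published 95% CI of 26-158 was presumably computed by a different interval method than this point estimate):

```python
# Point estimate of the incidence rate of clinical FUO.
cases = 15              # incident clinical FUO cases observed
patients = 8125         # HD patients under surveillance
years = 2               # two-year prospective window (full follow-up assumed)

person_years = patients * years                    # 16,250 person-years
rate_per_100k = cases / person_years * 100_000
print(round(rate_per_100k))                        # 92 per 100,000 person-years

# Extrapolation to all HD patients in Japan, as done in the abstract.
national_patients = 264_473
expected_cases_per_year = national_patients * rate_per_100k / 100_000
print(round(expected_cases_per_year))              # 244 cases per year
```

Both rounded values match the figures reported in the abstract.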
{ "pile_set_name": "PubMed Abstracts" }
Coexistence mechanisms and the paradox of the plankton: quantifying selection from noisy data. Many species of phytoplankton typically co-occur within a single lake, as do many zooplankton species (the "paradox of the plankton"). Long-term co-occurrence suggests stable coexistence. Coexistence requires that species be equally "fit" on average. Coexistence mechanisms can equalize species' long-term average fitnesses by reducing fitness differences to low levels at all times, and by causing species' relative fitness to fluctuate over time, thereby reducing differences in time-averaged fitness. We use recently developed time series analysis techniques drawn from population genetics to estimate the strength of net selection (time-averaged selection over a year) and fluctuating selection (an index of the variation in selection throughout the year) in natural plankton communities. Analysis of 99 annual time series of zooplankton species dynamics and 49 algal time series reveals that within-year net selection generally is statistically significant but ecologically weak. Rates of net selection are ~10 times faster in laboratory competition experiments than in nature, indicating that natural coexistence mechanisms are strong. Most species experience significant fluctuating selection, indicating that fluctuation-dependent mechanisms may contribute to coexistence. Within-year net selection increases with enrichment, implying that among-year coexistence mechanisms such as trade-offs between competitive ability and resting egg production are especially important at high enrichment. Fluctuating selection also increases with enrichment but is independent of the temporal variance of key abiotic factors, suggesting that fluctuating selection does not emerge solely from variation in abiotic conditions, as hypothesized by Hutchinson. 
Nor does fluctuating selection vary among lake-years because more variable abiotic conditions comprise stronger perturbations to which species exhibit frequency-dependent responses, since models of this mechanism fail to reproduce observed patterns of fluctuating selection. Instead, fluctuating selection may arise from internally generated fluctuations in relative fitness, as predicted by models of fluctuation-dependent coexistence mechanisms. Our results place novel constraints on hypotheses proposed to explain the paradox of the plankton.
{ "pile_set_name": "PubMed Abstracts" }
The whale (Odontoceti) spleen: a type of primitive mammalian spleen. Three spleens from two Odontoceti species were studied histo-anatomically. These spleens consisted of lymphatic nodules, the red pulp (broad sense), and the trabeculo-capsular system composed of the elasto-fibroleiomyocytic tissue. The periarterial lymphatic sheath (PALS) was unclear. Two layers, the intermediate zone and perivenous layer, were distinguishable in the red pulp (broad sense). The perivenous layer was narrow in width and consisted of venules and the intervascular reticular tissue rich in myeloid cells. The collecting and drainage veins were enclosed in this layer. The perivenous layer corresponds to the red pulp (narrow sense) of the common mammalian spleen and may be under involution in a process that probably relates to the remodelling of the intrasplenic vein. The pattern of the arteriovenous communication seemed to be closed, and no ellipsoids were noted around arterial terminals. The Odontoceti spleen has two venous drainage routes (hilar and capsular systems), suggesting a primitive state of evolution, and may be an additional example of the primitive mammalian spleen.
{ "pile_set_name": "PubMed Abstracts" }
Previously, molds for forming engine blocks with expendable cores for forming jackets around the cylinders were known. These cores, however, were primarily held by printouts extending from their cylindrical outer surfaces. Also, the dies for forming such engine blocks included dowels mounted on the dies for forming the inside of the cylinders and for mounting liners for the cylinders. In casting V-engine blocks, complicated angularly mounted die parts which moved axially of the centerlines of the cylinders were also required for removal of the dowels from the cylinders in the blocks. Thus V-engine block castings required a minimum of six and usually eight relatively movable die parts.
{ "pile_set_name": "USPTO Backgrounds" }
Coronary artery disease is the single leading cause of human mortality and is annually responsible for over 900,000 deaths in the United States alone. Additionally, over 3 million Americans suffer chest pain (angina pectoris) because of it. Typically, the coronary artery becomes narrowed over time by the build up of fat, cholesterol and blood clots. This narrowing of the artery is called arteriosclerosis; and this condition slows the blood flow to the heart muscle (myocardium) and leads to angina pectoris due to a lack of nutrients and adequate oxygen supply. Sometimes it can also completely stop the blood flow to the heart causing permanent damage to the myocardium, the so-called "heart attack." The conventional treatment procedures for coronary artery disease vary with the severity of the condition. If the coronary artery disease is mild, it is first treated with diet and exercise. If this first course of treatment is not effective, then the condition is treated with medications. However, even with medications, if chest pain persists (which is usually secondary to development of serious coronary artery disease), the condition is often treated with invasive procedures to improve blood flow to the heart. Currently, there are several types of invasive procedures: (1) Catheterization techniques by which cardiologists use balloon catheters, atherectomy devices or stents to reopen the blockages of coronary arteries; or (2) Surgical bypass techniques by which surgeons surgically place a graft obtained from a section of artery or vein removed from other parts of the body to bypass the blockage. Conventionally, before the invasive procedures are begun, coronary artery angiography is usually performed to evaluate the extent and severity of the coronary artery blockages. Cardiologists or radiologists thread a thin catheter through an artery in the leg or arm to engage the coronary arteries. 
X-ray dye (contrast medium) is then injected into the coronary artery through a portal in the catheter, which makes the coronary arteries visible under X-ray, so that the position and size of the blockages in the coronary arteries can be identified. Each year in U.S.A., more than one million individuals with angina pectoris or heart attack undergo coronary angiographies for evaluation of such coronary artery blockages. Once the blocked arteries are identified, the physician and surgeons then decide upon the best method to treat them. One of the medically accepted ways to deal with coronary arterial blockage is percutaneous transluminal coronary angioplasty (PTCA). In this procedure, cardiologists thread a balloon catheter into the blocked coronary artery and stretch it by inflating the balloon against the arterial plaques causing vascular blockage. The PTCA procedure immediately improves blood flow in the coronary arteries, relieves angina pectoris, and prevents heart attacks. Approximately 400,000 patients undergo PTCA each year in the U.S. However, when the arterial blockages are severe or widespread, the angioplasty procedure may fail or cannot be performed. In these instances, coronary artery bypass graft (CABG) surgery is then typically performed. In such bypass surgery, surgeons typically harvest healthy blood vessels from another part of the body and use them as vascular grafts to bypass the blocked coronary arteries. Each vascular graft is surgically attached with one of its ends joined to the aorta and the other end joined to the coronary artery. Approximately 500,000 CABG operations are currently performed in the U.S. each year to relieve symptoms and improve survival from heart attack. It is useful here to understand in depth what a coronary arterial bypass entails and demands both for the patient and for the cardiac surgeon. 
In a standard coronary bypass operation, the surgeon must first make a foot-long incision in the chest and split the breast bone of the patient. The operation requires the use of a heart-lung machine that keeps the blood circulating while the heart is being stopped and the surgeon places and attaches the bypass grafts. To stop the heart, the coronary arteries also have to be perfused with a cold potassium solution (cardioplegia). In addition, the body temperature of the patient is lowered by cooling the blood as it circulates through the heart-lung machine in order to preserve the heart and other vital organs. Then, as the heart is stopped and a heart-lung machine pumps oxygenated blood through the patient's body, the surgeon makes a tiny opening into the front wall of the target coronary artery with a very fine knife (arteriotomy); takes a previously excised saphenous vein (a vein from a leg) or an internal mammary artery (an artery from the chest); and sews the previously excised blood vessel to the coronary artery. The most common blood vessel harvested for use as a graft is the greater (long) saphenous vein, which is a long straight vein running from just inside the ankle bone to the groin. The greater saphenous vein provides a bypass conduit of the most desired size, shape, and length for use with coronary arteries. The other blood vessel frequently used as a bypass graft is the left or right internal mammary artery, which comes off the subclavian artery and runs alongside the undersurface of the breastbone (sternum). Typically, the internal mammary artery remains attached to the subclavian artery proximally (its upper part) but is freed up distally (its lower part); and it is then anastomosed to the coronary artery. However, the saphenous vein graft should be sewn not only to coronary artery but also to the aorta, since the excised vein is detached at both ends. 
Then, to create the anastomosis at the aorta, the ascending thoracic aorta is first partially clamped using a curved vascular clamp to occlude the proper segment of the ascending aorta; and a hole is then created through the front wall of the aorta to anchor the vein graft with sutures. The graft bypasses the blockage in the coronary artery and restores adequate blood flow to the heart. After completion of the grafting, the patient is taken off of the heart-lung machine and the patient's heart starts beating again. Most of the patients can leave the hospital in about 6 days after the CABG procedure. It will be noted that coronary artery bypass surgery is considered a more definitive method for treating coronary arterial disease because all kinds of obstructions cannot be treated by angioplasty, and because a recurrence of blockages in the coronary arteries even after angioplasty is not unusual. Also coronary artery bypass surgery usually provides for a longer patency of the grafts and the bypassed coronary arteries in comparison with the results of PTCA procedure. However, coronary artery bypass surgery is a far more complicated procedure, having need of a heart-lung machine and a stoppage of the heart. Also, it is clearly the more invasive procedure and is more expensive. A less invasive alternative uses the left internal mammary artery as a graft to the coronary arteries such as the left anterior descending, diagonal branches, and ramus intermedius arteries (which are located on the surface of the heart relatively close to the left internal mammary artery). 
However, there are several disadvantages associated with a CABG operation with a left internal mammary artery graft, which are as follows: (1) technically, this artery is more tedious to take down; (2) sometimes the left internal mammary artery is inadequate in size and length; (3) the operation is suitable only for the five percent of candidates for coronary artery bypass because only a single left internal mammary artery is available as a graft; (4) anatomically, the operation is limited mainly to the left anterior descending coronary artery because of its location ad length; and (5) the majority of patients need more than single vessel bypass surgery. In comparison, coronary arteries as small as 1 mm in diameter can be revascularized by vein grafting; and the saphenous vein is longer, larger, and more accessible than the left internal mammary artery. Equally important, although the greater or lesser saphenous veins of the leg are preferred, the cephalic or basilic veins in the arm are available as alternatives when the leg veins in the patient are unavailable or are unsuitable. For these reasons, the vein graft has today become the standard conduit for myocardial revascularization. There remains, however, a long-standing and continuing need for a bypass technique which would allow surgeons to perform multiple bypass procedures using vein grafts as vascular shunts in a minimally invasive way, and, in particular, the need remains for a simpler method to place more than one vein graft proximally to the aorta and distally to the coronary artery without using a heart-lung machine and without stopping the heart. If such a technique were to be created, it would be recognized as a major advance in bypass surgery and be of substantial benefit and advantage for the patient suffering from coronary artery disease.
{ "pile_set_name": "USPTO Backgrounds" }
Beginning July 1, Indiana homeowners who have fallen behind on their mortgage due to an involuntary loss of employment or reduction in employment income will have the opportunity to apply for one-time reinstatement assistance of up to $30,000. "Both the economy and the housing market in Indiana have bounced back dramatically since we were first awarded Hardest Hit Fund dollars in 2011," said Lt. Governor Suzanne Crouch, who serves as IHCDA's board chair. "Understanding these positive changes, which include a record low in the state's unemployment rate, we adjusted our program to best address the current needs homeowners have in our state." In October 2009, Indiana and the country were in the midst of a recession and approaching levels of unemployment not seen since the early 1980s. In Indiana, the unemployment rate reached 10%. In addition to this, Indiana experienced a high rate of families facing foreclosure, which is why we were one of 18 states that was awarded HHF dollars by the U.S. Department of the Treasury. "The HHF program has allowed us to help more than 10,000 Hoosier families across all 92 counties remain in their homes," said Jacob Sipe, Executive Director at IHCDA. "We are pleased to be in a position to reopen the application portal and provide targeted and immediate assistance to working Hoosiers who have fallen behind on their mortgage." The new program will cover one-time reinstatement-only assistance for eligible Indiana homeowners for up to $30,000. To be eligible, homeowners must have fallen behind on their mortgage due to an involuntary loss of employment or reduction in employment income. In order to qualify, homeowners must be able to make current mortgage payments but unable to pay the past-due balance. Last June, Indiana's HHF program stopped accepting applications to ensure sufficient funds remained to assist the homeowners enrolled in the program. 
Due to recycled and reallocated funds, there are now sufficient funds available to reopen the application portal. The portal will remain open until the available funds have been distributed. In addition to the updated program, the Indiana Foreclosure Prevention Network (IFPN) will continue to provide free foreclosure prevention counseling. This counseling is offered through a network of housing counselors throughout the state who work directly with homeowners and their lenders to find a solution to their specific financial situation.
1. Introduction {#sec1-ijms-16-22811}
===============

Patients affected by chronic myeloid leukemia (CML) were the first to benefit from a targeted therapy with imatinib in the late 1990s \[[@B1-ijms-16-22811]\], and this drug remains the most frequently used first-line therapy for CML \[[@B2-ijms-16-22811]\]. Nowadays several targeted agents are in clinical use to treat haematological and solid neoplasms, while a multitude of others are in clinical development. Since the introduction of tyrosine kinase inhibitors (TKIs), the outcome of CML patients has improved, with 82% achieving complete cytogenetic responses (CCyR) and more than 90% of patients alive and free from progression eight years after diagnosis \[[@B3-ijms-16-22811],[@B4-ijms-16-22811],[@B5-ijms-16-22811],[@B6-ijms-16-22811]\]. Nevertheless, about 30% of treated patients must interrupt the treatment because of poor tolerability or cytogenetic or molecular failure \[[@B7-ijms-16-22811],[@B8-ijms-16-22811]\]. These situations may be managed by substituting imatinib with a second- or third-generation TKI, such as nilotinib, dasatinib, bosutinib or, most recently, ponatinib. Although these drugs share the same mechanism of action as imatinib, they differ in the spectrum of other intracellular kinases they inhibit, as well as in their substrate affinity for metabolic enzymes and transmembrane transporters. As a consequence, there are non-negligible differences among the TKIs, which means that the predictive biomarkers for one drug do not perfectly fit another. Therefore, the present review will present and discuss possible pharmacogenetic markers predictive of response to TKIs.
The published literature was evaluated through the PubMed database using the following keywords in different combinations: chronic myeloid leukemia, imatinib, nilotinib, dasatinib, bosutinib, ponatinib, pharmacogenetic, pharmacogenomic, polymorphisms, gene expression, pharmacokinetics. Other pertinent articles were selected from the references of the retrieved literature.

2. Many Pieces for the Pharmacogenetic Puzzle {#sec2-ijms-16-22811}
=============================================

Many preclinical experiments and clinical trials have focused their efforts on investigating any possible gene variation (in its broadest meaning) that could be predictive of treatment efficacy or toxicity and that could be applied in clinical practice. Several studies identified more than one pharmacogenomic determinant at the same time, hence revealing the multifactorial nature of TKI efficacy and/or toxicity. In other cases, the search for predictive biomarkers was conducted with the aid of a pharmacokinetic approach, through the analysis of plasma concentrations of drugs. In this search for biomarkers, a clear-cut definition of clinical endpoints is an essential requirement. In CML patients, treatment efficacy may be graded across hematologic, cytogenetic and molecular responses, according to the most recent European guidelines \[[@B7-ijms-16-22811]\]. In particular, molecular response may be scored depending on the magnitude of the logarithmic decrease of the BCR-ABL1 transcript (MR^3^ = major molecular response \[MR\]; MR^4^/MR^4.5^/MR^5^ = deep or complete molecular response) and according to the time required to attain these responses \[[@B7-ijms-16-22811]\]. The above-listed aspects of TKI investigation represent critical issues in clinical trials, and they may explain both the discrepancies among different studies and the lack of complete, matched associations between the presence of certain polymorphisms and clinical efficacy.
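For illustration, the log-reduction grading above can be turned into a small classifier. The percent thresholds on the International Scale (MR^3^ = 0.1%, MR^4^ = 0.01%, MR^4.5^ = 0.0032%, MR^5^ = 0.001%) follow the widely used ELN convention; they are an assumption of this sketch, not values taken from this review:

```python
# Illustrative thresholds on the International Scale (IS), following the
# common ELN convention that MR^n corresponds to a BCR-ABL1 transcript
# level <= 10^(2 - n) percent of the standardized baseline.
MR_THRESHOLDS = [
    ("MR5", 0.001),
    ("MR4.5", 0.0032),
    ("MR4", 0.01),
    ("MR3", 0.1),  # MR3 is the "major molecular response" level
]

def molecular_response(bcr_abl1_is_percent: float) -> str:
    """Classify a BCR-ABL1 transcript level (% IS) into an MR grade."""
    if bcr_abl1_is_percent <= 0:
        raise ValueError("transcript level must be positive")
    for grade, threshold in MR_THRESHOLDS:
        if bcr_abl1_is_percent <= threshold:
            return grade
    return "no major molecular response"
```

A level of 0.1% IS thus maps to MR3, 0.01% to MR4, and anything above 0.1% to no major molecular response.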
2.1. Liver Enzymes {#sec2dot1-ijms-16-22811}
------------------

Initial studies confirmed the involvement of CYP3A isoforms in the metabolic clearance of imatinib \[[@B9-ijms-16-22811]\], whose main metabolite is *N*-desmethyl-imatinib (CPG74588). The latter accounts for approximately 20% of the plasma levels of the parent drug, and its cytotoxic effect as an active metabolite is estimated to be approximately 3--4 times lower than that of imatinib \[[@B10-ijms-16-22811]\]. Because imatinib is a substrate of liver enzymes, it is subject to drug-drug interactions. In particular, the intake of rifampin, a hepatic CYP3A inducer, causes a significant decrease in the systemic exposure of the drug (approximately 68%) \[[@B11-ijms-16-22811]\], whereas the administration of ketoconazole, an inhibitor of liver enzymes, led to a 40% increase in exposure \[[@B12-ijms-16-22811]\]. Further evidence strengthened the role of CYP isoforms as determinants of the imatinib response. The use of a probe drug such as quinidine allowed the stratification of patients according to their systemic CYP3A activity \[[@B13-ijms-16-22811]\], and complete MR was significantly associated with the highest *in vivo* enzymatic activity. This result suggested that biotransformation of imatinib by CYP3A leads to the active metabolite. Several clinical trials collected evidence about the association between CYP3A4 and CYP3A5 polymorphisms and response to imatinib \[[@B14-ijms-16-22811],[@B15-ijms-16-22811]\]. It is worth noting that in all these studies the evaluation of CYP genotypes was part of a wider investigation, which often included transmembrane transporters. The CYP3A4 isoform catalyses the biotransformation of the more recent TKIs dasatinib and ponatinib into active metabolites that have an equipotent or a 4-fold lower inhibitory activity in comparison to the parent drug, respectively \[[@B16-ijms-16-22811],[@B17-ijms-16-22811]\]. Conversely, nilotinib and bosutinib are inactivated by this isoform.
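The reported interaction effects can be read as simple multiplicative scaling of exposure. The baseline AUC below is a made-up number used only to show the arithmetic, not a reference value for imatinib:

```python
# Illustrative only: scale a baseline AUC by the interaction effects
# reported in the text (rifampin: ~68% decrease; ketoconazole: ~40% increase).
def adjusted_auc(auc_baseline: float, change_fraction: float) -> float:
    """Return the AUC after a fractional change (e.g. -0.68 or +0.40)."""
    return auc_baseline * (1.0 + change_fraction)

baseline = 40.0  # hypothetical AUC in ug*h/mL, chosen only for illustration
with_rifampin = adjusted_auc(baseline, -0.68)      # 68% lower exposure
with_ketoconazole = adjusted_auc(baseline, +0.40)  # 40% higher exposure
```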
Interestingly, no polymorphisms of the CYP3A4/5 genes have been investigated with respect to changes in the pharmacokinetics or pharmacodynamics of these TKIs; instead, the major efforts have been addressed to the evaluation of potential drug-drug interactions \[[@B18-ijms-16-22811],[@B19-ijms-16-22811],[@B20-ijms-16-22811]\].

2.2. Transmembrane Transporters {#sec2dot2-ijms-16-22811}
-------------------------------

In order to bypass the possible negative effect exerted by transmembrane transporters on TKI efficacy, researchers have recently designed additional drugs whose efficacy is not significantly affected by the ATP-binding cassette (ABC) transporters, as is the case for ponatinib \[[@B21-ijms-16-22811]\]. The importance of the ABC and solute carrier (SLC) transporters lies in their variable expression on the membrane of different cell types, their wide distribution within the organism and their involvement in the cellular influx or efflux of the drugs.

### 2.2.1. ABCB1 {#sec2dot2dot1-ijms-16-22811}

Among the most investigated transporters, a prominent role is played by ABCB1 ([Table 1](#ijms-16-22811-t001){ref-type="table"}). Since the first pharmacokinetic/pharmacogenetic studies, it was evident that this protein is involved in the extrusion of imatinib from Philadelphia-positive leukemic cells \[[@B22-ijms-16-22811],[@B23-ijms-16-22811]\]. In particular, ABCB1 overexpression has been associated with resistance to imatinib \[[@B22-ijms-16-22811],[@B24-ijms-16-22811]\], reduced intracellular concentrations of the drug \[[@B25-ijms-16-22811]\], and diminished inhibition of BCR-ABL1 \[[@B26-ijms-16-22811]\]. Furthermore, the distribution of this transporter on the membrane of epithelial cells in the gut mucosa and excretory organs \[[@B27-ijms-16-22811]\] is responsible for a lower tissue exposure to imatinib and is considered a predictive marker of drug response.
In particular, patients carrying minor alleles for the c.1236C\>T and c.2677G\>T/A single nucleotide polymorphisms (SNPs) derived a greater benefit from imatinib, whereas the 1236C-2677G-3435C haplotype was associated with less frequent MR \[[@B28-ijms-16-22811],[@B29-ijms-16-22811]\]. On the other hand, patients homozygous for the low-activity c.1236T allele had the highest plasma concentrations of imatinib. Therefore, all these observations show that the transporter's activity could act at two different levels: high ABCB1 activity causes reduced intestinal absorption (*i.e.*, diminished bioavailability) and increased excretion of imatinib by the kidney and liver, while at the cellular level it causes lower cytoplasmic accumulation of the drug, thus reducing the BCR-ABL1 inhibition capability.

ijms-16-22811-t001_Table 1

###### Clinical trials investigating the role of ABCB1 and ABCG2 in imatinib efficacy. In the case of ABCB1, all of the listed studies evaluated the c.1236C\>T, c.2677G\>T/A and c.3435C\>T polymorphisms. Other SNPs (single nucleotide polymorphisms) are indicated.
| Transporter | SNPs \* | Patients | Main Results | Reference |
|---|---|---|---|---|
| ABCB1 | c.-129T\>C | 189 | c.3435CT/TT was an adverse genotype for complete MR in Caucasians ^a^ | \[[@B15-ijms-16-22811]\] |
| | | 215 | c.1236CC and the CGC haplotype were associated with resistance, while c.2677TT/TA/AA were related with better CCyR | \[[@B30-ijms-16-22811]\] |
| | | 100 | TGT haplotype was associated with worse therapeutic effect from imatinib | \[[@B31-ijms-16-22811]\] |
| | | 90 | c.1236TT and c.2677TT/TA were associated with better major MR rate | \[[@B28-ijms-16-22811]\] |
| | | 90 | CGC haplotype associated with less frequent major MR | \[[@B28-ijms-16-22811]\] |
| | | 52 | c.1236TT or c.3435CT/TT were associated with higher resistance; patients with the c.2677AG/AT/AA genotype had better CCyR than those carrying c.2677TT/GT/GG | \[[@B32-ijms-16-22811]\] |
| | | 84 | c.3435TT associated with significantly longer times to major MR compared to CC/CT genotypes | \[[@B33-ijms-16-22811]\] |
| | | 28 | Polymorphic alleles were associated with a reduced *ex vivo* ABCB1 activity; the highest transporter activity was present in patients who did not achieve major MR | \[[@B29-ijms-16-22811]\] |
| ABCG2 | c.34G\>A | 229 | c.34GG genotype was associated with lowest rates of major MR and CCyR | \[[@B14-ijms-16-22811]\] |
| | c.34G\>A, c.421C\>A | 215 | c.421CC associated with resistance; AA haplotype, better response | \[[@B30-ijms-16-22811]\] |
| | c.421C\>A | 82 | c.421CC/CA associated with lower rate of major MR ^b^ | \[[@B34-ijms-16-22811]\] |

**\***, other than c.1236C\>T, c.2677G\>T/A, c.3435C\>T; ^a^, other investigated genes: CYP3A4, CYP3A5, OATP1A2; ^b^, further genes investigated: CYP3A4, CYP3A5, CYP2C9, CYP2C19, CYP2D6, ABCB1, SLC22A1 and SLC22A2. Abbreviations: MR, molecular response; CCyR, complete cytogenetic response.
However, several preclinical and clinical studies reported discordant results about the relationship between ABCB1 activity and the efficacy of imatinib. In the K562 cell line, the expression of ABCB1 variants was not associated with increased resistance to imatinib \[[@B35-ijms-16-22811]\], although the c.1236T-c.2677T-c.3435T haplotype was associated with the highest ABCB1 expression on cell membranes. Among clinical trials, Ni and coworkers \[[@B32-ijms-16-22811]\] found that resistance to imatinib was more frequent in c.1236TT and c.3435TT or CT patients; the same conclusion was supported by Ali and colleagues \[[@B31-ijms-16-22811]\]. Furthermore, Vine and colleagues showed that the time to major MR was significantly longer in patients harbouring the c.3435TT genotype than in subjects carrying the CC or CT genotypes \[[@B33-ijms-16-22811]\]. Moreover, although the c.1236C-c.2677G-c.3435C haplotype was significantly related to an increased risk of resistance, the c.2677T/A variant was associated with a lower MR rate in another recent study \[[@B30-ijms-16-22811]\]. In order to better clarify the effect of the ABCB1 SNPs on imatinib pharmacokinetics, patients' genotypes and haplotypes were also investigated by mathematical models, including population pharmacokinetic approaches \[[@B36-ijms-16-22811]\]. Results from two independent studies on 67 and 60 Caucasian subjects excluded a significant influence of the ABCB1 polymorphisms on the drug's pharmacokinetics \[[@B37-ijms-16-22811],[@B38-ijms-16-22811]\]. On the contrary, a third trial found a significant association among a combined ABCB1/SLC22A1 haplotype, imatinib clearance, and plasma concentrations \[[@B39-ijms-16-22811]\]. However, the latter study enrolled only 38 Asian patients, and imatinib clearance was calculated on the basis of trough plasma concentrations \[[@B39-ijms-16-22811]\].
Therefore, the discrepancies among these studies could depend on the different numbers of enrolled patients, their ethnicity (Caucasian *vs.* Asian subjects), and the methodologies employed. In another study, patients who were homozygous at the three loci for the polymorphic alleles (*i.e*., c.1236TT, c.2677TT and c.3435TT) had a higher imatinib clearance; interestingly, these patients required imatinib dose reductions for toxicity less frequently than the others \[[@B40-ijms-16-22811]\]. That "apparent paradox" was explained by different mechanisms (*i.e*., a more significant inhibition of ABCB1 activity by imatinib in c.3435CC patients, or the presence of other tagged SNPs), but the authors concluded that confirmatory trials were needed. In the case of the second-generation TKIs, a different affinity toward ABCB1 has been reported. *In vitro* experiments demonstrated that nilotinib is not an ABCB1 substrate \[[@B25-ijms-16-22811],[@B41-ijms-16-22811]\], and that this TKI could also inhibit the activity of the transporter \[[@B42-ijms-16-22811],[@B43-ijms-16-22811],[@B44-ijms-16-22811]\]. Moreover, imatinib, nilotinib and bosutinib are "comparatively weaker ABCB1 substrates" with respect to *N*-desmethyl-imatinib and dasatinib \[[@B35-ijms-16-22811]\]. However, results are still contradictory: two other *in vitro* studies demonstrated that nilotinib is a substrate of ABCB1 \[[@B26-ijms-16-22811]\] and that increased expression of the transporter is associated with a reduced sensitivity of the K562 leukemia cell line to this TKI \[[@B45-ijms-16-22811]\]. Finally, a recent population pharmacokinetic study demonstrated that the excretion of nilotinib is influenced by the c.2677G\>T/A SNP \[[@B46-ijms-16-22811]\], but further clinical trials are required to confirm these data.
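The triple-homozygous pattern linked above to higher imatinib clearance can be expressed as a small predicate. This is a toy illustration of genotype bookkeeping, not a clinical tool, and the dictionary layout is an assumption of the sketch:

```python
# Flag the ABCB1 diplotype pattern the text associates with higher imatinib
# clearance: homozygosity for the variant allele at all three loci
# (c.1236TT, c.2677TT, c.3435TT). Note that c.2677 also has an A allele;
# the pattern of interest here is specifically TT at that locus.
VARIANT_ALLELE = {"c.1236": "T", "c.2677": "T", "c.3435": "T"}

def triple_variant_homozygote(genotypes: dict) -> bool:
    """genotypes maps locus -> two-letter genotype, e.g. {"c.1236": "CT", ...}."""
    return all(
        genotypes[locus] == allele * 2 for locus, allele in VARIANT_ALLELE.items()
    )
```

For example, `{"c.1236": "TT", "c.2677": "TT", "c.3435": "TT"}` matches the pattern, while any heterozygous or reference genotype at one of the three loci does not.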
Dasatinib is also a substrate of ABCB1 \[[@B35-ijms-16-22811],[@B41-ijms-16-22811],[@B47-ijms-16-22811]\], as shown by the recovery of its efficacy when resistant cells were exposed to PSC833, a specific ABCB1 inhibitor \[[@B48-ijms-16-22811]\]. Analogously, a murine model knocked out for ABCB1 and/or ABCG2 confirmed that dasatinib is an ABCB1 substrate \[[@B49-ijms-16-22811]\]. However, only one *in vitro* study demonstrated that the use of ABCB1 inhibitors seems not to influence intracellular concentrations of dasatinib in CML-CD34(+) progenitors \[[@B47-ijms-16-22811]\]. These preclinical results strongly suggest that dasatinib is effectively a substrate of ABCB1, but the relative scarcity of clinical trials impedes further evaluation of these data. Concerning bosutinib, some authors reported that it is a weak substrate of ABCB1 \[[@B35-ijms-16-22811]\]. Further *in vitro* experiments on the K562 cell line overexpressing the drug transporters ABCB1, ABCG2, and SLC22A1 showed that only ABCB1 was responsible for active bosutinib transport, because resistance was overcome by the addition of the ABCB1 inhibitor verapamil \[[@B50-ijms-16-22811]\]. Finally, some authors reported that ponatinib is able to reverse ABCG2- and ABCB1-mediated multi-drug resistance \[[@B51-ijms-16-22811]\], although a more recent study seems not to confirm these results \[[@B21-ijms-16-22811]\]. Overall, the sometimes contradictory results can be explained by (a) the relatively lower number of patients treated with second-generation TKIs with respect to those receiving imatinib; (b) the lack of standardized methods and instruments for the measurement of plasma concentrations of TKIs; and (c) the adoption of variable modelling approaches.

### 2.2.2. ABCG2 {#sec2dot2dot2-ijms-16-22811}

The second transmembrane transporter that has caught the attention of researchers is ABCG2, formerly known as breast cancer resistance protein (BCRP).
Several *in vitro* studies demonstrated that imatinib is a substrate of this transporter \[[@B23-ijms-16-22811],[@B24-ijms-16-22811],[@B25-ijms-16-22811],[@B52-ijms-16-22811],[@B53-ijms-16-22811]\], and that imatinib is able to inhibit the transporter's activity \[[@B54-ijms-16-22811]\]. Therefore, preclinical studies suggested that the variable ABCG2 activity could influence the steady-state plasma concentrations of the drug and, ultimately, its efficacy \[[@B55-ijms-16-22811]\]. This hypothesis has been confirmed by several clinical trials: the high-activity c.421CC and c.34GG ABCG2 genotypes have been associated with lower plasma concentrations \[[@B56-ijms-16-22811]\], whereas the c.421C\>A SNP was predictive of significant changes in the apparent clearance of imatinib \[[@B37-ijms-16-22811]\]. Furthermore, c.421C\>A and c.34G\>A genotypes predicted a lower rate of complete cytogenetic response to imatinib \[[@B14-ijms-16-22811],[@B30-ijms-16-22811]\]. Interestingly, a study enrolling 154 CML patients found that the c.421C\>A SNP was an independent predictor of complete MR in multivariate analysis (OR = 0.3953, *p* = 0.0284, CC as reference) \[[@B57-ijms-16-22811]\]. Another trial demonstrated that patients carrying the c.421AA genotype achieved major MR in a greater percentage (*n* = 4, 100%) compared to individuals with other genotypes (*n* = 73, 12.3%) \[[@B34-ijms-16-22811]\], but the unbalanced number of patients between the two groups of this study should be taken into consideration. Finally, the cumulative incidence of major MR was the same in patients who received imatinib 600 or 400 mg/day, provided the latter carried a favourable G-G haplotype composed of two SNPs (rs12505410 and rs2725252) of the ABCG2 gene \[[@B58-ijms-16-22811]\]. Overall, these clinical trials show that ABCG2 SNPs are predictive markers of imatinib efficacy and pharmacokinetics.
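The reported odds ratio can be made concrete with a little arithmetic: given a response rate in the reference (CC) group, the OR implies the corresponding rate in c.421C\>A carriers. The 50% baseline below is invented purely for illustration and is not a figure from the cited study:

```python
# Convert an odds ratio plus a reference-group probability into the implied
# probability in the comparison group: odds_new = odds_ref * OR.
def apply_odds_ratio(p_reference: float, odds_ratio: float) -> float:
    odds_ref = p_reference / (1.0 - p_reference)
    odds_new = odds_ref * odds_ratio
    return odds_new / (1.0 + odds_new)

# If, hypothetically, 50% of CC patients achieved complete MR, an OR of
# 0.3953 would imply roughly a 28% rate in c.421C>A carriers.
p_carriers = apply_odds_ratio(0.50, 0.3953)
```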
Controversial results have been published regarding ABCG2 affinity and the second-generation TKIs. Nilotinib \[[@B24-ijms-16-22811],[@B41-ijms-16-22811],[@B53-ijms-16-22811]\], dasatinib \[[@B41-ijms-16-22811],[@B48-ijms-16-22811]\] and ponatinib \[[@B51-ijms-16-22811]\] are substrates of this transporter, even if nilotinib and ponatinib are also capable of inhibiting the transporter's activity \[[@B42-ijms-16-22811],[@B51-ijms-16-22811],[@B59-ijms-16-22811]\]. However, an *in vivo* study with knock-out mice suggested only a partial role for ABCG2 in the transmembrane transport of dasatinib \[[@B49-ijms-16-22811]\], while other *in vitro* experiments failed to confirm a role for ABCG2 in drug disposition \[[@B25-ijms-16-22811],[@B47-ijms-16-22811]\]. Finally, a recent work argues against a role for ABCG2 as a transporter of ponatinib \[[@B21-ijms-16-22811]\]. At present, preclinical investigations seem unable to offer robust and reliable data, and clinical studies investigating the role of ABCG2 in the efficacy of second-generation TKIs are necessary before firm conclusions can be drawn.

### 2.2.3. SLC22A1 {#sec2dot2dot3-ijms-16-22811}

SLC22A1, also known as hOCT1 (human organic cation transporter member 1), plays an important role in the cellular uptake of imatinib \[[@B25-ijms-16-22811],[@B60-ijms-16-22811],[@B61-ijms-16-22811]\]. As a consequence, the different cellular uptake and cytoplasmic retention of the drug are significantly associated with the variable BCR-ABL1 inhibiting activity, which in turn is correlated with cytogenetic and molecular responses \[[@B62-ijms-16-22811]\] ([Table 2](#ijms-16-22811-t002){ref-type="table"}). Furthermore, the SLC22A1 gene harbours several polymorphisms that are associated with altered transport activity \[[@B63-ijms-16-22811]\].
Thus, several studies evaluated the correlations between its polymorphisms and response to imatinib: the SLC22A1 SNP c.1022C\>T and a combination of polymorphisms were predictive of higher rates of major cytogenetic and molecular responses \[[@B15-ijms-16-22811]\]. Analogously, the c.480CC genotype was significantly associated with a higher percentage of event-free survival (EFS) at 24 months with respect to the CG or GG genotypes \[[@B38-ijms-16-22811]\].

ijms-16-22811-t002_Table 2

###### Studies evaluating the possible correlation between clinical endpoints and polymorphisms of SLC22A1, OCTN1, OATP1A2 and CYP3A5 genes, their mRNA levels or transporter activity in CML patients treated with imatinib.

| Genes | Polymorphisms, Gene Expression, Functional Test | Patients | Main Results | Reference |
|---|---|---|---|---|
| *SLC22A1* | c.1022C\>T | 189 | c.1022CT/TT were adverse genotypes for major cytogenetic response in all patients | \[[@B15-ijms-16-22811]\] |
| | c.1260-1262delGAT, c.1222A\>G ^a^ | 336 | Deletion was associated with time to treatment failure, but it was restored by the c.1222G allele | \[[@B64-ijms-16-22811]\] |
| | c.480G\>C | 229 | c.480GG associated with high rate of loss of response or treatment failure | \[[@B14-ijms-16-22811]\] |
| | c.1260-1262delGAT, c.1222A\>G | 153 | c.1222AA/AG genotypes associated with longer time to MR | \[[@B65-ijms-16-22811]\] |
| | Gene expression | 28 | Higher levels of mRNA were observed in patients who achieved major and complete MR | \[[@B29-ijms-16-22811]\] |
| | Gene expression | 70 | Highest pre-treatment mRNA levels were associated with better CCyR rates, PFS and OS ^b^ | \[[@B61-ijms-16-22811]\] |
| | Transporter activity | \- | High TA, better major MR at 60 months; low TA, lower OS, EFS and higher kinase domain mutation rate | \[[@B66-ijms-16-22811]\] |
| | Haplotype ^c^ | 189 | Associated with both complete and major MR | \[[@B15-ijms-16-22811]\] |
| *OCTN1* | c.1507C\>T | 189 | c.1507TT was an adverse genotype for major MR in all patients and in Caucasians | \[[@B15-ijms-16-22811]\] |
| *SLC22A1, OCTN1, OATP1A2* | various | 189 | Complete and major MR were associated with a combination of SNPs | \[[@B15-ijms-16-22811]\] |
| *CYP3A5* | g.12083G\>A | 229 | g.12083AA genotype was associated with lowest rates of major MR and CCyR | \[[@B14-ijms-16-22811]\] |

^a^, a further 21 polymorphisms were investigated, but they were not related to the clinical outcome; ^b^, pretreatment mRNA levels of ABCB1, ABCG2 and ABCC1 did not predict treatment outcome; ^c^, combined molecular endpoint composed of 4 SLC22A1 SNPs. Abbreviations: MR, molecular response; CCyR, complete cytogenetic response; OS, overall survival; PFS, progression-free survival; EFS, event-free survival; TA, transporter activity.

The c.1260-1262delGAT polymorphism (M420del) was associated with an increased risk of treatment failure with imatinib \[[@B64-ijms-16-22811]\], but that risk would be counterbalanced by the concomitant presence in *cis* of the c.1222A\>G SNP. This remains a debated issue, because other authors never found the M420del polymorphism in *cis* with the c.1222A\>G SNP \[[@B65-ijms-16-22811]\]. Similar results were obtained by Tzvetkov and coworkers, who found that the M420del polymorphism was responsible for a loss in substrate specificity "*that is only marginally affected by M408V*" (*i.e*., c.1222A\>G) \[[@B67-ijms-16-22811]\]. Beyond the discrepancies observed in these trials, two important points of discussion emerged: first, the c.1222A\>G polymorphism is responsible for an alternative splicing, and it is associated with a longer time to achieve an optimal response \[[@B65-ijms-16-22811]\].
Second, the differences in SLC22A1 mRNA levels could depend on the primers used for real-time PCR \[[@B64-ijms-16-22811]\], thus explaining the contrasting results obtained when the relationship between SLC22A1 gene expression levels and clinical outcome was assessed \[[@B61-ijms-16-22811],[@B65-ijms-16-22811],[@B68-ijms-16-22811],[@B69-ijms-16-22811],[@B70-ijms-16-22811]\]. As with ABCB1, population pharmacokinetic analyses were planned to evaluate the role of SLC22A1 in imatinib disposition. The rationale for this approach resides in the tissue expression of the transporter, because SLC22A1 is present in the gut mucosa and the liver, where it would participate in the absorption and excretion of imatinib, respectively \[[@B27-ijms-16-22811],[@B71-ijms-16-22811],[@B72-ijms-16-22811]\]. Some authors reported that this transporter does not influence the drug's pharmacokinetics in CML patients carrying low-activity variants \[[@B73-ijms-16-22811]\]. On the contrary, two recent independent trials confirmed the relationship among the SLC22A1 polymorphisms, plasma concentrations, and imatinib systemic clearance \[[@B38-ijms-16-22811],[@B39-ijms-16-22811]\]. In particular, the presence of the c.480CA/AA genotypes was associated with a 12% drop in drug clearance, whereas the mean trough plasma concentrations increased by 45% compared to c.480CC individuals \[[@B38-ijms-16-22811]\]. Furthermore, this genetic variation explained approximately 10% of the inter-individual variability in imatinib pharmacokinetics \[[@B38-ijms-16-22811]\]. Recently, two research teams published interesting findings about the possibility of adopting SLC22A1 as a potential marker of the imatinib effect. In the first study, a low transport activity in peripheral mononuclear cells was associated with poor response rates and the highest probability of transformation to accelerated phase or blast crisis \[[@B66-ijms-16-22811]\].
Therefore, the study proposed an increase in the daily dose of imatinib to overcome the low activity of SLC22A1 \[[@B74-ijms-16-22811]\]. Intriguingly, the presence of some polymorphisms, namely c.181C\>T, c.1260_1262delGAT, and c.1222A\>G, was related neither to SLC22A1 activity nor to the achievement of major MR at 24 months \[[@B75-ijms-16-22811]\]. Another *in vitro* study, from Nies and colleagues, clearly demonstrated that the uptake and the cytotoxic effect of imatinib in CML cell lines and patients' leukemic cells were independent of SLC22A1 expression \[[@B76-ijms-16-22811]\]. Interestingly, an SLC22A1-knockout murine model did not show any alteration of the plasma kinetics or liver concentrations of imatinib compared to controls \[[@B76-ijms-16-22811]\]. Overall, the comparison of these findings with previously obtained results \[[@B62-ijms-16-22811],[@B74-ijms-16-22811]\] suggested that transporters other than SLC22A1 could be involved in imatinib uptake, as already hypothesised by Hu and colleagues \[[@B73-ijms-16-22811]\]. The SLC22A1 transporter recognizes neither nilotinib \[[@B25-ijms-16-22811],[@B62-ijms-16-22811],[@B77-ijms-16-22811]\] nor dasatinib \[[@B78-ijms-16-22811]\] as a substrate, as has also been demonstrated in peripheral mononuclear cells obtained from chronic-phase CML patients \[[@B48-ijms-16-22811]\]. However, mathematical modelling through a factor analysis of mixed data showed that the hOCT1 c.480C\>G genotype could be a possible covariate of the pharmacokinetic model of nilotinib, and c.480CC patients had a significantly better three-year EFS with respect to those individuals carrying at least one c.480G polymorphic allele \[[@B46-ijms-16-22811]\]. As pointed out by the authors, these results shed light on the role of the SLC22A1 transporter for nilotinib, but further studies are still required before firm conclusions can be drawn.
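The relationship noted earlier between clearance and trough concentration can be illustrated with a one-compartment, first-order-elimination sketch. Every parameter value below (dose, bioavailability, volume of distribution, clearance, dosing interval) is hypothetical, chosen only to show the direction of the effect; the model is not meant to reproduce the reported 45% figure, and indeed troughs rise more than proportionally when clearance falls because of exponential accumulation:

```python
import math

# Steady-state trough for repeated oral dosing in a one-compartment model:
# C_trough = (F * Dose / Vd) * exp(-k * tau) / (1 - exp(-k * tau)), k = CL / Vd.
def steady_state_trough(dose_mg, f, vd_l, cl_l_per_h, tau_h):
    k = cl_l_per_h / vd_l  # first-order elimination rate constant
    return (f * dose_mg / vd_l) * math.exp(-k * tau_h) / (1 - math.exp(-k * tau_h))

# Hypothetical parameters; the second call applies a 12% drop in clearance.
base = steady_state_trough(400, 0.98, 300, 14.0, 24)
reduced_cl = steady_state_trough(400, 0.98, 300, 14.0 * 0.88, 24)
# reduced_cl > base: a lower clearance yields a higher trough concentration
```

With these made-up parameters the 12% clearance drop raises the trough by roughly 20-25%, illustrating the direction (though not the magnitude) of the clinical observation.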
In summary, preclinical and clinical results are strengthening the crucial role of SLC22A1 in imatinib kinetics at both the systemic and cellular level, although some studies have reported contrasting results that should be taken into consideration. Beyond this issue, the question of which SLC22A1 polymorphism is the best predictive biomarker of imatinib efficacy, or whether a functional assay could perform better than pharmacogenetic analyses, is still unanswered. In the case of second-generation TKIs, the data published in the literature are scarce for both preclinical experiments and clinical trials, and a definitive relationship between these drugs and SLC22A1 can be neither demonstrated nor excluded. It is likely that an increasing number of studies evaluating nilotinib, dasatinib, and the other most recent TKIs could solve this issue.

### 2.2.4. Other Transmembrane Transporters {#sec2dot2dot4-ijms-16-22811}

The expression of other transmembrane transporters in different tissues may account for the unexplained inter-individual variability in both pharmacokinetics and pharmacodynamics. The uptake of several TKIs, other than those directed against BCR-ABL1, was clearly increased by the expression of the SLCO1B members in the human embryonic kidney cell line HEK293 \[[@B79-ijms-16-22811]\]. In particular, imatinib, nilotinib, and dasatinib accumulated within the cells when SLCO1B3 was expressed, but for nilotinib a significant increase was observed also in the presence of SLCO1B1. Recent results from a clinical trial have demonstrated that the SLCO1B3 polymorphism c.334T\>G significantly influenced the absorption of nilotinib in CML patients \[[@B46-ijms-16-22811]\], because the bioavailability of the drug was increased by 11% in patients with the c.334TT genotype. Moreover, those individuals carrying the c.521C allele of SLCO1B3 had a longer time to achieve both the CCyR and the major MR.
These results suggest that the activity of the transporter is highly associated with the absorption process, but confirmatory multicentre trials are still needed. The involvement of other transmembrane transporters in TKI disposition is still under investigation. In particular, imatinib was demonstrated to be a substrate of ABCC4 and SLCO1A2 in one clinical trial \[[@B73-ijms-16-22811]\], whereas another study confirmed the influence of SLC22A4 but not of SLCO1A2 on the MR \[[@B15-ijms-16-22811]\]. Therefore, the relationships between imatinib and the three transmembrane transporters SLC22A1, ABCB1, and ABCG2 seem the most promising ones to be translated into clinical practice, but prospective confirmatory trials are still needed. At present, the data for second-generation TKIs are discordant and insufficient to identify the best candidate among the different transporters (with their polymorphisms), with the exception of ABCB1 for dasatinib. Improved knowledge about the relationship between BCR-ABL1 inhibitors and transporters is urgently needed, because nilotinib and dasatinib are registered agents for first-line therapy \[[@B80-ijms-16-22811]\], and they represent an effective second- or third-line treatment for CML patients who fail imatinib \[[@B81-ijms-16-22811]\].

3. Discussion {#sec3-ijms-16-22811}
=============

The previous paragraphs describe the role of pharmacogenetics in predicting treatment efficacy (and perhaps tolerability) in CML patients: among the possible proteins involved in the kinetics of TKIs at the cellular and systemic level, transmembrane transporters play a pivotal role because they influence both intracellular and plasma concentrations of the drugs. In turn, the variable concentrations of imatinib, nilotinib, and other TKIs may result in significant variations in efficacy and tolerability.
Among the different transporters, ABCB1 participates in drug extrusion and excretion at the cellular level, but it could also influence intestinal absorption because of its characteristic tissue distribution \[[@B27-ijms-16-22811]\]. On the other hand, ABCB1 is a well-known protein: it recognizes a number of pharmacological substrates and its gene harbours several polymorphisms \[[@B82-ijms-16-22811],[@B83-ijms-16-22811]\], making the transporter an optimal biomarker for the prediction of imatinib efficacy \[[@B84-ijms-16-22811]\]. Interestingly, the activity of the transmembrane transporters could be modulated by specific inhibitors to overcome treatment failure \[[@B85-ijms-16-22811]\], but in one report ABCB1 inhibition did not enhance the efficacy of imatinib \[[@B86-ijms-16-22811]\]. Regarding nilotinib, preclinical data are sometimes discordant, and there is only one clinical study showing the involvement of ABCB1 in nilotinib disposition \[[@B46-ijms-16-22811]\]. Among the other TKIs, dasatinib displays sufficient substrate affinity for ABCB1 that the transporter (and its polymorphisms) may represent a pharmacogenetic biomarker \[[@B35-ijms-16-22811]\]. The efflux of imatinib from the cells also depends on the presence/activity of ABCG2 \[[@B23-ijms-16-22811],[@B24-ijms-16-22811]\], whose polymorphisms have been associated with the clinical efficacy of the drug \[[@B25-ijms-16-22811],[@B57-ijms-16-22811]\]. Whether the transporter is also involved in the kinetics of the second-generation TKIs has been neither clearly demonstrated nor excluded, because of the controversial results obtained so far \[[@B25-ijms-16-22811],[@B51-ijms-16-22811]\]. The introduction of TKIs in clinical settings directed researchers' attention toward those transporters that ensure drug uptake within leukemic cells. Several studies confirm the role of SLC22A1 in the cellular uptake of imatinib and, as a consequence, in the inhibition of BCR-ABL1 activity.
In fact, clinical endpoints, such as cytogenetic or molecular responses, are predicted by polymorphisms of SLC22A1, but tolerability and changes in drug disposition may also be anticipated by the evaluation of allelic variants of the gene. These observations are based on the tissue distribution of the transporter, which makes it capable of influencing drug excretion and/or absorption \[[@B27-ijms-16-22811]\]. However, one study demonstrated that imatinib uptake does not depend on SLC22A1 in different *in vitro* and *in vivo* models \[[@B76-ijms-16-22811]\], suggesting that other transporters could be involved, even though "the mechanisms responsible for imatinib uptake into leukemic cells are still elusive" \[[@B76-ijms-16-22811]\]. TKIs other than imatinib, namely dasatinib and nilotinib, can inhibit members of the SLC22A family \[[@B87-ijms-16-22811]\], and preliminary data pointed to SLC22A1 as a nilotinib transporter \[[@B46-ijms-16-22811]\], but the evidence is still limited. Overall, the landscape of TKI pharmacogenetics is characterized by several points of interest, such as the relationships among gene polymorphisms, altered pharmacokinetics, and drug effects. A number of factors suggest that pharmacogenetic results could be reliable for the prediction of treatment efficacy, but the discrepant data limit confidence in, and application of, the biomarker ([Table 3](#ijms-16-22811-t003){ref-type="table"}). The reasons for these unexpected results may depend on the biologic system used (*i.e.*, different cell lines) \[[@B73-ijms-16-22811]\], the range of drug concentrations evaluated \[[@B25-ijms-16-22811],[@B41-ijms-16-22811]\], or the interplay among different genes and polymorphisms, the majority of which are not investigated at the same time.
Population pharmacokinetic analyses and pharmacokinetic/pharmacodynamic mixed models could overcome some of these issues, at least in part, because each covariate (*i.e*., each polymorphism) is introduced within the model, which in turn quantifies the kinetic and/or dynamic influence of the factor on the whole system (*i.e*., cells or individuals).

ijms-16-22811-t003_Table 3

###### Issues, causes and possible solutions related to contrasting results published in the literature regarding the search for TKI biomarkers.

| Study Phase | Issues | Causes | Possible Solutions |
|---|---|---|---|
| Preclinical studies | Discrepancies in substrate affinity/role of transporters in TKI kinetics at both the cellular and systemic level | Differences in cell lines, in *in vivo* models, and in the range of drug concentrations used | Adoption of homogeneous *in vitro* and *in vivo* models; same experimental conditions |
| Clinical trials | Contradictory results reported for the evaluation of the relationship between pharmacogenetic results and treatment outcome | Gene-candidate strategy with a limited number of genes/polymorphisms; definition of clinical endpoints has changed over time; patients' compliance and adherence have not been fully investigated | Increase the number of genes/polymorphisms investigated at one time; strict adherence to clinical guidelines for endpoint definition; check for patients' compliance |

Another possible confounding factor could be the patient's adherence to the prescribed treatment.
The problem of patients' compliance has been clearly examined in the recent past \[[@B69-ijms-16-22811]\], and the findings demonstrated that clinical efficacy depends in large part on the regular intake of the drugs. Therefore, the assessment of adherence becomes a prerequisite for interpreting clinical results, and therapeutic drug monitoring of plasma TKI concentrations could serve both as a measure of this endpoint \[[@B88-ijms-16-22811]\] and as a source of data for pharmacokinetic analyses \[[@B38-ijms-16-22811]\]. Two questions remain to be answered: when pharmacogenetic biomarkers should be used, and what is needed to transfer those markers into clinical settings. Biomarkers may have a rational place in the management of CML patients before the start of treatment. Pharmacogenomic tests may help in the choice of the TKI according to the patient's genotype of transmembrane transporters and the substrate affinity of the drugs, but the dose could also be a matter of choice. Once treatment has begun, deeper genetic analyses could be useful to investigate possible unexplained relationships between drugs and toxicities or therapy failures. This approach will certainly benefit from next-generation sequencing and whole-genome association studies. The second answer is a critical issue for many biomarkers in oncology and beyond. Confirmatory prospective clinical trials, which should be aimed at the final validation of the biomarkers, are lacking, and the uncertainty due to discrepancies among studies remains an insurmountable hurdle to setting up companion diagnostics. A part of the problem is related to the choice of clinical and molecular endpoints (*i.e*., complete cytogenetic response, major or complete MR), but their application according to international guidelines will represent an excellent solution, especially for those biomarkers that should predict the earliest drug effects.

4. Conclusions {#sec4-ijms-16-22811}
==============

In conclusion, the investigation of TKI pharmacogenetics in CML patients has led to important results, with the aim of deciphering the relationships between genes and drugs. A great part of the inter-individual variability could be due to the transmembrane transporters, whose activity may be evaluated by both genetic analyses and functional tests \[[@B75-ijms-16-22811]\]. However, a part of that variability still remains unexplained, even if other approaches and areas of research (*i.e*., epigenetics) could lead to interesting results in the near future \[[@B89-ijms-16-22811]\]. It is likely that a clear-cut definition of endpoints will lead to more generalizable results, which in turn will help in the setting up of prospective trials.

Antonello Di Paolo conceptually designed the general organization of the review, while Marialuisa Polillo and Sara Galimberti wrote the manuscript. All of the authors contributed to the revision of the harvested literature and of the manuscript.

Sara Galimberti, Claudia Baratè, Mario Petrini, and Antonello Di Paolo participated on Novartis advisory boards, but this activity is not relevant to the present article. The other authors declare no conflict of interest.

[^1]: These authors contributed equally to this work.
Help Topics Syllabus In this article, scientometrics: the study of philosophy and theses defended in china. In this sense not a phd thesis writing. In fulfilment of scientometrics, An Empirical-Statistical Case Study. . A Regional Longitudinal Study of the Intra- and Intergenerational Formation of a In: Scientometrics, 1997 (40), No. 21 May 2015 Scholarly Communication, Scientometrics and Altmetrics. Valeria Aman. Local 522. Jelena Ivanišević. Social Networks for Social Changes: Case Study of . Only one thesis could be submitted by a su- pervising professor Scientometric analysis phd thesis - Professional Essay Writing And One study, which does attempt to address the endogeneity of academic rank is .. their thesis as well as papers coming out of their thesis. Scientometrics, 88,.Using a highly optimized author co-citation analysis methodology to study the and information science This paper presents a scientometric study of articles from . Potsdam -fhpotsdam/files/76/thesis.pdf 51 2007 Read Final Department 47-.pmd text version. DEPARTMENTS. 48 FACULTY OF ARTS ARABIC. Major Activities Arabic Language, being official language of 18 … Den eigenen reihen zu halten, scientometrics analysis of innovation. On a survey of the Mining and phd thesis, delft university of scientometric analysis; comparative approach. Statistical assessment. The study is the declining scientific. Scientometrics, 68(3), 343-360. Also in: Proceedings of ISSI 2005 - 10th International Conference of the International Society for Scientometrics and Informetrics, study analysed the number of references included in the theses and found an average of 242.79 references to a thesis. Out of . 719 thesis articles submitted by the scholars of the scientometric analysis on the doctoral dissertations in. We study the social structures that shape the scientific knowledge domains. This thesis would not have been possible without him and without freedom .. 
analysis from digital libraries has been studied in scientometrics and bibliometrics,. Scientometrics, 89(1), 349-364. . Paper presented at the 14th International Society of Scientometrics and Unpublished Dissertation PhD Thesis, Koblenz. 100 results A cross-cultural comparative study of German and Singaporean and Cross-Border Collaboration: A Scientometric Study. . Diploma Thesis Influential Factors for E-Government Success in the Middle East: Case Study The Attitude Construct in IT Adoption Research - A Scientometric Analysis A study of cited references in WoS-indexed journals from 1991-2013. in: Salah, A. A. (Ed.): Proceedings of the 15th International Society of Scientometrics and Islamic history is the major field of study of the Islamic History, Art Culture and . It should be mentioned at this point that such scientometric descriptions raise .. the revolutionary thesis that something could be philosophically right even thesis statement on freud · ruby moon by matt cameron essays · thesis proposal computer · white fang scientometric study thesis · why i need 30 Sep 2014 density-equalizing and scientometric study. To evaluate the .. This work is part of a thesis project (Niklas Pleger). Author Contributions. Publikationsanalysen zu den Themen ‚Depression und Suizidalität A WHO study projected an increase of global deaths by a further 17% in the period related diseases and expertise in the chosen special field of the thesis. Teaching thus shall cover statistics, scientometrics, and basics in laboratory methods 1. Nov. 
2012 Schreiber, B., Eckhardt, A., 2010, Between Cost Effiencey and Limited Innovation - A Scientometric Study of Business Process Standardization, 28 Nov 2012 His thesis, which he obtained in 2012, is on usage-driven He is member of the editorial board of the journals Scientometrics and Information 4, form, together with this essay, a publication-based doctoral thesis in Business and Economics Internet presence, (B) to study behavioural patterns on scientists' Internet profiles, and (C) to Scientometrics, 84(2), 307–315. Pearson, E. Thesis A.Seehaus final - Fachbereich Mathematik a Fellow at the Centre for Innovation Studies (THESIS, University of Calgary, Alberta, She is particularly interested in the study of the structure and evolution of In Scientometrics in 1978, with collaborator Richard Rosen, he published a Social Influence in Technology Adoption Research – A Scientometric Study over two Decades. In: Proceedings . Category: Dissertation thesis. Reference No.This PhD thesis intends to contribute to this clarification by defining the ric and scientometric studies, which lead to the identification of important conferences Scientometrics. 82(3). 613-626. Area: Scientometrics COLLNET journal of Scientometrics and Information Management. 2(2). 57-70. .. Research Thesis esss - European Summer School for Scientometrics offers training covering and a Fellow at the Centre for Innovation Studies (THESIS, University of Calgary, of teaching and study programmes, alumni surveys, and university rankings Torben Schubert | Fraunhofer ISI As a member of the Geospatial unit of the Metadata Management Section of ITS, under the supervision of the Geospatial Metadata Librarian, the incumbent provides This paper is based on a case study of local nanotech networks in the city of Munich, Germany. In this sense, the Munich case study confirms Forman's thesis that there is a primacy of technology in . Scientometrics 70(3): 565-601. 
Multidisciplinary Research Groups: A Case Study of the Leibniz Association, 13th International Conference of the International Society for Scientometrics 26 Nov 2015 psychology thesis download, raisin in the sun essay, sample thesis title for nursing psychology thesis download, scientometric study doctoral 12. Aug. 2011 einer vom Team Bibliometrie betreuten Master-Thesis erstmals eine groß der European Summer School for Scientometrics (esss) betraut.
# -*- encoding: utf-8 -*-
# Pilas engine - A video game framework.
#
# Copyright 2010 - Hugo Ruscitti
# License: LGPLv3 (see http://www.gnu.org/licenses/lgpl.html)
#
# Website - http://www.pilas-engine.com.ar

import pilas
from pilas.actores import Actor
from PIL import Image  # was "import Image" (pre-Pillow PIL); this form works with both


class Fondo(Actor):

    def __init__(self, imagen):
        self._eliminar_el_fondo_de_pantalla_actual()
        Actor.__init__(self, imagen)
        self.z = 1000

    def _eliminar_el_fondo_de_pantalla_actual(self):
        # Removes any background currently attached to the scene.
        fondos = [x for x in pilas.escena_actual().actores if x.es_fondo()]
        a_eliminar = []

        for f in fondos:
            a_eliminar.append(f)

        for fondo in a_eliminar:
            fondo.eliminar()

    def es_fondo(self):
        return True


class Volley(Fondo):
    "Shows a scene with a landscape background image."

    def __init__(self):
        Fondo.__init__(self, "fondos/volley.png")


class Nubes(Fondo):
    "Shows a light-blue background with clouds."

    def __init__(self):
        Fondo.__init__(self, "fondos/nubes.png")


class Pasto(Fondo):
    "Shows a scene with a landscape background image."

    def __init__(self):
        Fondo.__init__(self, "fondos/pasto.png")


class Selva(Fondo):
    "Shows a scene with a landscape background image."

    def __init__(self):
        Fondo.__init__(self, "fondos/selva.png")


class Tarde(Fondo):
    "Represents a nearly orange background scene."

    def __init__(self):
        Fondo.__init__(self, "fondos/tarde.png")
        self.y = 40


class Espacio(Fondo):
    "Outer space with stars."

    def __init__(self):
        Fondo.__init__(self, "fondos/espacio.png")


class Noche(Fondo):
    "Shows a scene with a landscape background image."

    def __init__(self):
        Fondo.__init__(self, "fondos/noche.png")


class Color(Fondo):
    "Paints the whole background a uniform color."

    def __init__(self, color):
        Fondo.__init__(self, "invisible.png")
        self.color = color
        self.lienzo = pilas.imagenes.cargar_lienzo()

    def dibujar(self, motor):
        if self.color:
            self.lienzo.pintar(motor, self.color)


class FondoPersonalizado(Fondo):
    "Allows using an image as the scene background and inspecting the opacity of its pixels."

    def __init__(self, imagen):
        Fondo.__init__(self, imagen)
        self.imagenFndo = Image.open(imagen)
        # pilas.mundo.get_gestor().escena_actual().set_fondo(self)
        pilas.mundo.gestor_escenas.escena_actual().set_fondo(self)

    def informacion_de_un_pixel(self, x, y):
        return self.imagenFndo.getpixel((x, y))

    def dimension_fondo(self):
        return self.imagenFndo.size


class Desplazamiento(Fondo):
    """Represents a background made of several layers (or actors).

    This kind of background helps create a depth effect, a
    three-dimensional perspective.
    """

    def __init__(self, ciclico=True):
        "Initializes the object, with the option of simulating an infinite background."
        Fondo.__init__(self, "invisible.png")
        self.posicion = 0
        self.posicion_anterior = 0
        self.capas = []
        self.velocidades = {}
        self.escena.mueve_camara.conectar(self.cuando_mueve_camara)
        self.ciclico = ciclico

        if ciclico:
            self.capas_auxiliares = []

    def agregar(self, capa, velocidad=1):
        x, _, _, y = pilas.utils.obtener_bordes()
        capa.fijo = True
        capa.izquierda = x
        self.capas.append(capa)
        self.velocidades[capa] = velocidad

        if self.ciclico:
            # Duplicates the layer so the background can wrap around.
            copia = capa.duplicar()
            copia.y = capa.y
            copia.z = capa.z
            copia.fijo = True
            copia.imagen = capa.imagen
            self.capas_auxiliares.append(copia)
            copia.izquierda = capa.derecha
            self.velocidades[copia] = velocidad

    def actualizar(self):
        if self.posicion != self.posicion_anterior:
            dx = self.posicion - self.posicion_anterior
            self.mover_capas(dx)
            self.posicion_anterior = self.posicion

    def cuando_mueve_camara(self, evento):
        dx = evento.dx

        # Keeps the layers from shifting naturally
        # like every other actor does.
        #for x in self.capas:
        #    x.x += dx

        # Applies a movement that honors each layer's speed.
        self.mover_capas(dx)

    def mover_capas(self, dx):
        for capa in self.capas:
            capa.x -= dx * self.velocidades[capa]

        if self.ciclico:
            for capa in self.capas_auxiliares:
                capa.x -= dx * self.velocidades[capa]

        # Repositions a layer once it has fully left the window.
        ancho = pilas.mundo.motor.ventana.width()

        if self.ciclico:
            for capa in self.capas:
                if capa.derecha < -ancho / 2:
                    capa.izquierda = \
                        self.capas_auxiliares[self.capas.index(capa)].derecha

            for capa in self.capas_auxiliares:
                if capa.derecha < -ancho / 2:
                    capa.izquierda = \
                        self.capas[self.capas_auxiliares.index(capa)].derecha


class Plano(Fondo):

    def __init__(self):
        Fondo.__init__(self, "plano.png")

    def dimension_fondo(self):
        # The size of the background image does not matter here,
        # because it has the same color everywhere.
        return (0, 0)

    def dibujar(self, painter):
        painter.save()
        x = pilas.mundo.motor.camara_x
        y = -pilas.mundo.motor.camara_y
        ancho, alto = pilas.mundo.obtener_area()
        painter.drawTiledPixmap(0, 0, ancho, alto, self.imagen._imagen,
                                x % 30, y % 30)
        painter.restore()

    def esta_fuera_de_la_pantalla(self):
        return False


class Capa():

    def __init__(self, imagen, x, y, velocidad):
        self.imagen = pilas.imagenes.cargar(imagen)
        self.x = x
        self.y = y
        self.velocidad = velocidad

    def dibujar_tiled_horizontal(self, painter, ancho, alto):
        dx = self.x % self.imagen.ancho()
        dy = 0
        painter.drawTiledPixmap(0, self.y, ancho, self.imagen.alto(),
                                self.imagen._imagen,
                                abs(dx) % self.imagen.ancho(),
                                dy % self.imagen.alto())

    def mover_horizontal(self, dx):
        self.x += dx * self.velocidad


class DesplazamientoHorizontal(Fondo):
    """Represents a background with horizontal, repeating scrolling.

    This kind of background is ideal for animations and games where
    the background can repeat over and over, for example a horizontal
    racing game or a space shooter.

    Initially the background has no appearance, but layers can be
    added, each with its own speed and position.

    For example, to simulate a background with 3 layers, a distant one
    with stars and then two closer ones with trees and bushes, we can
    write:

        >>> fondo = pilas.fondos.DesplazamientoHorizontal()
        >>> fondo.agregar("estrellas.png", 0, 0, 0)
        >>> fondo.agregar("arboles_lejanos.png", 0, 0, 1)
        >>> fondo.agregar("arbustos_cercanos.png", 0, 0, 2)

    The first argument of the agregar method is the image that has to
    be repeated horizontally. Then come the 'x' and 'y' positions.
    The last numeric value is the movement speed of that layer.

    A high speed value means that the layer will move faster than the
    others when the camera position changes. For example, a layer
    with speed 2 will move 2 pixels to the left every time the camera
    looks 2 pixels to the right. A layer with speed 0 will remain
    unaffected by camera movement.
    """

    def __init__(self):
        self.capas = []
        pilas.actores.Actor.__init__(self, "invisible.png")
        self.escena.mueve_camara.conectar(self.cuando_mueve_camara)
        self.z = 1000

    def dibujar(self, painter):
        painter.save()
        ancho, alto = pilas.mundo.obtener_area()

        for capa in self.capas:
            capa.dibujar_tiled_horizontal(painter, ancho, alto)

        painter.restore()

    def agregar(self, imagen, x=0, y=0, velocidad=0):
        nueva_capa = Capa(imagen, x, y, velocidad)
        self.capas.append(nueva_capa)

    def cuando_mueve_camara(self, evento):
        for capa in self.capas:
            capa.mover_horizontal(evento.dx)

    def esta_fuera_de_la_pantalla(self):
        return False
Speech intelligibility after glossectomy and speech rehabilitation. Oral tumor resections cause articulation deficiencies, depending on the site and extent of resection, the type of reconstruction, and tongue stump mobility. To evaluate the speech intelligibility of patients undergoing total, subtotal, or partial glossectomy, before and after speech therapy. Twenty-seven patients (24 men and 3 women), aged 34 to 77 years (mean age, 56.5 years), underwent glossectomy. Tumor stages were T1 in 3 patients, T2 in 4, T3 in 8, T4 in 11, and TX in 1; node stages, N0 in 15 patients, N1 in 5, N2a-c in 6, and N3 in 1. No patient had metastases (M0). Patients were divided into 3 groups by extent of tongue resection, i.e., total (group 1; n = 6), subtotal (group 2; n = 9), and partial (group 3; n = 12). Different phonological tasks were recorded and analyzed by 3 experienced judges, including the 7 sustained oral vowels, a vowel in a syllable, and the sequence vowel-consonant-vowel (VCV). The intelligibility of spontaneous speech (sequence story) was scored from 1 to 4 by consensus. All patients underwent a therapeutic program, lasting 3 to 6 months, to activate articulatory adaptations and compensations and to maximize the use of the remaining structures. The tasks were recorded again after speech therapy. To compare mean changes, analyses of variance and Wilcoxon tests were used. Patients of groups 1 and 2 significantly improved their speech intelligibility (P<.05). Group 1 improved on vowels, VCV, and spontaneous speech; group 2, on the syllable, VCV, and spontaneous speech. Group 3 demonstrated better intelligibility in the pretherapy phase, but the improvement after therapy was not significant. Speech therapy was effective in improving the speech intelligibility of patients undergoing glossectomy, even after major resection. A different pretherapy ability between groups was seen, with improvement of speech intelligibility in groups 1 and 2.
The improvement of speech intelligibility in group 3 was not statistically significant, possibly because of the small and heterogeneous sample.
Q: Live updating of Razor (cshtml) pages when site is live

On a live IIS web server, is it possible to update the /View/xxx.cshtml Razor files and have those changes reflected without rebuilding and redeploying the site?

A: Sure - just open the file and update it, or FTP a new version to the server. The views are not compiled into the DLL (unless you are using one of those Razor view compiler add-ins). As a result, they can be modified on the fly like any other content resource (CSS, JS, images, etc.).
Tohvri, Viljandi County

Tohvri is a village in Viljandi Parish, Viljandi County, Estonia. It has a population of 97 (as of 4 January 2010).

References

Category:Villages in Viljandi County
NEC, Toshiba may outsource Direct RDRAM packaging TOKYO — NEC Corp. and Toshiba Corp., two of the industry's first chip makers to ramp production of Direct Rambus DRAMs, each indicated this week that it may begin to outsource packaging operations for the chip. Interviewed here this week by EBN, executives at both companies said it would be more cost effective to outsource the chip's micro-BGA package assembly than to make the investments required to assemble the devices in-house. Neither company would identify which contractors are being considered for the task. Traditionally, DRAM vendors handle back-end test and assembly internally, due to the cost structure associated with the commodity memory's extremely high production volumes. The added complexity and cost of the micro-BGA packages, however, has caused NEC and Toshiba to reconsider their back-end operational structure. Toshihide Fujii, general manager of semiconductor strategic planning at Toshiba, said the company already is using a Japanese contractor to assemble the Direct Rambus micro-BGA packages built for Sony Computer Entertainment's Playstation 2 game console. He added that Toshiba is outsourcing most of its DRAM module assembly as part of a previously announced supply-chain management contract with Kingston Technologies Co. (Fountain Valley, Calif.). Keiichi Shimakura, associate senior vice president of NEC, said the company is looking into outsourcing micro-BGA package assembly, but will continue to do its own testing of all Direct RDRAM devices. Even though Direct Rambus requires expensive new testers costing several million dollars each, "we want to retain control over this critical part of the back-end process," he said. Shimakura added that NEC currently is considering a large investment in new testers for Direct Rambus, depending on how fast the new memory chips ramp up in the market. 
NEC declined to discuss its Direct RDRAM production levels, although Shimakura said the company has the capacity to make 1 million units a month. Fujii said Toshiba is producing 2 million Direct Rambus chips a month, mostly for the Sony console. He said Toshiba is expanding production to start selling Direct RDRAM into the PC and workstation markets, and has set a production target of up to 6 million units a month by the end of the year.

Jack Robertson
Chief Commissioner of Income Tax

The Chief Commissioner of Income Tax, or Director General of Income Tax, is a senior rank in the Income Tax Department in India. Chief Commissioners are in charge of the operations of the department within a region, which usually overlaps with the territory of a state. Depending on the region, their number varies from 16 (in Maharashtra) to 3 (in Karnataka). They are chosen from the Indian Revenue Service and usually serve the government for a period of 30 years. After cadre restructuring, a new designation was created, the Principal Chief Commissioner of Income Tax; the senior-most Chief Commissioners of Income Tax are promoted into this grade and have additional responsibilities as far as personnel and budgetary targets are concerned. Their equivalent rank at the Union Secretariat is that of a Special Secretary (a secretary who is not the administrative head of a department but discharges the functions of a Secretary for all other matters; a full Secretary designated as Secretary, GOI, is the administrative head) to the Government of India in the apex scale.

Other Chief Commissioners

There are other Chief Commissioners who are not cadre controlling and are placed above the rank of Union Additional Secretary in the HAG plus scale. The junior-most Chief Commissioners are now of the rank of Union Additional Secretary. Thus the Chief Commissioners of Income Tax draw three different pay scales based on their seniority.

Functions

Chief Commissioners are allotted budgetary targets for collection by the Central Board of Direct Taxes; the targets are divided among the Commissioners of Income Tax and are constantly monitored.

References

See also

Indian Revenue Service

Category:Income Tax Department of India
Category:Tax officials
Category:Tax occupations
1. Introduction {#sec0005}
===============

Digital polymerase chain reaction (dPCR) is an emerging method for a growing number of applications [@bib0005]. In contrast to classical real-time PCR (qPCR), where amplification is performed in one single reaction volume (e.g., 25 μL), in dPCR the reaction mix is partitioned into thousands of tiny reaction cavities for individual PCR runs. By counting the cavities and detecting for each whether PCR amplification has taken place (positive) or not (negative), absolute copy numbers of target DNA can be calculated. Partitioning the reaction into thousands of droplets on a nanoliter (nL) scale yields a flexible and relatively cost-efficient version of dPCR, called droplet digital PCR (ddPCR). One popular system for ddPCR is Bio-Rad's QX system [@bib0010].

Defining the fluorescence threshold that separates positive from negative reactions is not always straightforward. Droplets exhibiting fluorescence between those of explicit positive and negative droplets are called 'rain'. The origin of the rain is not clear. Rain is often attributed to delayed PCR onset [@bib0015] or partial PCR inhibition in individual droplets [@bib0020]. However, it could also be a consequence of damaged positive droplets with correspondingly reduced fluorescence, or damaged negative droplets with increased background fluorescence, or a mixture of both [@bib0025]. The existence of rain can hinder analysis and the correct setting of a threshold. Several approaches exist to minimize the effects of rain on quantitative results [@bib0015], [@bib0025]. Unfortunately, existing algorithms like 'definetherain' [@bib0025] consider only the FAM channel of the QX ddPCR system, while disregarding the HEX/VIC channel.

An important task of official food and feed control in the European Union (EU) is to monitor the compliance of products with regulations related to labeling by appropriate quantitative laboratory analysis [@bib0030].
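The copy-number calculation from positive/negative droplet counts described above relies on a standard Poisson correction, since a single droplet may contain more than one target molecule. The following minimal sketch (not part of the original study's software) illustrates it; the nominal droplet volume of ~0.85 nL used for the QX system is an assumption here and should be taken from the instrument documentation in practice:

```python
import math

def ddpcr_copies(positive, total, droplet_volume_nl=0.85):
    """Estimate target copies from ddPCR droplet counts.

    Poisson correction: the mean number of target copies per droplet is
    lambda = -ln(fraction of negative droplets), because the probability
    of a droplet receiving zero copies is exp(-lambda).

    Returns (total copies across all analyzed droplets,
             copies per microliter of reaction mix).
    """
    if positive >= total:
        raise ValueError("saturated assay: no negative droplets left")
    neg_fraction = (total - positive) / total
    lam = -math.log(neg_fraction)                     # mean copies per droplet
    copies_total = lam * total                        # copies in analyzed volume
    copies_per_ul = lam / (droplet_volume_nl * 1e-3)  # 1 µL = 1000 nL
    return copies_total, copies_per_ul
```

For example, 5000 positives out of 15,000 droplets corresponds to roughly 0.41 copies per droplet, noticeably more than the naive ratio 5000/15000 would suggest.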
As the results of quantitative analysis can imply serious legal and financial consequences for producers or distributors of feed, especially in the light of Regulation (EU) No 619/2011 [@bib0035], the quantification results need to be reliable. Tolerable traces of not-yet approved GMO in feed must not exceed the so-called 'minimum required performance limit' (MRPL), which is defined as corresponding to 0.1% mass fraction of genetically modified material [@bib0035]. It should be pointed out that quantifying the GMO content of a sample at a level around 0.1% mass presents a special challenge, as official PCR quantification methods usually have a validated dynamic range from 0.1 to 4.5% mass. This means that GMO falling under the scope of Regulation (EU) No 619/2011 [@bib0035] have to be quantified at the lower end of the dynamic range of these qPCR methods. Almost all official quantitative detection methods published by the EURL-GMFF are so far based on qPCR with hydrolysis probes [@bib0040]. Several authors have, however, shown the potential of ddPCR for the analysis of genetically modified organisms (GMO) [@bib0045], [@bib0050], [@bib0055], [@bib0060], [@bib0065], [@bib0070]. Special requirements when establishing ddPCR for GMO analysis in a laboratory include the selection of validated official qPCR methods and the optimization of these assays for a ddPCR format. Differentiation between droplets with a positive reaction and negative droplets can be crucial for a correct measurement. This holds true in particular when independent transgene and plant-specific reference gene copy numbers have to be combined to determine the GM content of a sample [@bib0075]. After quantification of both the transgene and a species-specific reference gene, the corresponding mass fraction has to be calculated while considering the (assumed) zygosity of the plant tissue(s) and plant species under investigation [@bib0080].
Consideration of both FAM and HEX/VIC channels is therefore essential when transgene and reference gene are to be analyzed together in a duplex reaction. In this manuscript, a computer-based algorithm has been carefully designed to minimize the impact of rain on ddPCR analysis, offering a more objective platform for assessment of ddPCR results. Our approach graphically visualizes the effects of experimental parameter variation on the quality of droplet separation. One application is a user-friendly quick overview of the already tested variations, in order to facilitate choice of the best assay parameters for a given analytical task. 2. Materials and methods {#sec0010} ======================== 2.1. Samples {#sec0015} ------------ Certified reference materials of GMO events were purchased either from IRMM (Geel, Belgium) or from AOCS (Urbana, USA). Ground dry material was stored protected from humidity in a fridge at around 5 °C, DNA frozen at −20 °C. Multi-target plasmids for event maize NK603 were designed in-house and subsequently synthesized, propagated, purified and linearized by Eurofins-MWG (Ebersberg, Germany). Stock solutions of plasmids were kept at −80 °C, working solutions either frozen at −20 °C for long-term storage, or kept in the fridge at around 5 °C for usage within days. 2.2. DNA extraction {#sec0020} ------------------- Genomic DNA (gDNA) was extracted from 100 mg (soy) or 200 mg (maize) ground dry material with the Maxwell 16 instrument (Promega, Mannheim, Germany) using a modified protocol [@bib0085]. Some batches of isolated gDNA were further purified with the DNA Extractor Cleaning Columns Kit (Eurofins-GeneScan). Genomic DNA was not enzymatically digested prior to ddPCR unless otherwise indicated; plasmids were purchased linearized. Extracted DNA was either frozen at −20 °C for long-term storage, or kept in the fridge at around 5 °C for usage within days. 2.3.
Oligonucleotides {#sec0025} --------------------- Oligonucleotide primers and hydrolysis probes were synthesized by TIB Molbiol (Berlin, Germany), Eurofins-MWG, DNA Technology/Biosearch Technologies (Risskov, Denmark) or LifeTechnologies (formerly AppliedBiosystems, Carlsbad, USA) in HPLC grade. Oligonucleotide sequences for the GM events in this study were obtained from the official EU method collection [@bib0040]. For references on oligonucleotides see Supplementary Tables 1 and 2. Probes were labelled either with FAM (F in the matrix data), HEX (H), or VIC (V). The majority of probes were quenched with non-fluorescent black hole quenchers (without indication in the matrix data). A few probes were quenched with fluorescent TAMRA (indicated by an additional T in the matrix data). 2.4. ddPCR {#sec0030} ---------- Droplet digital PCR (ddPCR) was performed in the investigators' laboratory with either a CFX96 or T100 PCR thermocycler with gradient function (both Bio-Rad, Munich, Germany). Samples were analyzed as technical duplicates. As master mix the 'ddPCR Supermix for Probes' (Cat. No. 186-3010, Bio-Rad) was used. The total reaction volume was either 20 μL or 22 μL, containing 1× master mix, primers and probes as stated above in section 'Oligonucleotides' and 5 μL of sample DNA, or water for negative controls. Oligonucleotide concentrations were as given in the method protocols ('normal'; Supplementary Tables 1 and 2 [@bib0040]) or---if otherwise indicated---900 nM for primers and 250 nM for probes ('high'). Oligonucleotide concentrations in the matrix are given as concentrations of primer 1, primer 2, and probe. 20 μL of the reaction mixture was then loaded onto eight-channel disposable droplet generator cartridges (before 12.05.2014 Cat. No. 186-3008, from 12.05.2014 Cat. No. 186-4008, gaskets Cat. No. 186-3009, Bio-Rad). Droplets were generated with 70 μL of droplet generation oil (Cat. No. 186-3005, Bio-Rad) in the droplet generator of the QX100 system (Bio-Rad).
The generated droplets were transferred to a 96-well PCR plate (Cat. No. 0030128.613, TwinTec, Eppendorf, Hamburg, Germany). The transfer was done either with a manual 1-channel 100-μL-pipette (Reference, Eppendorf) or with an automatic 8-channel 50-μL-pipette (Rainin E8-50XLS+, filter tips Cat. No. 17002927, Mettler-Toledo, Giessen, Germany). After thermal sealing with pierceable foils in a PCR plate sealer PX1 (both Bio-Rad, foil Cat. No. 181-4040), the following temperature profile was used for PCR: 600 s 95 °C, and 45 cycles of 15 s 95 °C, and 60 s 60 °C. Temperature gradients---when indicated---on the thermocyclers CFX96 and T100 consisted of 61.0 °C, 60.7 °C, 60.0 °C, 58.8 °C, 57.4 °C, 56.2 °C, 55.4 °C, and 55.0 °C. After PCR the sealed plates were placed in the droplet reader from the QX100 system (Bio-Rad) and droplets were analyzed according to the manufacturer's recommendations (Droplet Reader Oil Cat. No. 186-3004). 2.5. Data analysis {#sec0035} ------------------ Droplet fluorescence data were initially analyzed with QuantaSoft software (Bio-Rad) versions 1.3.2.0 (from 25.09.2013) and 1.5.38.1118 (from 12.05.2014). Raw data, i.e., the fluorescence values of the droplets, were exported from QuantaSoft software into Microsoft Excel 2010. Further analysis was done using the built-in functions and self-programmed VBA algorithms in the software. An Excel tool is available upon request from the corresponding author (or optionally available online). This Excel tool semi-automatically categorizes the fluorescence values of the droplets into positive and negative classes and the so-called 'rain' in between [@bib0025]. Depending on the separation of positive and negative droplets (assay performance), an objective separation value *k* is calculated automatically.
Together with other assay parameters (e.g., average copies per droplet for transgene and reference gene, number of accepted droplets, identified rain droplets) the determined separation value *k* can be semi-automatically exported from the original Excel tool and pasted into another Excel tool called the 'matrix' (see Supplementary 'Short manuals for accompanying Excel files'). 2.6. Calculation of GMO percentages {#sec0040} ----------------------------------- Another feature of the first-mentioned Excel tool is the possibility of semi-automatic calculation of GMO percentage tables. This is achieved for duplex assays (transgene and reference gene) using Excel's built-in Table function. GMO percentages for different threshold combinations of transgene and reference gene assays can be automatically computed. Thresholds are considered separately, and the resulting GMO percentages are given at the corresponding intersections in the table. 3. Results {#sec0045} ========== 3.1. Droplet recovery {#sec0050} --------------------- The droplet digital PCR presented here is based on distributing the reaction mix into a multitude of partitions (theoretically up to 20,000). We documented the number of analyzable partitions (represented by the 'Accepted Droplets' in the software QuantaSoft) together with set-up-specific parameters for each run in our laboratory ([Fig. 1](#fig0005){ref-type="fig"}). We have analyzed more than 2,800 droplet populations so far. A droplet population results from 20 μL of master mix including sample DNA, i.e., the read-out of a single well of the PCR plate. The number of accepted droplets could be raised by replacing the manual single-channel pipette with an automatic 8-channel pipette and additionally optimizing the pipette handling procedure of the technicians ([Fig. 1](#fig0005){ref-type="fig"}).
Additionally, the transfer of the procedure to other laboratory technicians was straightforward with the automatic model, resulting in comparable numbers of droplets. Outliers with few accepted droplets (\<6,000) still occurred, but infrequently ([Fig. 1](#fig0005){ref-type="fig"}, subpopulations 1 and 2). An additional effect on the droplet distribution was observed when new cartridges (186--4008) for droplet generation and a new QuantaSoft version were used. The corresponding shift of around 1,500 more droplets per population is detectable ([Fig. 1](#fig0005){ref-type="fig"}, shift a). This shift coincides with the introduction of a new QuantaSoft version (1.5.38.1118) in our lab. A second shift was observed as a result of training and optimization effects ([Fig. 1](#fig0005){ref-type="fig"}, shift b). 3.2. Threshold setting {#sec0055} ---------------------- The droplet reader used (QX100 system) is able to discriminate between signals from two channels: FAM and HEX/VIC. In our experience droplet separation in the FAM channel was noticeably better than in the HEX/VIC channel. Although this is not always the case, this difference is more likely to occur when the positive droplets are much more abundant than the negative droplets in that channel. Therefore, in GMO duplex analysis the FAM channel was used for detection of the transgene, which is usually less abundant than the plant-specific reference gene ([Fig. 2](#fig0010){ref-type="fig"}). As an example, we analyzed the difficulties in setting a correct threshold to separate the droplet populations by calculating the corresponding GMO content of a well-characterized reference material ([Fig. 3](#fig0015){ref-type="fig"}). Using 25 different thresholds for the transgene and 15 different thresholds for the reference gene, we determined the resulting spread in calculated GMO contents ([Fig. 4](#fig0020){ref-type="fig"}). When the separation is good (57.4 °C in [Fig.
3](#fig0015){ref-type="fig"}) the variation of the results (GMO% (cp/cp))---differing by a maximum of 2% from the result obtained by using pre-defined hypothetical ideal thresholds for FAM and HEX/VIC channel (this GM content is given in the upper left corner of the tables from [Fig. 4](#fig0020){ref-type="fig"})---allows for many different positions of the thresholds to give similar GMO contents (green region in [Fig. 4](#fig0020){ref-type="fig"}). Worse separation in the reference gene, i.e., here observed at higher annealing temperatures, narrows down the region with a maximum of 2% difference for the corresponding HEX threshold (60.0 °C and 60.7 °C in [Fig. 4](#fig0020){ref-type="fig"}). Another effect of the separation and thus the assay performance is the influence on the absolute calculated GMO content. The measured reference material ([Fig. 3](#fig0015){ref-type="fig"}) had a nominal GMO content of 10% for soy event 356043. The better the distinction between positive and negative droplets, the smaller the effect of the positioning of the threshold, and, even more importantly, the closer the measured GMO content resembled the nominal value ([Fig. 4](#fig0020){ref-type="fig"}). 3.3. Temperature gradients and oligonucleotide concentrations {#sec0060} ------------------------------------------------------------- One recommended way for improving separation between positive and negative droplets in ddPCR is lowering the annealing/extension temperature of the PCR [@bib0090]. Ideally, this is tested with a thermal cycler that offers a temperature gradient function. By using such a cycler, the influence of up to eight different annealing/extension temperatures could be compared in a single PCR run. The effect of the annealing temperature on separation between positive and negative droplets was already visible for the reference gene in the threshold setting example ([Fig. 3](#fig0015){ref-type="fig"}, [Fig. 4](#fig0020){ref-type="fig"}). 
The improvement of droplet separation with reduced annealing/extension temperature was most prominent in the HEX/VIC channel but could sometimes also be observed in the FAM channel (data not shown). As the separation is generally good for the FAM-labelled assays, the consequences for HEX/VIC-labelled assays are more interesting. Here, the extent of the improvement in droplet separation varies even for the same reference gene, depending on the partner assay (transgene assay) in the duplex ddPCR ([Fig. 2](#fig0010){ref-type="fig"}). The identification---and especially the subsequent quantification---of GMO plants is based in the EU mainly on real-time PCR methods validated and published by the EURL-GMFF [@bib0040]. When starting with ddPCR, we tried to stick as closely as possible to the protocols of these official methods, encouraged by a report showing the applicability of these methods even in duplex reactions without further modification [@bib0060]. Consequently, we kept the oligonucleotide concentrations as published, and merely changed to HEX-labelled probes for reference gene assays and to black-hole quenchers where applicable for transgene and reference gene assays. The manufacturer's manual for the ddPCR master mix [@bib0095] recommends high oligonucleotide concentrations of 900 nM for primers and 250 nM for probes, which are unusual for real-time PCR in the case of GMO quantification. We tried these higher concentrations in combination with the already described temperature gradients ([Fig. 5](#fig0025){ref-type="fig"}). The higher oligonucleotide concentrations resulted in raised signals both for the positive and the negative droplets. Nevertheless, depending on the assay, the threshold setting must be given careful consideration. Additionally, there are hints of performance differences when comparing probes from different suppliers (data not shown). 3.4.
Matrix and assay selection {#sec0065} ------------------------------- In the course of establishing ddPCR for GMO analysis in our laboratory we varied several reaction parameters, such as annealing/elongation temperatures, oligonucleotide concentrations, thermal cyclers and probe manufacturers. So far, we have applied a total of 24 different assays for transgene detection (Supplementary Table 1) and 7 reference gene assays (Supplementary Table 2). For better comparison of ddPCR performance by means of droplet separation derived from different experimental parameters, we deposited our findings in an Excel table, resulting in a data matrix with currently 309 datasets. One of the most important empirical findings is the performance of the assay, expressed in the proposed continuous separation value *k* (see below). The separation value *k* incorporates the objective parameters of background fluorescence of negative droplets versus fluorescence signal from positive droplets. This is combined in the matrix with the more subjective discrete parameter of a separation rating, based on how easily an appropriate threshold can be set. Our matrix can be analyzed via the Pivot functions of Excel, resulting in Pivot charts illustrating both objective background/signal values and subjective separation ratings in a graphical way. The Pivot charts allow for presentation of condensed information from the droplet clouds, e.g., for the temperature gradients from [Fig. 2](#fig0010){ref-type="fig"} in the Pivot equivalent in [Fig. 6](#fig0030){ref-type="fig"}, or additionally for different oligonucleotide concentrations (both panels in [Fig. 5](#fig0025){ref-type="fig"}). With the aid of the matrix/Pivot charts, the information for certain assays can be quickly and easily accessed, without lengthy searches through copious tables. Exemplary overviews for soy 40-3-2 transgene and reference gene assays are depicted in [Fig. 7](#fig0035){ref-type="fig"}.
Starting from an overview, suitable assay parameters can be visually identified, or, the other way round, unsuitable parameters excluded. In this example and from the parameters tested so far, the best conditions for the assay would be as follows: primers and probe according to Kuribara et al. [@bib0100], duplex PCR, high primer and probe concentrations ([Fig. 7](#fig0035){ref-type="fig"}). 3.5. Objective separation value for classification of assay performance {#sec0070} ----------------------------------------------------------------------- Initially we manually selected a separation category (none, moderate, good or very good separation) for each assay, mainly based on the separation between positive and negative droplets, including the number of signals in between---the so-called 'rain'. As this was a subjective measure prone to differences from person to person (even the same person is likely to judge the same assay differently on another day), we strove for an objective way to classify assay performance. Based on the definition of rain from the 'definetherain' algorithm [@bib0025], we developed a measure of the separation between positive and negative signals (droplets) by determining the separation factor *k* that solves the following equation:$$k = \frac{Mean_{pos} - Mean_{neg}}{SD_{pos} + SD_{neg}}$$ The factor *k* was calculated automatically using Excel's built-in goal-seek function. In this equation, *k* is the separation value, and Mean and SD are the mean fluorescence signal and standard deviation of the positive (pos) or negative (neg) droplet population, respectively. In conclusion, the higher the value of *k*, the better the separation of positive and negative signals (droplet populations).
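As a minimal sketch (our own illustrative Python code with toy values; the implementation described here uses Excel and its goal-seek function), *k* can also be computed directly from the clustered droplet fluorescence values:

```python
from statistics import mean, stdev

def separation_value(pos_signals, neg_signals):
    """Separation value k: difference of the cluster means divided by
    the sum of the cluster standard deviations. Larger k means the
    positive and negative droplet populations are easier to separate."""
    return (mean(pos_signals) - mean(neg_signals)) / (
        stdev(pos_signals) + stdev(neg_signals))

# Toy fluorescence values (arbitrary units), not real droplet data
negatives = [800, 1000, 1200, 900, 1100]
positives = [4500, 5000, 5500, 4750, 5250]
k = separation_value(positives, negatives)  # about 7.2 for this toy data
```

Because the scatter of both populations enters the denominator, an assay with strong but noisy positive signals can still receive a low *k*.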
By comparing the calculated values for the assays with the manual categorization, the classes could be objectively defined by the borders given in Supplementary Table 3 for FAM-labelled probes, or Supplementary Table 4 for HEX/VIC-labelled probes, respectively. Different classification values for FAM and HEX/VIC probes were selected because of the generally better separation observed in the FAM channel, compared to the HEX channel. The determined classification factor is influenced by the abundance of positive droplets but is in principle quite robust in reproducibly determining a category (Supplementary Fig. 1). Our Excel tool can analyze the data by either removing the rain droplets from further calculation, or by using a manually set threshold for calculation of copies per μL and GMO contents. However, the main purpose of the developed Excel tool is not the removal of rain droplets from further analysis (as done in [@bib0025]) but rather identifying assays with good separation according to the factor *k*. The larger the separation factor, the better the separation and the easier the selection of an appropriate threshold. The determined separation value *k* is combined with other assay parameters into a single dataset. This dataset can be semi-automatically exported from the original Excel tool and pasted into another Excel tool called the 'matrix'. The matrix contains information about all performed assays (Supplementary Tables 1 and 2) including the separation values. The separation values can be classified semi-automatically into the four separation categories: none, moderate, good and very good (Supplementary Tables 3 and 4). The matrix data can be visualized using Excel's built-in Pivot diagram features. The separation categories are coded by different colours (e.g., [Fig. 6](#fig0030){ref-type="fig"}). The Excel matrix with the Pivot visualization is available upon request (or optionally available online). 3.6.
Key features of the developed Excel tool {#sec0075} --------------------------------------------- We propose a workflow starting from a run file in QuantaSoft and ending in a performance factor *k* used for classification in the Pivot matrix (for details see Supplement: Short manuals for accompanying Excel files). The Excel tool for import, export and data analysis can automatically import raw fluorescence data from files in a given folder. After selection of the correct assay type (singleplex/duplex and the fluorescence channel used), droplets are clustered iteratively based on their fluorescence signal into positive, negative and rain classes. The resulting copies per μL and GMO content are displayed. Based on the clustering, the performance factor *k* is also automatically calculated. All calculated assay parameters including *k* can be automatically exported as one dataset for later transfer to the experience matrix. This Excel tool can be used for additional calculations. The threshold setting for the FAM and HEX/VIC channels can be switched separately from the 'definetherain' algorithm to a manually set threshold, with immediate display of the resulting copies per μL and GMO content. The most powerful function of the Excel tool is the possibility to automatically generate data tables with GMO contents for 375 different combinations of FAM and HEX/VIC threshold values. One use of such a table is to directly study the magnitude of the effect caused by threshold setting. Another application is the empirical search for the best threshold by analyzing a reference material with known GMO content and unknown samples in a run. 4. Discussion {#sec0080} ============= The introduction of Regulation (EU) No 619/2011 [@bib0035] has posed an additional challenge to GMO testing laboratories in the EU as quantification in the range of 0.1% has to be accurately achieved.
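Threshold choice feeds directly into such quantifications. The threshold-combination tables described in Section 3.6 can be sketched as follows (a hypothetical Python re-implementation with simplified Poisson counting; the function and variable names are our own and the droplet data are toy values):

```python
import math

def copies_per_droplet(fluorescence, threshold):
    """Mean target copies per droplet for a given threshold (Poisson)."""
    n_neg = sum(1 for f in fluorescence if f < threshold)
    return -math.log(n_neg / len(fluorescence))

def gmo_table(fam, hex_, fam_thresholds, hex_thresholds):
    """GMO% (cp/cp) for every combination of FAM (transgene) and
    HEX/VIC (reference gene) thresholds: rows = FAM, columns = HEX/VIC."""
    return [[100.0 * copies_per_droplet(fam, t_f) / copies_per_droplet(hex_, t_h)
             for t_h in hex_thresholds]
            for t_f in fam_thresholds]

# Toy droplet fluorescence values, not real QX data
fam = [5000] * 10 + [1000] * 90   # transgene channel
ref = [5000] * 50 + [1000] * 50   # reference gene channel
table = gmo_table(fam, ref, [2000, 3000, 4000], [2000, 3000, 4000])
```

For a well-separated assay the entries of such a table barely change across threshold combinations; strongly varying entries indicate that the calculated GMO content depends on an essentially arbitrary threshold choice.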
Droplet digital PCR, with its high sensitivity and independence from standard curves (measuring absolute DNA copy numbers), is therefore a very promising tool for GMO analysis (and other applications), especially at low DNA concentrations. It remains to be shown, however, that ddPCR is really up to the task compared to qPCR as a valuable and powerful tool for quantitative GMO (DNA) analysis. Several authors have shown that ddPCR can be used for quantification of certain GMOs [@bib0060], [@bib0065], [@bib0070], [@bib0105]. Our work intends to contribute to a better understanding of the dynamics of ddPCR in GMO analysis of food and feed, offering a more objective platform for sample evaluation. 4.1. Droplet recovery {#sec0085} --------------------- Sensitivity is directly dependent on the number of analyzable partitions: the more partitions, the better the maximal achievable sensitivity. When using a chamber-based system with a fixed number of reaction cavities, the sensitivity is fixed, provided that all cavities are equally filled. In ddPCR, the master mix is distributed into a variable number of droplets [@bib0110]. The partition number is also an essential criterion of the Digital MIQE Guidelines [@bib0005] as it is directly linked to sensitivity. One of our goals when starting with ddPCR in our lab was therefore to establish a sufficient and stable number of analyzable droplets. This droplet recovery is represented by the number of software-accepted droplets per generated droplet population, i.e., the read-out of a single well of the PCR microtiter plate. The number of accepted droplets generated in our lab (around 16,000 on average) was in good accordance with or even considerably higher than values from other published GMO analyses [@bib0060], [@bib0105]. In our experience, a crucial manual step in ddPCR is the transfer of the fragile freshly generated droplets into the wells of the PCR plate with a pipette.
Transferring the droplet-containing mix at a constant low pipetting speed (suction), with the filter tips touching the wall of the microtiter plate gently and at an appropriate (steep) angle, helps to minimize mechanical disruption of the droplets. The transition towards higher accepted droplet numbers (\>16,000) after an initial training period is also reflected in [Fig. 1](#fig0005){ref-type="fig"} (shift b). We nevertheless recommend using an automatic 8-channel pipette for minimizing variations between different operators. The cost of the automatic pipette model is low compared to other ddPCR consumables. According to Bio-Rad (personal communication) an additional increase in generated droplets results from the fact that droplets generated with the 186--4008 cartridges are smaller (0.85 nL) compared to the droplets obtained with the 186--3008 cartridges, when the same 20 μL volume is used for droplet generation. Nevertheless, this information clearly contradicts the findings of Corbisier et al. [@bib0115] and Dong et al. [@bib0120], who measured a constant droplet size of 0.83--0.85 nL generated with the previous cartridges (186--3008). According to these findings, the increased number of droplets with the 186--4008 cartridges is caused by the fact that these cartridges are more efficient and transform a larger amount of the 20 μL sample into droplets. In addition, the update of the software on the droplet reader allows the droplet reader to pick up a larger volume from the PCR plate and therefore also more droplets. 4.2. Threshold setting {#sec0090} ---------------------- Differentiation between droplets with successful PCR amplification (positive droplets) and droplets without amplification (negative droplets) is the basis for the subsequent calculation of absolute copy numbers. A threshold, which can be set either manually or automatically by the software, usually separates positive from negative droplets.
In order to retain as much control as possible over the distinction between positive and negative signals (droplets), we favored manual over automatic threshold setting. When using automatic threshold setting, it is highly recommended to double-check the results obtained. Some droplets, however, fall in between and are neither clearly positive nor negative. These are usually called rain. As this rain can significantly alter the calculated copy numbers, attempts have been made to virtually eliminate the existing rain droplets from the calculation [@bib0015], [@bib0025]. Rather than mathematically excluding rain from the calculation, we believe that a more appropriate approach is an optimized assay, in which the setting of the threshold has little consequence for the calculated copy number. The correlation between appropriate threshold setting and trueness in ddPCR analysis, although quite logical, has, to the best of our knowledge, not been previously demonstrated. With our developed Excel tool, this could be demonstrated for ddPCR with transgene and reference gene assays in a duplex reaction ([Fig. 3](#fig0015){ref-type="fig"}, [Fig. 4](#fig0020){ref-type="fig"}). Relying on the built-in Excel algorithms and VBA programming, a data pool of raw ddPCR results could be semi-automatically imported into Excel. The effect of rain and adjusted threshold settings on analysis of GMO events could thus be comprehensively evaluated. 4.3. Temperature gradients and oligonucleotide concentrations {#sec0095} ------------------------------------------------------------- The methods for GMO analysis published by the EURL-GMFF [@bib0040] are based on qPCR with singleplex assays for transgene and reference gene, usually run at 60 °C annealing/elongation temperature with defined oligonucleotide concentrations.
Using these published methods for ddPCR, e.g., for ddPCR with duplex assays for transgene and reference gene, is a significant deviation from the published and validated methods. Nevertheless, such deviations have merit when they lead to better quantification of GMO contents. To improve the separation between positive and negative droplets, we lowered the annealing/extension temperature and raised the oligonucleotide concentration ([Fig. 5](#fig0025){ref-type="fig"}). Both procedures enhanced the separation in many---but not all---assays tested. We propose to use the assay protocol with the best separation that remains as close as possible to the validated PCR method. Nevertheless, the assays with the determined optimal reaction parameters may eventually have to be validated or verified. Such verification could include the determination of parameters such as precision, trueness and accuracy [@bib0125]. For certain assays and matrices, an increase in cycle number might also be advantageous [@bib0130]; however, we have not empirically tackled this possibility in our lab so far. Whether and to what extent the probe supplier and/or each production batch influences the separation of positive and negative signals is yet unclear. We saw first hints of performance differences using probes from three different suppliers. Whether this was due to different internal quality controls, or even depends on the production lot, remains unclear and an open question for future testing. 4.4. Matrix and assay selection {#sec0100} ------------------------------- As the fluorescence values for negative and positive droplets are important measures for each ddPCR assay, the Digital MIQE Guidelines [@bib0005] state that examples of end-point fluorescence values or graphic readouts should be included in manuscripts or supplementary material.
We expand on this requirement and suggest a matrix that combines these fluorescence values with an objective separation value for each tested assay ([Fig. 7](#fig0035){ref-type="fig"}). The condensed information can then be the starting point for narrowing down to specific settings, or for identification of optimization needs. In the depicted example of assays for 40-3-2 soy ([Fig. 7](#fig0035){ref-type="fig"}), the methods of the EURL-GMFF---used in duplex PCR instead of singleplex PCR---are not suitable (shown on the left, runs 312 and 316) as they have only red or orange symbols, depicting none or moderate separation, respectively. The same holds true for methods of the EURL-GMFF in singleplex assays for event 40-3-2 and lectin, respectively. The 40-3-2 method according to Kuribara [@bib0100] used as singleplex assays would yield sufficient separation (shown on the right, runs 357 and 341 for lectin), at the cost of more---and error-prone---pipetting steps and the need for additional sample DNA. Increasing the oligonucleotide concentrations did, however, significantly improve separation, even in duplex assays (run 374). When the annealing/elongation temperatures are displayed (not shown in the figure), it is clearly visible that a decrease in temperature would improve the separation at normal oligonucleotide concentrations (runs 370, 374 and 391). In general, for detection of soy event 40-3-2, it is necessary to deviate from the validated qPCR protocols in one way or another to achieve a sufficient separation of positive and negative signals (droplets). In this case, the higher oligonucleotide concentrations with the common annealing/elongation temperature of 60 °C would be the preferred assay parameters. 4.5.
Objective separation value for classification of assay performance {#sec0105} ----------------------------------------------------------------------- The existing and published algorithms for definition of rain are so far limited to the FAM channel [@bib0015], [@bib0025]. We expanded this concept to the HEX/VIC channel and added an objective separation value. This novel objective separation value gives additional (and colour-coded) information for each assay on top of the fluorescence values for negative and positive droplets ([Fig. 6](#fig0030){ref-type="fig"}). The proposed separation factor *k* cannot be calculated directly from the data in QuantaSoft, as the SD needed for the positive and negative populations is not available. In consequence, a tool that gives these SD values has to be used. This could either be our Excel spreadsheet, or another tool that is able to cluster the fluorescence values of the droplet populations, e.g., the 'definetherain' algorithm [@bib0025], or another statistics package that is capable of analyzing datasets with thousands of fluorescence unit values. Unfortunately, the 'definetherain' algorithm is not designed to process data from the HEX/VIC channel. Our developed Excel spreadsheet supports (raw) data from the QX system, both FAM and HEX/VIC channel. The separation factor *k* takes into account both the absolute fluorescence difference of negative and positive droplets, as well as the scatter of negative and positive droplet populations, respectively. It therefore combines both assay quality criteria: fluorescence difference and variation in the droplet populations. The better the separation, the wider the range for correct threshold setting ([Fig. 4](#fig0020){ref-type="fig"}). This is also reflected in the objective separation factor *k*. 
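The excerpt does not spell out the formula for *k*, so the sketch below assumes one plausible definition: the distance between the mean fluorescence of the positive and negative droplet populations, divided by the sum of their standard deviations (a larger *k* then means a wider usable threshold range). The rating cut-offs are likewise illustrative, not taken from the paper:

```python
import statistics

def separation_factor(neg_amplitudes, pos_amplitudes):
    """Objective separation between negative and positive droplet
    populations: mean fluorescence distance scaled by the spread of
    both clusters. (Assumed definition; the exact formula is not
    given in this excerpt.)"""
    mu_neg = statistics.mean(neg_amplitudes)
    mu_pos = statistics.mean(pos_amplitudes)
    sd_neg = statistics.stdev(neg_amplitudes)
    sd_pos = statistics.stdev(pos_amplitudes)
    return (mu_pos - mu_neg) / (sd_neg + sd_pos)

def rate_separation(k):
    """Map k to the colour-coded categories used in the matrix
    (cut-offs are illustrative only)."""
    if k < 2:
        return "none"
    elif k < 4:
        return "moderate"
    elif k < 8:
        return "good"
    return "very good"
```

With this definition, an assay with positives at about 10000 RFU (SD 300) over a 2000 RFU baseline (SD 200) gets k = 16, landing in the "very good" band, the pattern reported below for the 356043 transgene in the FAM channel.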
For the three representative temperatures 60.7 °C, 60.0 °C, and 57.4 °C the corresponding separation factors *k* for the reference gene (Lec-1) are 2.7, 3.2 and 4.3, for the transgene (356043) 18.4, 17.6 and 16.8, respectively. Separation for the transgene in the FAM channel is very good at all temperatures, which is visible in the broad vertical green range of 2% difference ([Fig. 4](#fig0020){ref-type="fig"}). In contrast, separation in the HEX/VIC channel is weak (reflected by separation ratings none to moderate in the matrix, [Fig. 6](#fig0030){ref-type="fig"}), visible in the considerably narrower horizontal green range of 2% difference. With increasing separation factor *k* the corresponding green range widens ([Fig. 4](#fig0020){ref-type="fig"}, from top to bottom). Cooperation with other labs to generate more datasets would be appreciated. Researchers applying the developed spreadsheet can categorize their own data in order to get a good impression of the effects of different settings on their assays. We envision the presented approach being used by researchers to investigate how different variables affect performance in their laboratories. By pooling datasets from several laboratories, valuable conclusions could be drawn on reproducibility and repeatability of ddPCR. All generated datasets could subsequently be collected in a centralized and publicly available database or archive. 4.6. Key features of the developed Excel tool {#sec0110} --------------------------------------------- The Excel tool could be upgraded to support further requirements in ddPCR analysis. One possibility would be to integrate direct analysis of samples with optimized thresholds generated for a reference material, similar to the 'definetherain' algorithm [@bib0025], but for immediate calculation of GMO contents using information from both FAM and HEX/VIC channels. 
Excel and VBA programming might not be the best approach for future developments, instead, using a dedicated programming environment (like for example R or C\#) may be better suited for implementation of the presented algorithms. The authors are open to suggestions for cooperation to implement such a transformation. 5. Conclusions {#sec0115} ============== We developed an Excel based 'experience matrix' that reflects the assay parameters of GMO ddPCR tests performed in our laboratory. We therefore propose an objective droplet separation value which is based on both absolute fluorescence signal distance of positive and negative droplet populations and the variation within these droplet populations. The droplet separation value allows for easy and reproducible assay performance evaluation. The combination of separation value with the experience matrix simplifies the choice of adequate assay parameters for a given GMO event. For transferring existing real-time PCR assays to a ddPCR platform, we recommend testing several reaction parameters and analyzing these with our developed experience matrix. Such parameters would include annealing/extension temperature and oligonucleotide concentrations. Conflict of interest {#sec0120} ==================== The authors declare that there are no conflicts of interest. Authors' contributions {#sec0125} ====================== LG designed the experimental setup, performed the data analysis, developed the corresponding MS Excel spreadsheets and drafted the manuscript. AI added experimental data, analyzed data and helped to draft the manuscript. UB provided instrumentation and helped to draft the manuscript. SP participated in the design of the study, provided reference materials, analyzed data and helped to draft the manuscript. All authors read and approved the final manuscript. Appendix A. 
Supplementary data {#sec0135} ============================== The following are Supplementary data to this article: We thank Sandra Scheuring for excellent technical assistance. The presented work was funded by the Bavarian State Ministry of the Environment and Consumer Protection (ID 63165). Figures were built with ScintiFig [@bib0135]. Supplementary data associated with this article can be found, in the online version, at [http://dx.doi.org/10.1016/j.bdq.2015.12.003](10.1016/j.bdq.2015.12.003){#intr0005}. ![**Droplet recovery:** The figure shows the numbers of accepted droplets (ordinate) for experiments on GMO analysis in our laboratory (abscissa; the numbers were given consecutively for all 2800 PCR reactions performed, duplex reactions were counted only once). Green triangles represent droplet populations transferred with a manual 1-channel pipette to the 96-well PCR plate, blue diamonds represent droplet populations transferred with an automatic 8-channel pipette; both were handled by the same technician (Tech1). Red and purple squares represent automatically transferred populations by two other technicians (Tech2, Tech3). (1) and (2) are outliers, while (a) and (b) represent shifts (see Section [3.1](#sec0050){ref-type="sec"}).](gr1){#fig0005} ![**Temperature gradients for four GM soy assays -- Droplet view:** The figure shows the droplet populations for single assays run in duplex: fluorescence amplitude (ordinate) for each droplet (abscissa). The names and percentage of the soy events measured are given on the left. Blue and green dots represent the positive droplets (above the pink horizontal threshold) for transgene and reference gene, respectively. Grey dots represent the negative droplets. A temperature gradient was applied for both the transgene and reference gene assays.](gr2){#fig0010} ![**Discrimination between positive and negative droplets:** Measurement of soy event 356043 1% ERM (BF425c). 
The figure shows the droplet populations for single assays run in duplex: fluorescence amplitude (ordinate) for each droplet (abscissa). False colours represent the droplet concentration (blue for low and red for high concentrations). A temperature gradient was applied for both the transgene and reference gene assays.](gr3){#fig0015} ![**Detailed effects of threshold setting and annealing temperature on measured GMO content:** Results for measurement of soy event 356043 10% ERM (BF425c) for three annealing temperatures ([Fig. 3](#fig0015){ref-type="fig"}). Thresholds for transgene and reference gene are given on the ordinate or abscissa, respectively. GMO percentages deviating by a maximum of 2% from the manually pre-defined thresholds (coloured in orange) are marked in green. The underlying target gene concentrations are given for comparison of the border regions.](gr4){#fig0020} ![**Effect of annealing temperature and oligonucleotide concentration:** The upper part of the figure shows the droplet populations for single assays run in duplex. The PCR setup was done with two different oligonucleotide concentrations. A temperature gradient was applied for both the transgene and reference gene assays with both oligonucleotide concentrations. The lower part of the figure shows the corresponding condensed information as Pivot charts from the matrix. For symbols refer to [Fig. 6](#fig0030){ref-type="fig"}.](gr5){#fig0025} ![**Temperature gradients for four GM soy assays -- Matrix view:** The figure shows Pivot charts from our matrix. Coloured squares represent the approximate background and coloured triangles the approximate signal for the chosen Pivot options. 
Red, orange, light green and dark green represent none, moderate, good and very good separation, respectively.](gr6){#fig0030} ![**Matrix overview for soy 40-3-2 event and reference gene assays:** The figure shows Pivot charts from the matrix for the detection of soy event 40-3-2 under various conditions (single and duplex PCR, normal and high primer and probe concentrations, probes from different suppliers, all at 60 °C annealing/extension temperature). For symbols refer to [Fig. 6](#fig0030){ref-type="fig"}.](gr7){#fig0035}
Q: Testing big endian without real big endian processor I want to test my C code for big-endian behaviour on Windows (on x86-64). How can I do this? A: Assuming that you don't have any actual big-endian hardware to hand, your best bet is to use a virtual machine such as QEMU to emulate a big-endian architecture such as SPARC or PowerPC. You'll then need to install a suitable operating system on your VM - perhaps Debian would suit your needs.
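Emulation aside, the serialization side of the problem can be exercised on the little-endian host itself: correct code should produce identical bytes regardless of CPU order. A quick illustration of the byte layouts involved, sketched here in Python rather than C (the `struct` format prefixes `>` and `<` force big- and little-endian encoding independent of the host):

```python
import struct

value = 0x01020304

# Explicit byte orders, independent of the host CPU:
big = struct.pack(">I", value)     # network/big-endian layout
little = struct.pack("<I", value)  # x86-64 native layout

assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"

# A byte swap converts one layout into the other; this is exactly the
# operation a C portability bug typically gets wrong:
assert big[::-1] == little
assert struct.unpack(">I", big)[0] == value
```

This only checks serialization logic; for code with deeper byte-order assumptions (type punning, bit fields, memcpy tricks), running the real binary under QEMU on a big-endian target, as suggested above, remains the reliable test.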
Nowadays, a communications architecture in the communications industry is basically established according to seven layers of communications protocols of an Open System Interconnection (OSI) model. The seven layers of communications protocols are: a physical layer, a link layer, a network layer, a transport layer, a session layer, a presentation layer, and an application layer. The CPRI protocol is a data transmission protocol applied at the link layer. Further, the CPRI protocol is formulated by communications equipment manufacturers and is a standard of an interface between a radio equipment controller (REC) and radio equipment (RE) that are in a radio base station. The CPRI protocol mainly includes three aspects: the 8b10b encoding and decoding protocol that is used to discover a link transmission error, and a scrambling and descrambling solution that is used to ensure good signal randomness; the High-level Data Link Control (HDLC) protocol and the Ethernet (ETH) protocol that are used to establish a connection network at the network layer; and a control word format solution for control information required for link synchronization and link maintenance. In order to ensure correct transmission of a signal between an REC and an RE, it is necessary to ensure that the foregoing three aspects of the REC and the RE are in a good operation state. A CPRI negotiation state machine can reflect whether the foregoing three aspects are in a good operation state. Further, the CPRI negotiation state machine is disposed on both the REC and the RE. Before data transmission is performed between the REC and the RE, the CPRI negotiation state machine of the REC may negotiate with the CPRI negotiation state machine of the RE, and the data transmission between the REC and the RE starts only after it is confirmed that the foregoing three aspects are in a normal state. 
In the existing CPRI protocol, a negotiation process between the CPRI negotiation state machine of the REC and the CPRI negotiation state machine of the RE mainly includes L1 layer (physical layer) synchronization negotiation, CPRI protocol version number negotiation, and HDLC capability and ETH capability negotiation. After the CPRI negotiation state machine of the REC and the CPRI negotiation state machine of the RE reach an agreement on the foregoing three aspects through negotiation, they transit to a same normal working state. In this case, the data transmission between the REC and the RE starts. During the data transmission between the REC and the RE, periodic negotiation is performed between the CPRI negotiation state machine of the REC and the CPRI negotiation state machine of the RE. Once the states of the two state machines are inconsistent, the data transmission between the REC and the RE stops. It can be seen that the CPRI negotiation state machines can ensure correct transmission of a signal between the REC and the RE. At present, a CPRI negotiation state machine is implemented mainly by a hardware product, such as a chip. It is difficult to determine an evolution or change trend of the CPRI protocol, so CPRI protocols that can be supported by manufactured hardware products are very limited. After the CPRI protocol evolves or changes, an existing hardware product cannot be compatible with the latest CPRI protocol. The evolution of the existing CPRI protocol is speeding up, with the result that the service life of the existing hardware product is greatly shortened and the production costs of manufacturers are increased.
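The negotiation sequence described above (L1 synchronization, then protocol version, then HDLC/ETH capability, with transmission allowed only while both state machines agree) can be sketched as a toy state machine. All class names, states and fields below are invented for illustration and are not taken from the CPRI specification:

```python
# Illustrative sketch only: a simplified two-party negotiation in the
# spirit of the CPRI state machine described above.

class Endpoint:
    def __init__(self, versions, capabilities):
        self.versions = set(versions)          # protocol versions supported
        self.capabilities = set(capabilities)  # e.g. {"HDLC", "ETH"}
        self.state = "L1_SYNC"

def negotiate(rec, radio):
    """Advance both endpoints in lockstep; stop at the first mismatch."""
    # L1 (physical layer) synchronization is assumed to succeed here.
    rec.state = radio.state = "VERSION"
    if not (rec.versions & radio.versions):
        return False                           # no common protocol version
    rec.state = radio.state = "CAPABILITY"
    if not (rec.capabilities & radio.capabilities):
        return False                           # no common HDLC/ETH capability
    rec.state = radio.state = "NORMAL"
    return True

def may_transmit(rec, radio):
    # Periodic check: transmission requires both sides in the same
    # normal working state, mirroring the periodic renegotiation above.
    return rec.state == radio.state == "NORMAL"
```

Two endpoints sharing a version and a capability reach "NORMAL" and may transmit; as soon as either side falls out of "NORMAL", `may_transmit` returns False and, as in the text, data transmission stops.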
Ramblings on social science, social networks, statistics, data analysis, computing, game theory and alike Three days ago Nature published a note commenting on a recent heated social media discussion about whether MS Word is better than LaTeX for writing scientific papers. The note refers to a PLOS article by Knauf & Nejasmic reporting a study on word-processor use. The overall result of that study is that participants who used Word took less time and made fewer mistakes in reproducing the probe text as compared to people who used LaTeX. I find it rather funny that Nature picked up the topic. Such discussions always seemed rather futile to me (de gustibus non disputandum est and the fact that some solution A is better or more "efficient" than B does not necessarily lead to A becoming accepted, as is the case with QWERTY vs Dvorak keyboard layouts) and far away from anything scientific. As it goes for myself, I do not like Word nor its Linux counterparts (LibreOffice, Abiword etc), let's call them WYSIWYGs. First and foremost, because I believe they are very poor text editors (as compared to Vim or Emacs): it is cumbersome to navigate or search longer texts. The fact that it is convenient to read a piece of text in, say, Times New Roman does not mean that it is convenient to write using it. Second, when writing in WYSIWYGs I always have an impression that I am handcrafting something: formatting, styles and so on. It is like sculpting: if you don't like the result you need to get another piece of wood and start from the beginning. All that seems to counter the main purpose for which the computers were developed in the first place, which is taking over "mechanistic" tasks and leaving "creative" ones to the user. I like that the Nature note referred to Markdown as an emerging technology for writing [scientific] texts. If you do not know, Markdown is a lightweight plain text format, not unlike Wikipedia markup. 
Texts written in Markdown can be processed to PDF, HTML, MSWord and so on. More and more people are using it for writing articles or even books. It is simple (plain text) and allows you to focus on writing. Last, the note still contains a popular misconception that one of the downsides of LaTeX is a lack of spell checker… Parallel coordinates plot is one of the tools for visualizing multivariate data. Every observation in a dataset is represented with a polyline that crosses a set of parallel axes corresponding to variables in the dataset. You can create such plots in R using a function parcoord in package MASS. For example, we can create such a plot for the built-in dataset mtcars: This produces the plot below. The lines are colored using a blue-to-red color ramp according to the miles-per-gallon variable. What to do if some of the variables are categorical? One approach is to use polylines with different width. Another approach is to add some random noise (jitter) to the values. Titanic data is a crossclassification of Titanic passengers according to class, gender, age, and survival status (survived or not). Consequently, all variables are categorical. Let's try the jittering approach. After converting the crossclassification (R table) to a data frame we "blow it up" by repeating observations according to their frequency in the table. This produces the following (red lines are for passengers who did not survive): It is not so easy to read, is it? Did the majority of 1st class passengers (bottom category on leftmost axis) survive or not? Definitely most of the women from that class did, but in aggregate? At this point it would be nice to, instead of drawing a bunch of lines, draw segments for different groups of passengers. Later I learned that such a plot exists and even has a name: alluvial diagram. They seem to be related to Sankey diagrams blogged about on R-bloggers recently, e.g. here. 
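The "blow it up" step (one row per passenger, repeated according to the cell counts) and the jittering of category codes are language-agnostic data manipulations; a rough Python sketch of the same idea, with made-up counts (the post itself does this in R and then calls parcoord):

```python
import random

# Made-up cross-classification for illustration: (class, sex, survived) -> count
table = {
    ("1st", "Male", "No"): 118,
    ("1st", "Female", "Yes"): 140,
    ("3rd", "Male", "No"): 387,
}

# "Blow up" the table: repeat each combination according to its
# frequency, giving one row per passenger.
rows = [combo for combo, count in table.items() for _ in range(count)]

# Jitter: map each category to an integer code and add uniform noise,
# so the polylines of identical passengers no longer coincide exactly.
levels = {"1st": 0, "3rd": 1, "Male": 0, "Female": 1, "No": 0, "Yes": 1}
jittered = [
    [levels[value] + random.uniform(-0.3, 0.3) for value in row]
    for row in rows
]
```

Plotted as parallel coordinates, the noisy codes spread each bundle of identical lines into a visible band whose thickness reflects the size of the group.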
What is more, I was not alone in thinking how to create such a thing with R, see for example here. Later I found that what I need is a "parallel set" plot, as it was called, and implemented, on CrossValidated here. That looks terrific to me; nevertheless, I would still prefer: the axes to be vertical. If the variables correspond to measurements at different points in time, then we should have nice flows from left to right. If only the segments could be smooth curves, e.g. splines or Bezier curves… The function accepts data as (collection of) vectors or data frames. The xw argument specifies the position of the knots of xspline relative to the axes. If positive, the knot is further away from the axis, which will make the stripes run horizontally for longer before turning towards the other axis. Argument gap.width specifies distances between categories on the axes. Another example shows the whole Titanic data. Red stripes are for those who did not survive. These are slides from the very first SER meeting – an R user group in Warsaw – that took place on February 27, 2014. I talked about various "lifehacking" tricks for R and focused on how to use R with GNU make effectively. I will post some detailed examples in forthcoming posts. Here are the slides from my Sunbelt 2014 talk on collaboration in science. I talked about: Some general considerations regarding collaboration or the lack of it. I have an impression that we are quite good at formulating arguments able to explain why people would like to collaborate. It's much less understood why we do not observe as much collaboration as those arguments might suggest. Some general considerations about potential data sources and their utility for studying collaboration and other types of social processes among scientists. In particular, I believe this can be usefully framed as a network boundary problem (Laumann & Marsden, 1989). 
Finally, I showed some preliminary results from studying the co-authorship network of employees of the University of Warsaw. Among other things, we see quite some differences between departments in terms of propensity to co-author (also depending on the type of co-authored work) and network transitivity. Comments welcome. And so I wrote a post on the Future of ___ PhD yesterday. Today I just learned about this shocking story about a political science PhD looking to be employed as an assistant professor at the University of Wrocław and facing shady realities of (parts of) Polish higher education… Share and beware. Fill in the blank in the title of this post with a name of a scientific discipline of choice. The Nov 1 issue of NYT features a piece "The Repurposed Ph.D. Finding Life After Academia — and Not Feeling Bad About It". The gloomy state of affairs described in the article mostly applies to humanities and social sciences, at least in the U.S., but I'm sure it applies to other countries as well. I'm sure it does to Poland too. More and more people are entering the job market with a PhD (at least in Poland, as evidence shows). At the same time, available positions are scarce and the pay is low. It is somewhat heart-warming to know that people are self-organizing into groups like "Versatile Ph.D" to support each other in such a difficult situation. The article links to several interesting pieces, including "The Future of the Humanities Ph.D. at Stanford", discussing ways of modifying humanities PhD programs so that humanities training will remain relevant in the society and economy of today. Definitely a worthy read for higher education administrators and decision makers in Poland. Google Reader was one of my main ways of reading the Internet. It was great to read news and updates from many websites. For example, I had my own "R bloggers" folder within Google Reader long before Tal Galili created R-bloggers.com. 
Unfortunately, Google is killing the Reader on July 1. There are several alternatives to the Reader; just search for "google reader alternative". Meanwhile, I switched to Feedly. It's pretty cool, although there are a couple of things that annoy me a lot, e.g.: too many content (feed/item) recommendations, and keyboard shortcuts are different than in Google Reader. The mobile app (I use Android) is also great, although a bit heavy for my Samsung Ace. Nice features include being able to (1) push feed items to Instapaper or Evernote, (2) save selected items for later reading. And so, I just browsed my Feedly "Saved for later" folder and here are a couple of interesting items from the last 30 days: A recent issue of Science brings a very cool paper by Luís M. A. Bettencourt explaining the scaling properties of cities: how things like GDP, crime, traffic congestion etc. depend on city size. Descriptively, the relationships seem to follow a simple power-law relation (see this presentation by Geoffrey West). However, as the paper shows, explaining it is not that simple and involves considering many types of interactions and interdependencies. To finish on a somewhat less geeky note, Warsaw National Museum has a temporary exhibition of Mark Rothko featuring his works from the National Gallery of Art in Washington DC, which is the first Polish exhibition of Rothko's works ever. Accompanying the exhibition, there is a lovely children's guide by Zosia Dzierżawska. R has a built-in collection of 657 colors that you can use in plotting functions by using color names. There are also various facilities to select color sequences more systematically: Color palettes and ramps available in packages RColorBrewer and colorRamps. R base functions colorRamp and colorRampPalette that you can use to create your own color sequences by interpolating a set of colors that you provide. R base functions rgb, hsv, and hcl that you can use to generate (almost) any color you want. 
When producing data visualizations, the choice of proper colors is often a compromise between the requirements dictated by the data visualisation itself and the overall style and color of the article/book/report that the visualization is going to be an element of. Choosing an optimal color palette is not so easy and it's handy to have some reference. Inspired by a sheet by Przemek Biecek I created a variant of an R color reference sheet showing different ways in which you can use and call colors in R when creating visualizations. The sheet fits A4 paper (two pages). On the first page it shows a matrix of all the 657 colors with their names. On the second page, on the left, all palettes from the RColorBrewer package are displayed. On the right, selected color ramps available in base R (base package grDevices) and in the contributed package colorRamps. Miniatures below: Below is a gist with the code creating the sheet as a PDF "rcolorsheet.pdf". Instead of directly reusing Przemek's code I have rewritten the parts that produce the first page (built-in color names) and the part with the ramps using the image function. I think it is much simpler, less low-level for-looping and a bit more extensible. For example, it is easy to extend the collection of color ramps by providing just an additional function name in the form packagename::functionname to the funnames vector (any extra package would have to be loaded at the top of the script).
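Conceptually, colorRampPalette interpolates between a short list of anchor colors to produce a palette of any length; a rough Python stand-in for that idea (R's version additionally handles color spaces, bias and alpha, which this ignores):

```python
def color_ramp(anchors, n):
    """Linearly interpolate a list of RGB anchor colors into n colors.
    anchors: list of (r, g, b) tuples in 0-255. A simplified stand-in
    for R's colorRampPalette; no gamma or color-space handling."""
    if n == 1:
        return [anchors[0]]
    out = []
    segments = len(anchors) - 1
    for i in range(n):
        pos = i / (n - 1) * segments      # position along the anchor chain
        seg = min(int(pos), segments - 1)  # which pair of anchors we are between
        t = pos - seg                      # fraction within this segment
        a, b = anchors[seg], anchors[seg + 1]
        out.append(tuple(round(a[c] + (b[c] - a[c]) * t) for c in range(3)))
    return out

# A blue-to-red ramp, as used for the mpg colouring in the parcoord example:
palette = color_ramp([(0, 0, 255), (255, 0, 0)], 5)
```

Here `palette[0]` is pure blue, `palette[-1]` pure red, and the middle entry an even purple, i.e. the same kind of blue-to-red ramp used earlier for the miles-per-gallon colouring.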
Rapid detection of hypoxia-inducible factor-1-active tumours: pretargeted imaging with a protein degrading in a mechanism similar to hypoxia-inducible factor-1alpha. Hypoxia-inducible factor-1 (HIF-1) plays an important role in malignant tumour progression. For the imaging of HIF-1-active tumours, we previously developed a protein, POS, which is effectively delivered to and selectively stabilized in HIF-1-active cells, and a radioiodinated biotin derivative, (3-(123)I-iodobenzoyl)norbiotinamide ((123)I-IBB), which can bind to the streptavidin moiety of POS. In this study, we aimed to investigate the feasibility of the pretargeting method using POS and (123)I-IBB for rapid imaging of HIF-1-active tumours. Tumour-implanted mice were pretargeted with POS. After 24 h, (125)I-IBB was administered and subsequently, the biodistribution of radioactivity was investigated at several time points. In vivo planar imaging, comparison between (125)I-IBB accumulation and HIF-1 transcriptional activity, and autoradiography were performed at 6 h after the administration of (125)I-IBB. The same sections that were used in autoradiographic analysis were subjected to HIF-1alpha immunohistochemistry. (125)I-IBB accumulation was observed in tumours of mice pretargeted with POS (1.6%ID/g at 6 h). This result is comparable to the data derived from (125)I-IBB-conjugated POS-treated mice (1.4%ID/g at 24 h). In vivo planar imaging provided clear tumour images. The tumoral accumulation of (125)I-IBB significantly correlated with HIF-1-dependent luciferase bioluminescence (R=0.84, p<0.01). The intratumoral distribution of (125)I-IBB was heterogeneous and was significantly correlated with HIF-1alpha-positive regions (R=0.58, p<0.0001). POS pretargeting with (123)I-IBB is a useful technique in the rapid imaging and detection of HIF-1-active regions in tumours.
Friday, February 3, 2012 DIY Labels! Have I mentioned before how much I love Pinterest? Seriously, I get "sew-n-so" inspired as I waste, er, spend time lurking thru the gazillions of photos! Today's tutorial is Pinterest-inspired! It seems I ALWAYS need labels, especially when I am fresh out of them! With this tutorial, I will never - ever have to order or wait for an order to arrive again. I can make them on the fly, and they are both time and money-savers...and that makes me very happy! These labels are just my testers; there is still some tweaking that needs to be done to make them even better. I am very pleased with the results tho and they are amazingly easy to create! I followed a tutorial from Pinterest, but there were a few things I didn't like about that tutorial, besides the fact that I had a few of my own ideas that I wanted to try. If you remember, several years ago I created a tutorial on making your own labels, but my new method is 100% better! I went to my local Hobby Lobby store and bought some Iron-on transfer paper for Ink Jet Printers. This is an awesome product that allows you to use your ink jet printer and print onto a film that is on paper and then iron it onto fabric! I also bought a couple packages of Wrights Twill Tape in 2 sizes. I created my logo that I wanted on my label in Pixelator - you can use whichever program you have on your computer to create text with graphics. You can also use an online source, such as Picnik. Adding in wing-dings with just your name is also quite cute! Once you have your logo created, import it into a word processing document - I am using Pages on my mac, but Windows users can use any word processing program. Mirror image the design - it HAS to be mirror imaged to work. I created 4 columns and then copied and pasted as many times as I could for the page. Next step is to load the Print 'n Press paper into the printer. 
Make sure that you load the paper in correctly - instructions for that are in the packet of paper. Print your page and it will look like this: a whole sheet of mirror-imaged logos! Here's a closer look: for this 2nd label, I used wing-dings on my computer for the ornamental design! Also remember that the sky is the limit for colors you want to use - all colors and multiple colors in your logo will work just fine! The next step I took was to cut rows of the logos with a paper cutter. This just makes it easy to have your logos evenly spaced on the Twill Tape and make several at a time! I wouldn't make much more than four at a time. Lay the printed side down on the Twill Tape - you should be able to easily line up the logo onto the Twill Tape. If it's hard to see, then you can lay the Twill Tape on top of the printed transfer paper - it really doesn't matter. Using your iron, you will transfer the logo to the Twill Tape. Don't rub back and forth - just press and move to another area - you can also flip it over and press again. When you peel the paper away, you are left with the logo imprinted on the Twill Tape! It's just as easy as that! The logo will have a sort of a film on top - this makes it so that your label can be washed and the logo won't wash away! The nice thing about using Twill Tape is that all the edges are finished, you can cut off each label as you need it and then either fray check the cut sides, turn under and stitch, or zigzag to finish off. I hope I have inspired you to create your own unique labels! I know I am going to be making all kinds of them!
You have access to my exact setup (Mac, Hobby Lobby, Ink Jet Printer, Wing-Dings. It's like it was made just for me. I appreciate the tutorial. I'm getting these supplies this week and labeling all my quilts I've made so far, which I have thus far failed to do. Thanks!
397 F.Supp.2d 334 (2005) Aida Esther RIVERA-QUIÑONES, et al., Plaintiffs, v. Victor RIVERA-GONZALEZ, et al., Defendants. No. CIV. 03-2326(RLA). United States District Court, D. Puerto Rico. October 28, 2005. *335 *336 *337 *338 José R. Olmo-Rodríguez, Esq., Olmo & Rodriguez-Matias, San Juan, PR, for Plaintiffs. Jo Ann Estades-Boyer, Esq., Maria Eugenia Villares-Seneriz, Esq., Department of Justice, Commonwealth of Puerto Rico, San Juan, PR, for Defendants. *339 ORDER DENYING MOTION TO DISMISS ACOSTA, District Judge. Several of the defendants sued in this action have moved the court to dismiss the complaint pursuant to the provisions of Rule 12(b)(1) and (6) Fed.R.Civ.P. The court having reviewed plaintiffs' response thereto hereby denies the request. BACKGROUND This is an action for money damages instituted by the relatives of decedent, JULIO ENRIQUE SANTOS RIVERA ("JULIO ENRIQUE"), who died on December 15, 2002 while incarcerated in a Puerto Rico state prison. Specifically, the plaintiffs are: decedent's two children who, under Puerto Rico law, inherit JULIO ENRIQUE's constitutional deprivation claim, his mother, stepfather and siblings. The complaint charges a violation of 42 U.S.C. § 1983 for alleged failure to provide JULIO ENRIQUE with adequate protection from attacks by other inmates as well as deliberate indifference to his medical needs. Plaintiffs also plead tort claims under our supplemental jurisdiction. In their motion to dismiss defendants argue that: (1) the complaint fails to state a valid claim; (2) plaintiffs lack standing under § 1983; (3) failure to exhaust administrative remedies; (4) plaintiffs cannot sue under the theory of respondeat superior; (5) plaintiffs have no due process claim; and (6) defendants are entitled to qualified immunity. SUBJECT MATTER JURISDICTION The proper vehicle for challenging a court's subject-matter jurisdiction is not Rule 12(b)(6) but rather Fed.R.Civ.P. 12(b)(1). 
In ruling on motions to dismiss for lack of subject matter jurisdiction the court is not constrained to the allegations in the pleadings as with Rule 12(b)(6) petitions. The plaintiff's jurisdictional allegations are given no presumptive weight and the court is required to address the merits of the jurisdictional claim by resolving the factual disputes between the parties. Further, the court may review extra-pleading material without transforming the petition into a summary judgment vehicle. Gonzalez v. United States, 284 F.3d 281, 288 (1st Cir.2002); Aversa v. United States, 99 F.3d 1200, 1210 (1st Cir.1996). Even though the court is not circumscribed to the allegations in the complaint in deciding a jurisdictional issue brought pursuant to Rule 12(b)(1) Fed.R.Civ.P. and that it may also take into consideration "extra-pleading material", 5A Charles A. Wright & Arthur R. Miller, Federal Practice and Procedure § 1350 (2d ed.1990) p. 213, "[w]here movant has challenged the factual allegations of the party invoking the district court's jurisdiction, the invoking party `must submit affidavits and other relevant evidence to resolve the factual dispute regarding jurisdiction.'" Johnson v. United States, 47 F.Supp.2d 1075, 1077 (S.D.Ind.1999) (citing Kontos v. United States Dept. of Labor, 826 F.2d 573, 576 (7th Cir.1987)). In ruling on a motion to dismiss for lack of subject matter jurisdiction under Fed.R.Civ.P. 12(b)(1), the district court must construe the complaint liberally, treating all well-pleaded facts as true and indulging all reasonable inferences in favor of the plaintiff. In addition, the court may consider whatever evidence has been submitted, such as the depositions and exhibits submitted in the case. Aversa v. United States, 99 F.3d at 1210-11 (citations omitted). See also, Shrieve v. 
United States, 16 F.Supp.2d 853, 855 (N.D.Ohio 1998) ("In ruling on such a motion, the district court may resolve factual *340 issues when necessary to resolve its jurisdiction.")

Federal courts are courts of limited jurisdiction and hence, have the duty to examine their own authority to preside over the cases assigned. "It is black-letter law that a federal court has an obligation to inquire sua sponte into its own subject matter jurisdiction." McCulloch v. Velez, 364 F.3d 1, 5 (1st Cir.2004). See also, Bonas v. Town of N. Smithfield, 265 F.3d 69, 73 (1st Cir.2001) ("Federal courts are courts of limited jurisdiction, and therefore must be certain that they have explicit authority to decide a case"); Am. Fiber & Finishing, Inc. v. Tyco Healthcare Group LP, 362 F.3d 136, 138 (1st Cir.2004) ("In the absence of jurisdiction, a court is powerless to act.").

If jurisdiction is questioned, the party asserting it has the burden of proving a right to litigate in this forum. McCulloch v. Velez, 364 F.3d at 6. "Once challenged, the party invoking diversity jurisdiction must prove [it] by a preponderance of the evidence." Garcia Perez v. Santaella, 364 F.3d 348, 350 (1st Cir.2004). See also, Mangual v. Rotger-Sabat, 317 F.3d 45, 56 (1st Cir.2003) (party invoking federal jurisdiction has burden of establishing it).

EXHAUSTION OF ADMINISTRATIVE REMEDIES

Defendants contend that we lack jurisdiction to entertain the claims asserted under 42 U.S.C. § 1983 for failure to exhaust administrative remedies as required by the Prison Reform Litigation Act of 1995 (PRLA). This statute, in pertinent part, reads:

No action shall be brought with respect to prison conditions under section 1983 of this title, or any other Federal law, by a prisoner confined in jail, prison, or other correctional facility until such administrative remedies as are available are exhausted.

42 U.S.C. § 1997e(a).

"[E]xhaustion in cases covered by § 1997e(a) is ... mandatory." Porter v. Nussle, 534 U.S.
516, 524, 122 S.Ct. 983, 988, 152 L.Ed.2d 12 (2002); Booth v. Churner, 532 U.S. 731, 739, 121 S.Ct. 1819, 149 L.Ed.2d 958 (2001).

Applicability of the PLRA, however, is limited to individuals who are imprisoned at the time the suit is filed. Janes v. Hernandez, 215 F.3d 541, 543 (5th Cir.2000); Doe v. Washington County, 150 F.3d 920, 924 (8th Cir.1998); Kerr v. Puckett, 138 F.3d 321, 322-23 (7th Cir.1998). "[L]itigants ... who file prison condition actions after release from confinement are no longer `prisoners' for purposes of § 1997e(a) and, therefore, need not satisfy the exhaustion requirements of this provision." Greig v. Goord, 169 F.3d 165, 167 (2nd Cir.1999).

It appearing for obvious reasons that at the time the complaint was filed decedent was no longer "confined" for purposes of the PRLA we find the exhaustion requirement inapposite in this case.

FAILURE TO STATE A CLAIM

In disposing of motions to dismiss pursuant to Rule 12(b)(6) Fed.R.Civ.P. the court will accept all factual allegations as true and will make all reasonable inferences in plaintiff's favor. Frazier v. Fairhaven Sch. Com., 276 F.3d 52, 56 (1st Cir.2002); Alternative Energy, Inc. v. St. Paul Fire and Marine Ins. Co., 267 F.3d 30, 33 (1st Cir.2001); Berezin v. Regency Sav. Bank, 234 F.3d 68, 70 (1st Cir.2000); Tompkins v. United Healthcare of New England, Inc., 203 F.3d 90, 92 (1st Cir.2000).

*341 Our scope of review under this provision is a narrow one. Dismissal will only be granted if after having taken all well-pleaded allegations in the complaint as true, the Court finds that plaintiff is not entitled to relief under any theory. Brown v. Hot, Sexy and Safer Prods., Inc., 68 F.3d 525, 530 (1st Cir.1995) cert. denied, 516 U.S. 1159, 116 S.Ct. 1044, 134 L.Ed.2d 191 (1996); Vartanian v. Monsanto Co., 14 F.3d 697, 700 (1st Cir.1994). Further, our role is to examine the complaint to determine whether plaintiff has adduced sufficient facts to state a cognizable cause of action.
Alternative Energy, 267 F.3d at 36. The complaint will be dismissed if the court finds that under the facts as pleaded plaintiff may not prevail on any possible theory. Berezin, 234 F.3d at 70; Tompkins, 203 F.3d at 93.

§ 1983 ELEMENTS

The complaint charges violation of 42 U.S.C. § 1983 which reads:

Every person who, under color of any statute, ordinance, regulation, custom or usage, of any State or Territory, subjects, or causes to be subjected, any citizen of the United States or other person within the jurisdiction thereof to the deprivation of any rights, privileges, or immunities secured by the Constitution and laws, shall be liable to the party injured in an action at law, suit in equity, or other proceeding for redress.

Section 1983 does not create substantive rights but is rather a procedural mechanism for enforcing constitutional or statutory rights. Albright v. Oliver, 510 U.S. 266, 114 S.Ct. 807, 127 L.Ed.2d 114 (1994). Hence, it is plaintiffs' burden to identify the particular underlying constitutional or statutory right that is sought to be enforced via judicial proceedings.

In order to prevail in a § 1983 claim plaintiff must bring forth evidence: (1) that defendant acted "under color of state law"; and (2) of deprivation of a federally protected right. Rogan v. City of Boston, 267 F.3d 24 (1st Cir.2001); DiMarco-Zappa v. Cabanillas, 238 F.3d 25, 33 (1st Cir.2001); Collins v. Nuzzo, 244 F.3d 246 (1st Cir.2001); Barreto-Rivera, 168 F.3d at 45.

All parties concede that the defendants were acting within the scope of their duties as state officers at all relevant times. Therefore, the first element is satisfied. We must then ascertain whether decedent was deprived of any federally protected right as a result of the events which culminated in his demise.

Standing

Defendants argue that decedent's mother, stepfather and siblings have no § 1983 claim inasmuch as they did not personally suffer a constitutional deprivation.
Constitutional deprivation suits must be brought by the individuals affected by the particular acts or omissions under attack. Nunez Gonzalez v. Vazquez Garced, 389 F.Supp.2d 214, 218-19 (D.P.R.2005); Reyes Vargas v. Rosello Gonzalez, 135 F.Supp.2d 305, 308 (D.P.R.2001). In this vein it has been held that relatives may not assert § 1983 claims for the death of a family member as a result of unconstitutional conduct unless the challenged action is directed at their family relationship. Robles Vazquez v. Tirado Garcia, 110 F.3d 204, 206 n. 4 (1st Cir.1997); Nunez Gonzalez; Reyes Vargas.

However, when there is evidence of decedent having suffered as a result of the constitutional deprivation Puerto Rico law allows his heirs to assert this claim on decedent's behalf. "As such, [decedent's] son and legal heir, has standing to bring *342 the present § 1983 in his representative capacity." Gonzalez Rodriguez v. Alvarado, 134 F.Supp.2d 451, 454 (D.P.R.2001).

As plaintiffs concede, in this particular case only ALINAID and JULIO GADIEL SANTOS MARRERO, decedent's children, have plead a § 1983 claim and limited to their parent's pain and suffering as a result of defendants' unconstitutional acts and/or omissions which purportedly resulted in his death. The remaining claims arising from decedent's demise are based on tort and may be prosecuted by the individual plaintiffs under our supplemental jurisdiction.

Plaintiffs having asserted valid claims as to which there is original federal jurisdiction, defendants' request that these be dismissed is DENIED. The court will entertain all related local causes of action asserted by plaintiffs as provided for in 28 U.S.C. § 1367.

Supervisory Liability

Defendants further seek to dismiss the § 1983 causes of action alleging that "[none of the facts in the complaint can affirmatively link the constitutional violations alleged by the plaintiffs] to the supervisors' actions or inaction." Motion to Dismiss (docket No. 34) p. 16.
The doctrine of respondeat superior, whereby liability is imposed on employers for the acts or omissions of their employees is inapposite in actions brought under § 1983. Supervisors will be held accountable under this provision solely "on the basis of [their] own acts or omissions". Barreto-Rivera, 168 F.3d at 48; Diaz v. Martinez, 112 F.3d 1, 4 (1st Cir.1997); Maldonado-Denis v. Castillo-Rodriguez, 23 F.3d 576, 581 (1st Cir.1994); Gutierrez-Rodriguez v. Cartagena, 882 F.2d 553, 562 (1st Cir.1989). Rather, "[s]uch liability can arise out of participation in a custom that leads to a violation of constitutional rights, or by acting with deliberate indifference to the constitutional rights of others." Diaz v. Martinez, 112 F.3d at 4 (citations omitted). Further, for plaintiffs to prevail they must bring evidence of a causal connection or relationship between the alleged misconduct and the supervisor's acts or omissions. "A supervisory officer may be held liable for the behavior of his subordinate officers where his action or inaction [is] affirmative[ly] link[ed]... to that behavior in the sense that it could be characterized as supervisory encouragement, condonation or acquiescence or gross negligence amounting to deliberate indifference." Wilson v. Town of Mendon, 294 F.3d 1, 6 (1st Cir.2002) (citations and internal quotations omitted); Figueroa-Torres v. Toledo-Davila, 232 F.3d 270, 279 (1st Cir.2000); Barreto-Rivera, 168 F.3d at 48; Maldonado-Denis, 23 F.3d at 582; Gutierrez-Rodriguez, 882 F.2d at 562. As further discussed below, a careful reading of the complaint leads to the inescapable conclusion that the § 1983 claims against the supervisors in the line of command both at the Department of Corrections as well as in the Health Department are specifically based on each individual's acts or omissions which are also linked to the claims of lack of security and medical care. 
Due Process

The complaint alleges violations of the Fifth, Eighth and Fourteenth Amendments to the Constitution of the United States. Defendants argue that as a sentenced prisoner decedent's treatment and conditions of confinement fall within the purview of the Eighth Amendment rather than the Due Process Clause. Defendants' argument is correct but has no practical consequence on the issues presented in this litigation.

*343 Conditions of confinement and the treatment afforded sentenced prisoners is subject to the Eighth Amendment restraints against "cruel and unusual punishments". Farmer v. Brennan, 511 U.S. 825, 832, 114 S.Ct. 1970, 1976, 128 L.Ed.2d 811, 822 (1994); Giroux v. Somerset County, 178 F.3d 28, 31 (1st Cir.1999). Pretrial detainees, on the other hand, "are protected under the Fourteenth Amendment Due Process Clause rather than the Eighth amendment; however, the standard to be applied is the same as that used in Eighth Amendment cases." Burrell v. Hampshire County, 307 F.3d 1, 7, (1st Cir.2002). "[C]onstitutional protection is available to pretrial detainees through the Due Process Clause of the Fourteenth Amendment and is `at least as great as the Eighth Amendment protections available to a convicted prisoner.'" Calderon-Ortiz v. Laboy-Alvarado, 300 F.3d 60, 64 (1st Cir.2002) (citing City of Revere v. Mass. Gen. Hosp., 463 U.S. 239, 244, 103 S.Ct. 2979, 77 L.Ed.2d 605 (1983)).

"The rights implicated here [lack of medical care] are the due process protections afforded a pre-trial detainee under the Fourteenth Amendment" Gaudreault, 923 F.2d at 208; Ferris v. County of Kennebec, 44 F.Supp.2d 62, 67 n. 2; Jesionowski v. Beck, 937 F.Supp. 95, 101 (D.Mass.1996). "The boundaries of this duty have not been plotted exactly; however, it is clear that they extend at least as far as the protection that the Eighth Amendment gives to a convicted prisoner." Gaudreault, 923 F.2d at 208.
Eighth Amendment

Under the Eighth Amendment prison officials must guarantee the safety of prisoners including protection from attacks from fellow inmates. Farmer v. Brennan, 511 U.S. at 833, 114 S.Ct. at 1976, 128 L.Ed.2d at 822; Calderon-Ortiz, 300 F.3d at 64; Giroux, 178 F.3d at 32. Additionally, there is a duty to care for a prisoner's "serious medical needs." Estelle v. Gamble, 429 U.S. 97, 104, 97 S.Ct. 285, 50 L.Ed.2d 251 (1976); Garcia v. City of Boston, 253 F.3d 147, 150 (1st Cir.2001).

"A prison official's deliberate indifference to a substantial risk of serious harm to an inmate violates the Eighth Amendment." Farmer v. Brennan, 511 U.S. at 828, 114 S.Ct. at 1979, 128 L.Ed.2d at 820 (citations omitted). The deliberate indifference standard is utilized for both safety and medical Eighth Amendment claims. See, Alsina-Ortiz v. Laboy, 400 F.3d 77, 80 (1st Cir.2005).

"[A] prison official may be held liable under the Eighth Amendment for denying humane conditions of confinement only if he knows that inmates face a substantial risk of serious harm and disregards that risk by failing to take reasonable measures to abate it." Farmer v. Brennan, 511 U.S. at 847, 114 S.Ct. at 1984, 128 L.Ed.2d at 832. "Willful blindness and deliberate indifference are not mere negligence; these concepts are directed at a form of scienter in which the official culpably ignores or turns away from what is otherwise apparent." Alsina-Ortiz v. Laboy, 400 F.3d 77, 82 (1st Cir.2005). Under the "deliberate" requirement the official must be cognizant of circumstances which presuppose that the inmate is subject to substantial risk of serious harm. Burrell, 307 F.3d at 8.
See also, Calderon-Ortiz, 300 F.3d at 64 ("plaintiffs must show: (1) the defendant knew of (2) a substantial risk (3) of a serious harm and (4) disregarded that risk"); Giroux, 178 F.3d at 32 (standard requires "an actual, subjective appreciation of risk.")

Security

The complaint describes in detail the horrid attack decedent was subjected to *344 whereby he was forcibly intoxicated with morphine by fellow prisoners which eventually caused his death by overdose. There are also allegations regarding defendants' failure to classify prisoners to avoid harm, inadequate supervision, failure to avoid the illegal entry of drugs, failure to conduct adequate searches and failure to adequately supervise the penal population all of which allowed inmate practices which resulted in danger to the lives and body integrity of prisoners.

The facts as adequately pled establish that all defendants had sufficient information from which an inference of substantial risk of serious harm to the inmates, including plaintiff, could be drawn. In other words, based on the serious security deficiencies in the Ponce institution it is reasonable to conclude that defendants were or should have been aware of the unreasonable risk of assault and death by drug intoxication posed to inmates.

Medical needs

The complaint specifically alleges that decedent was suffering from a serious condition resulting from drug intoxication which required medical care. Plaintiffs further claim that defendants were aware that there was no "reasonably adequate emergency medical care, equipment or facilities" available to the inmates and yet took no measures to correct this situation. Complaint ¶ 45. Additionally, plaintiffs aver that the defendants were cognizant of the serious risks posed by drug intoxication in the prison population. Id. ¶ 46. According to the complaint the practice of not providing inmates "with necessary medical care, equipment, facilities or access to it" is long standing. Id. ¶ 40.
Additionally, there is a "shortage of staff and equipment needed to provide inmates with access to medical care [which] poses a serious risk to the health of the prisoners." Opposition (docket No. 54) p. 5. See also, Complaint ¶ 41. The pleading also avers with particularity defendants' awareness of the risks of harm resulting from their failure to provide "reasonably adequate medical care, equipment or facilities" to the prison population, Complaint ¶ 45, as well as "the high number of drug intoxication cases at Ponce [prison]... and did nothing". Id. ¶ 49. See also, ¶ 45 ("took no measures to provide reasonably adequate emergency medical care or access to it").

Thus, we find that plaintiffs have met their burden of adequately pleading a § 1983 claim based on deliberate indifference to decedent's security and medical needs.

QUALIFIED IMMUNITY

Qualified immunity shields officials from having to pay for damages resulting from violations of § 1983 provided certain particular circumstances are present. "The doctrine of qualified immunity `provides a safe harbor for a wide range of mistaken judgments.'" Kauch v. Dep't for Children, Youth and Their Families, 321 F.3d 1, 4 (1st Cir.2003) (citing Hatch v. Dep't for Children, Youth and Their Families, 274 F.3d 12, 19 (1st Cir.2001)).

The court will follow a three-part inquiry in ascertaining whether or not a defendant is entitled to protection. Initially, the court will consider "whether the plaintiff's allegations, if true, establish a constitutional violation." Whalen v. Mass. Trial Court, 397 F.3d 19, 23 (1st Cir.2005). If so, the court will proceed to determine "whether the right was clearly established at the time of the alleged violation." Id. "Finally, we ask whether a similarly situated reasonable official would have understood that the challenged action violated that right." Id.
*345 Thus, "qualified immunity remains available to defendants who demonstrate that they acted objectively reasonably in applying clearly established law to the specific facts they faced." Burke v. Town of Walpole, 405 F.3d 66, 68 (1st Cir.2005). In other words, whether "an objectively reasonable official in the defendants' position would not necessarily have understood that his action violated plaintiff's rights". Whalen, 397 F.3d at 28. Because qualified immunity is an affirmative defense it is defendant's burden to present evidence of its applicability. DiMarco-Zappa, 238 F.3d at 35.

In their motion defendants do not contend that deliberate indifference to the safety and medical needs of inmates was a clearly established constitutional protection in December 2002 when the events giving rise to this litigation took place. Rather, they argue that they are entitled to the defense "because [defendants] were aware of the applicable law and regulations at the time the alleged acts took place, and acted according to them. The established policies did not in any way violate Plaintiffs' or decedent's rights in a reckless or deliberate indifference (sic) manner... Defendants are shielded from liability by the doctrine of qualified immunity since some of Plaintiffs' allegations do not invoke a constitutional right, and also because it was objectively reasonable for Defendants to believe that their actions would not violate Plaintiffs' rights." Motion to Dismiss (docket No. 34) pp. 18-19.

However, the motion currently before the court is not a summary judgment petition but rather the request is premised on dismissal for failure to state a claim under Rule 12(b)(6) which mandates that the court accept all factual allegations of the complaint as true. As previously discussed the complaint specifically charges defendants with violations of decedent's Eighth Amendment rights based on their deliberate indifference to his security and medical needs.
Based on the facts as pled the court cannot possibly infer defendants' objective belief that their conduct was constitutionally sound.

CONCLUSION

Based on the foregoing, Defendants' Motion to Dismiss (docket No. 34) is DENIED.[1]

IT IS SO ORDERED.

NOTES

[1] See, Motion to Join filed by codefendant VICTOR RIVERA GONZALEZ (docket No. 44) and plaintiffs' Opposition (docket No. 54).
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>
#include <locale.h>
#include "../mb/mbctype.h"
#include "../local.h"

int
wctomb (char *s, wchar_t wchar)
{
#ifdef _MB_CAPABLE
  mbstate_t *ps;

  ps = &(TLSGetCurrent()->MbState);
  return __WCTOMB (s, wchar, ps);
#else /* not _MB_CAPABLE */
  if (s == NULL)
    return 0;

  /* Verify that wchar is a valid single-byte character.  */
  if ((size_t)wchar >= 0x100)
    {
      _set_errno(EILSEQ);
      return -1;
    }

  *s = (char) wchar;
  return 1;
#endif /* not _MB_CAPABLE */
}

int
_wctomb_r (char *s, wchar_t _wchar, mbstate_t *state)
{
  return __WCTOMB (s, _wchar, state);
}

int
__ascii_wctomb (char *s, wchar_t _wchar, mbstate_t *state)
{
  /* Avoids compiler warnings about comparisons that are always false
     due to limited range when sizeof(wchar_t) is 2 but sizeof(wint_t)
     is 4, as is the case on cygwin.  */
  wint_t wchar = _wchar;

  if (s == NULL)
    return 0;

  /* Unused parameters */
  _CRT_UNUSED(state);

#ifdef __CYGWIN__
  if ((size_t)wchar >= 0x80)
#else
  if ((size_t)wchar >= 0x100)
#endif
    {
      _set_errno(EILSEQ);
      return -1;
    }

  *s = (char)wchar;
  return 1;
}

#ifdef _MB_CAPABLE
/* for some conversions, we use the __count field as a place to store
   a state value */
#define __state __count

int
__utf8_wctomb (char *s, wchar_t _wchar, mbstate_t *state)
{
  wint_t wchar = _wchar;
  int ret = 0;

  if (s == NULL)
    return 0; /* UTF-8 encoding is not state-dependent */

  if (sizeof(wchar_t) == 2 && state->__count == -4
      && (wchar < 0xdc00 || wchar > 0xdfff))
    {
      /* There's a leftover lone high surrogate.  Write out the CESU-8
         value of the surrogate and proceed to convert the given
         character.  Note to return extra 3 bytes. */
      wchar_t tmp;

      tmp = (state->__value.__wchb[0] << 16 | state->__value.__wchb[1] << 8)
            - (0x10000 >> 10 | 0xd80d);
      *s++ = 0xe0 | ((tmp & 0xf000) >> 12);
      *s++ = 0x80 | ((tmp & 0xfc0) >> 6);
      *s++ = 0x80 | (tmp & 0x3f);
      state->__count = 0;
      ret = 3;
    }
  if (wchar <= 0x7f)
    {
      *s = wchar;
      return ret + 1;
    }
  if (wchar >= 0x80 && wchar <= 0x7ff)
    {
      *s++ = 0xc0 | ((wchar & 0x7c0) >> 6);
      *s = 0x80 | (wchar & 0x3f);
      return ret + 2;
    }
  if (wchar >= 0x800 && wchar <= 0xffff)
    {
      /* No UTF-16 surrogate handling in UCS-4 */
      if (sizeof(wchar_t) == 2 && wchar >= 0xd800 && wchar <= 0xdfff)
        {
          wint_t tmp;

          if (wchar <= 0xdbff)
            {
              /* First half of a surrogate pair.  Store the state and
                 return ret + 0. */
              tmp = ((wchar & 0x3ff) << 10) + 0x10000;
              state->__value.__wchb[0] = (tmp >> 16) & 0xff;
              state->__value.__wchb[1] = (tmp >> 8) & 0xff;
              state->__count = -4;
              *s = (0xf0 | ((tmp & 0x1c0000) >> 18));
              return ret;
            }
          if (state->__count == -4)
            {
              /* Second half of a surrogate pair.  Reconstruct the full
                 Unicode value and return the trailing three bytes of
                 the UTF-8 character. */
              tmp = (state->__value.__wchb[0] << 16)
                    | (state->__value.__wchb[1] << 8)
                    | (wchar & 0x3ff);
              state->__count = 0;
              *s++ = 0xf0 | ((tmp & 0x1c0000) >> 18);
              *s++ = 0x80 | ((tmp & 0x3f000) >> 12);
              *s++ = 0x80 | ((tmp & 0xfc0) >> 6);
              *s = 0x80 | (tmp & 0x3f);
              return 4;
            }
          /* Otherwise translate into CESU-8 value. */
        }
      *s++ = 0xe0 | ((wchar & 0xf000) >> 12);
      *s++ = 0x80 | ((wchar & 0xfc0) >> 6);
      *s = 0x80 | (wchar & 0x3f);
      return ret + 3;
    }
  if (wchar >= 0x10000 && wchar <= 0x10ffff)
    {
      *s++ = 0xf0 | ((wchar & 0x1c0000) >> 18);
      *s++ = 0x80 | ((wchar & 0x3f000) >> 12);
      *s++ = 0x80 | ((wchar & 0xfc0) >> 6);
      *s = 0x80 | (wchar & 0x3f);
      return 4;
    }

  _set_errno(EILSEQ);
  return -1;
}

/* Cygwin defines its own doublebyte charset conversion functions
   because the underlying OS requires wchar_t == UTF-16. */
#ifndef __CYGWIN__
int
__sjis_wctomb (char *s, wchar_t _wchar, mbstate_t *state)
{
  wint_t wchar = _wchar;
  unsigned char char2 = (unsigned char)wchar;
  unsigned char char1 = (unsigned char)(wchar >> 8);

  if (s == NULL)
    return 0;  /* not state-dependent */

  if (char1 != 0x00)
    {
      /* first byte is non-zero..validate multi-byte char */
      if (_issjis1(char1) && _issjis2(char2))
        {
          *s++ = (char)char1;
          *s = (char)char2;
          return 2;
        }
      else
        {
          _set_errno(EILSEQ);
          return -1;
        }
    }
  *s = (char)wchar;
  return 1;
}

int
__eucjp_wctomb (char *s, wchar_t _wchar, mbstate_t *state)
{
  wint_t wchar = _wchar;
  unsigned char char2 = (unsigned char)wchar;
  unsigned char char1 = (unsigned char)(wchar >> 8);

  if (s == NULL)
    return 0;  /* not state-dependent */

  if (char1 != 0x00)
    {
      /* first byte is non-zero..validate multi-byte char */
      if (_iseucjp1(char1) && _iseucjp2(char2))
        {
          *s++ = (char)char1;
          *s = (char)char2;
          return 2;
        }
      else if (_iseucjp2(char1) && _iseucjp2(char2 | 0x80))
        {
          *s++ = (char)0x8f;
          *s++ = (char)char1;
          *s = (char)(char2 | 0x80);
          return 3;
        }
      else
        {
          _set_errno(EILSEQ);
          return -1;
        }
    }
  *s = (char)wchar;
  return 1;
}

int
__jis_wctomb (char *s, wchar_t _wchar, mbstate_t *state)
{
  wint_t wchar = _wchar;
  int cnt = 0;
  unsigned char char2 = (unsigned char)wchar;
  unsigned char char1 = (unsigned char)(wchar >> 8);

  if (s == NULL)
    return 1;  /* state-dependent */

  if (char1 != 0x00)
    {
      /* first byte is non-zero..validate multi-byte char */
      if (_isjis(char1) && _isjis(char2))
        {
          if (state->__state == 0)
            {
              /* must switch from ASCII to JIS state */
              state->__state = 1;
              *s++ = ESC_CHAR;
              *s++ = '$';
              *s++ = 'B';
              cnt = 3;
            }
          *s++ = (char)char1;
          *s = (char)char2;
          return cnt + 2;
        }
      _set_errno(EILSEQ);
      return -1;
    }
  if (state->__state != 0)
    {
      /* must switch from JIS to ASCII state */
      state->__state = 0;
      *s++ = ESC_CHAR;
      *s++ = '(';
      *s++ = 'B';
      cnt = 3;
    }
  *s = (char)char2;
  return cnt + 1;
}
#endif /* !__CYGWIN__ */

#ifdef _MB_EXTENDED_CHARSETS_ISO
static int
___iso_wctomb (char *s, wchar_t _wchar, int iso_idx, mbstate_t *state)
{
  wint_t wchar = _wchar;

  if (s == NULL)
    return 0;

  /* wchars <= 0x9f translate to all ISO charsets directly. */
  if (wchar >= 0xa0)
    {
      if (iso_idx >= 0)
        {
          unsigned char mb;

          for (mb = 0; mb < 0x60; ++mb)
            if (__iso_8859_conv[iso_idx][mb] == wchar)
              {
                *s = (char)(mb + 0xa0);
                return 1;
              }
          _set_errno(EILSEQ);
          return -1;
        }
    }

  if ((size_t)wchar >= 0x100)
    {
      _set_errno(EILSEQ);
      return -1;
    }

  *s = (char)wchar;
  return 1;
}

int __iso_8859_1_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, -1, state); }
int __iso_8859_2_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 0, state); }
int __iso_8859_3_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 1, state); }
int __iso_8859_4_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 2, state); }
int __iso_8859_5_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 3, state); }
int __iso_8859_6_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 4, state); }
int __iso_8859_7_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 5, state); }
int __iso_8859_8_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 6, state); }
int __iso_8859_9_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 7, state); }
int __iso_8859_10_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 8, state); }
int __iso_8859_11_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 9, state); }
int __iso_8859_13_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 10, state); }
int __iso_8859_14_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 11, state); }
int __iso_8859_15_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 12, state); }
int __iso_8859_16_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___iso_wctomb (s, _wchar, 13, state); }

static wctomb_p __iso_8859_wctomb[17] = {
  NULL,
  __iso_8859_1_wctomb,
  __iso_8859_2_wctomb,
  __iso_8859_3_wctomb,
  __iso_8859_4_wctomb,
  __iso_8859_5_wctomb,
  __iso_8859_6_wctomb,
  __iso_8859_7_wctomb,
  __iso_8859_8_wctomb,
  __iso_8859_9_wctomb,
  __iso_8859_10_wctomb,
  __iso_8859_11_wctomb,
  NULL,                 /* No ISO 8859-12 */
  __iso_8859_13_wctomb,
  __iso_8859_14_wctomb,
  __iso_8859_15_wctomb,
  __iso_8859_16_wctomb
};

/* val *MUST* be valid!  All checks for validity are supposed to be
   performed before calling this function. */
wctomb_p
__iso_wctomb (int val)
{
  return __iso_8859_wctomb[val];
}
#endif /* _MB_EXTENDED_CHARSETS_ISO */

#ifdef _MB_EXTENDED_CHARSETS_WINDOWS
static int
___cp_wctomb (char *s, wchar_t _wchar, int cp_idx, mbstate_t *state)
{
  wint_t wchar = _wchar;

  if (s == NULL)
    return 0;

  if (wchar >= 0x80)
    {
      if (cp_idx >= 0)
        {
          unsigned char mb;

          for (mb = 0; mb < 0x80; ++mb)
            if (__cp_conv[cp_idx][mb] == wchar)
              {
                *s = (char)(mb + 0x80);
                return 1;
              }
          _set_errno(EILSEQ);
          return -1;
        }
    }

  if ((size_t)wchar >= 0x100)
    {
      _set_errno(EILSEQ);
      return -1;
    }

  *s = (char)wchar;
  return 1;
}

static int __cp_437_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 0, state); }
static int __cp_720_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 1, state); }
static int __cp_737_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 2, state); }
static int __cp_775_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 3, state); }
static int __cp_850_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 4, state); }
static int __cp_852_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 5, state); }
static int __cp_855_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 6, state); }
static int __cp_857_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 7, state); }
static int __cp_858_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 8, state); }
static int __cp_862_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 9, state); }
static int __cp_866_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 10, state); }
static int __cp_874_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 11, state); }
static int __cp_1125_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 12, state); }
static int __cp_1250_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 13, state); }
static int __cp_1251_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 14, state); }
static int __cp_1252_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 15, state); }
static int __cp_1253_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 16, state); }
static int __cp_1254_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 17, state); }
static int __cp_1255_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 18, state); }
static int __cp_1256_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 19, state); }
static int __cp_1257_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 20, state); }
static int __cp_1258_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 21, state); }
static int __cp_20866_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 22, state); }
static int __cp_21866_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 23, state); }
static int __cp_101_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 24, state); }
static int __cp_102_wctomb (char *s, wchar_t _wchar, mbstate_t *state) { return ___cp_wctomb (s, _wchar, 25, state); }

static wctomb_p __cp_xxx_wctomb[26] = {
  __cp_437_wctomb,
  __cp_720_wctomb,
  __cp_737_wctomb,
  __cp_775_wctomb,
  __cp_850_wctomb,
  __cp_852_wctomb,
  __cp_855_wctomb,
  __cp_857_wctomb,
  __cp_858_wctomb,
  __cp_862_wctomb,
  __cp_866_wctomb,
  __cp_874_wctomb,
  __cp_1125_wctomb,
  __cp_1250_wctomb,
  __cp_1251_wctomb,
  __cp_1252_wctomb,
  __cp_1253_wctomb,
  __cp_1254_wctomb,
  __cp_1255_wctomb,
  __cp_1256_wctomb,
  __cp_1257_wctomb,
  __cp_1258_wctomb,
  __cp_20866_wctomb,
  __cp_21866_wctomb,
  __cp_101_wctomb,
  __cp_102_wctomb
};

/* val *MUST* be valid!  All checks for validity are supposed to be
   performed before calling this function. */
wctomb_p
__cp_wctomb (int val)
{
  return __cp_xxx_wctomb[__cp_val_index (val)];
}
#endif /* _MB_EXTENDED_CHARSETS_WINDOWS */
#endif /* _MB_CAPABLE */
; RUN: opt -loop-vectorize -pass-remarks-analysis=loop-vectorize < %s 2>&1 | FileCheck %s

;  1  extern int map[];
;  2  extern int out[];
;  3
;  4  void f(int a, int n) {
;  5    for (int i = 0; i < n; ++i) {
;  6      out[i] = a;
;  7      a = map[a];
;  8    }
;  9  }

; CHECK: remark: /tmp/s.c:5:3: loop not vectorized: value that could not be identified as reduction is used outside the loop

; %a.addr.08 is the phi corresponding to the remark.  It does not have debug
; location attached.  In this case we should use the debug location of the
; loop rather than emitting <unknown>:0:0:

; ModuleID = '/tmp/s.c'
source_filename = "/tmp/s.c"
target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"

@out = external local_unnamed_addr global [0 x i32], align 4
@map = external local_unnamed_addr global [0 x i32], align 4

; Function Attrs: norecurse nounwind ssp uwtable
define void @f(i32 %a, i32 %n) local_unnamed_addr #0 !dbg !6 {
entry:
  %cmp7 = icmp sgt i32 %n, 0, !dbg !8
  br i1 %cmp7, label %for.body.preheader, label %for.cond.cleanup, !dbg !9

for.body.preheader:                               ; preds = %entry
  %wide.trip.count = zext i32 %n to i64, !dbg !9
  br label %for.body, !dbg !10

for.cond.cleanup:                                 ; preds = %for.body, %entry
  ret void, !dbg !11

for.body:                                         ; preds = %for.body.preheader, %for.body
  %indvars.iv = phi i64 [ %indvars.iv.next, %for.body ], [ 0, %for.body.preheader ]
  %a.addr.08 = phi i32 [ %0, %for.body ], [ %a, %for.body.preheader ]
  %arrayidx = getelementptr inbounds [0 x i32], [0 x i32]* @out, i64 0, i64 %indvars.iv, !dbg !10
  store i32 %a.addr.08, i32* %arrayidx, align 4, !dbg !12, !tbaa !13
  %idxprom1 = sext i32 %a.addr.08 to i64, !dbg !17
  %arrayidx2 = getelementptr inbounds [0 x i32], [0 x i32]* @map, i64 0, i64 %idxprom1, !dbg !17
  %0 = load i32, i32* %arrayidx2, align 4, !dbg !17, !tbaa !13
  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1, !dbg !9
  %exitcond = icmp eq i64 %indvars.iv.next, %wide.trip.count, !dbg !9
  br i1 %exitcond, label %for.cond.cleanup, label %for.body, !dbg !9, !llvm.loop !18
}

attributes #0 = { norecurse nounwind ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="core2" "target-features"="+cx16,+fxsr,+mmx,+sse,+sse2,+sse3,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }

!llvm.dbg.cu = !{!0}
!llvm.module.flags = !{!3, !4}
!llvm.ident = !{!5}

!0 = distinct !DICompileUnit(language: DW_LANG_C99, file: !1, producer: "clang version 4.0.0 (trunk 281293) (llvm/trunk 281290)", isOptimized: true, runtimeVersion: 0, emissionKind: NoDebug, enums: !2)
!1 = !DIFile(filename: "/tmp/s.c", directory: "/tmp")
!2 = !{}
!3 = !{i32 2, !"Debug Info Version", i32 3}
!4 = !{i32 1, !"PIC Level", i32 2}
!5 = !{!"clang version 4.0.0 (trunk 281293) (llvm/trunk 281290)"}
!6 = distinct !DISubprogram(name: "f", scope: !1, file: !1, line: 4, type: !7, isLocal: false, isDefinition: true, scopeLine: 4, flags: DIFlagPrototyped, isOptimized: true, unit: !0, retainedNodes: !2)
!7 = !DISubroutineType(types: !2)
!8 = !DILocation(line: 5, column: 21, scope: !6)
!9 = !DILocation(line: 5, column: 3, scope: !6)
!10 = !DILocation(line: 6, column: 5, scope: !6)
!11 = !DILocation(line: 9, column: 1, scope: !6)
!12 = !DILocation(line: 6, column: 12, scope: !6)
!13 = !{!14, !14, i64 0}
!14 = !{!"int", !15, i64 0}
!15 = !{!"omnipotent char", !16, i64 0}
!16 = !{!"Simple C/C++ TBAA"}
!17 = !DILocation(line: 7, column: 9, scope: !6)
!18 = distinct !{!18, !9}
LONDON, July 26, 2013 (AFP) – Tour de France director Christian Prudhomme rejected a call Friday from a leading British politician to stage a parallel women’s version of the race. Harriet Harman, deputy leader of the main opposition Labour Party, wrote an open letter to Prudhomme last week urging him to look at staging a women’s event at next year’s Grand Depart, the opening stage of the tour, which is being staged in the northern English county of Yorkshire. Harman, who has campaigned for women’s rights throughout her career, saw her letter to Prudhomme backed by a 70,000 strong petition. But Prudhomme said simply bolting on a women’s race to a Tour that is already full to capacity was not practical. “It would have been better for (Harman) to talk to us at the end of one of the stages or after another race,” said Prudhomme on a visit to Yorkshire on Friday. “We are not the only organizers of cycling in the world. “Also, it would have been much easier to talk to us directly instead of a petition and (finding out by) opening your mailbox one morning and you don’t know what has happened. “We are open to everything. Having women’s races is very important for sure. (But) the Tour is huge and you cannot have it bigger and bigger and bigger down the road—it is impossible.”
The Other End of Sunset

Friday, May 07, 2010

Images, a song, and a scar

Came in close, I heard a voice
Standing stretching every nerve
Had to listen had no choice
I did not believe the information
I just had to trust imagination
-- Peter Gabriel

Hello, OtherEnders.

I was cleaning up a couple of old computers lying around my house. A couple of them were Jeanne's machines, from various times -- an old Mac she used to play games and a Windows machine she used for work and other miscellaneous stuff. I found a few pictures of her. Since I don't have very many, I'm pretty happy to have a few more. But I'm also pretty sad. Thought you might like to see them...

It's clearly impossible to take a good picture of yourself. I look awful -- what is that stupid facial hair thing? She's smiling her fake smile -- probably saying "oh shut up and smile" to me -- and she's wearing my sunglasses. I don't recognize where this photo was taken... I wonder what we were doing.

Jeanne loved to laugh. You can see that in this picture, of her with two of her team members at Schwab -- she's laughing almost hysterically. That's the JR I try to remember.

Don't be fooled by the radio,
the TV, or the magazines.
They show you photographs
of how your life should be.
They're just someone else's
fantasy.
--Styx

About Me

I'm a technology type. I have pointy hair. No, I am not the pointy-haired boss. I work in Hollywood, like Indian food, and am taller than I sound. I am partial to very brightly colored clothing, and think that there's always room for Jello(tm). I care a lot about politics. You might think you know my politics based on my favorite books, and you might be wrong. I grew up in a small town in Arkansas, went to college in Oklahoma, and graduate school at Princeton. Yes, I have a funny accent. My book (Getting Organized in the Google Era) is on sale, as of March, 2010.
29 F.Supp. 502 (1939)

KENEALY
v.
TEXAS CO.

District Court, S. D. New York.

October 5, 1939.

*503 Jacob Rassner, of New York City, for plaintiff.

Tompkins, Boal & Tompkins, of New York City, for defendant.

COXE, District Judge.

This is a motion by the plaintiff for an examination before trial of an officer of the defendant, and for the production for inspection of various documents, records and photographs, believed to be in the defendant's possession. The motion is made under Rules 26 and 34 of the Federal Rules of Civil Procedure, 28 U.S.C.A. following section 723c.

The action is brought under the Jones Act, 46 U.S.C.A. § 688, for damages for personal injuries alleged to have been sustained by the plaintiff while employed as a seaman on one of the defendant's vessels.

The notice of motion describes the particular information sought to be obtained by the examination, and specifies in somewhat general language the different documents, records and photographs which the plaintiff desires to have produced for inspection. The defendant does not challenge the right of the plaintiff to the examination but objects to being required to produce for inspection the following:

Item 1. All statements of fellow employees aboard the vessel as to the accident.

Item 3. All records in the logs including the medical log of the vessel.

Item 4. All photographs made by or on behalf of the defendant prior to the institution of the above entitled action.

Prior to the making of the present motion, the attorney for the plaintiff was served with a notice for the examination of the plaintiff before trial under Rule 26. The plaintiff failed to appear at the time set for this examination, and is still in default. The defendant insists, therefore, that any examination of the defendant should be deferred until the plaintiff has submitted himself for examination pursuant to the notice already given.
The contested portion of the motion is governed by Rule 34, which provides that "upon motion of any party showing good cause therefor * * * the court * * * may (1) order any party to produce and permit the inspection * * * of any designated documents, papers, books, * * * photographs, * * * not privileged, which constitute or contain evidence material to any matter involved in the action * * *". Under this rule, the moving party is required to make a showing of "good cause" in support of the motion. This means some adequate reason for the desired production and inspection. The court may then order the production and inspection of "designated documents, papers, books * * * photographs * *", provided they are "not privileged", and also provided they "constitute or contain evidence material to any matter involved in the action". The language of the rule is so clear it hardly admits of construction. "Designated" documents, etc., are those which can be identified with some reasonable degree of particularity. It was surely not intended by the use of the word "designated" to permit a roving inspection of a promiscuous mass of documents, etc., thought to be in the possession, custody or control of the opposing party. Piest v. Tide Water Oil Co., D.C., 26 F.Supp. 295. It is also plain that the only documents which the court may order produced for inspection are those which constitute or *504 contain material evidence in the case. This is in line with the settled law prior to the new rules. People ex rel. Lemon v. Supreme Court, 245 N.Y. 24, 156 N.E. 84, 52 A.L.R. 200; The Morro Castle[1], 1934. It has been recently held, also, in connection with Rule 34 that the materiality of the documents, etc., if challenged, must first be passed on by the court before the documents, etc., are submitted for inspection to the opposing party. United States v. Aluminum Co. of America, D.C., 26 F. Supp. 711. The defendant challenges item 1 of the notice only on the ground of materiality. 
It is unnecessary, therefore, to consider whether the documents specified have been sufficiently identified to be classed as "designated". Item 1 requests the production of "all statements of fellow employees aboard the vessel as to the accident". This is understood to refer to statements made after the accident by other employees of the defendant regarding their knowledge of the facts relating to the accident. Statements of this kind plainly do not "constitute or contain evidence material to any matter involved in the action"; they are not evidence but are at most merely memoranda available for use at the trial when the respective persons making the statements are called to testify. The plaintiff says that he should be permitted to have the statements in order that he may properly cross-examine the various witnesses as they are produced. This is substantially the argument advanced and rejected by Chief Judge Cardozo in People ex rel. Lemon v. Supreme Court, supra, and I do not think that it is even open under the language of Rule 34. I realize that there are expressions to the contrary in a number of recent District Court cases, Bough v. Lee, S.D.N.Y. March 28, 1939, 28 F.Supp. 673; Bough v. Lee, S.D.N.Y. June 24, 1939, 29 F.Supp. 498; Kulich v. Murray, S.D.N.Y. June 13, 1939, 28 F.Supp. 675; Price v. Levitt, E.D.N.Y. Aug. 18, 1939, 29 F.Supp. 164; and although I have great respect for the opinions of the judges who decided those cases, I still think that statements such as the ones now in question are without the letter as well as the spirit of Rule 34. It follows that the objection of the defendant to item 1 is sustained. The two remaining items objected to by the defendant require little discussion. The plaintiff is unquestionably entitled to the log records insofar as they relate to any issue in the case. As so limited, the objection to item 3 is overruled. Item 4 asks for the production of "all photographs made prior to the institution of the suit". 
This is entirely too broad, and should be limited to photographs of the particular place where the accident is alleged to have taken place. It has been held in this district that examinations under rule 26 should ordinarily take place in the order in which they are demanded. Bough v. Lee, D.C. S.D.N.Y. March 28, 1939, 28 F.Supp. 673; Grauer v. Schenley Products Co., Inc., D.C.S.D.N.Y. Nov. 10, 1938, 26 F.Supp. 768. This is not, however, an inflexible rule, and may be varied in particular cases. In the present case, I do not think it will serve any useful purpose to defer the examination of the defendant, or the production of the required documents. The defendant is still left with its remedy against the plaintiff under Rule 37(d) if there has been a wilful failure to appear after service of a proper notice. The motion of the plaintiff is granted, except with respect to items 1, 3 and 4; it is denied as to item 1; and is granted as to items 3 and 4 in limited form, as above indicated. NOTES [1] No opinion for publication.
Q: How can I add an extra operator overload for a class outside of its original header?

I am using DirectXMath.h, where all multiplications and operations are done with XMVECTOR (a SIMD wrapper) and XMFLOAT3 (which contains 3 floats) is used for storage. However, in this specific piece of code I really need to add a * operator for XMFLOAT3 (for both XMFLOAT3*XMFLOAT3 and XMFLOAT3*float). Can I do that? Or must I tamper with the DirectXMath headers in the SDK?

A: Yes, you can define your overload, but only as a free function, not as a member function. So you could do something like this (assuming this is the overload you're interested in):

    XMFLOAT3 operator*(XMFLOAT3 a, XMFLOAT3 b)
    {
        // whatever
    }

A: Of course, in C++ you can provide an operator overload as a free function, like this:

    XMFLOAT3 operator*(XMFLOAT3 left, XMFLOAT3 right)
    {
        ...
    }

If this will be employed in performance-sensitive code, check whether pass-by-value vs pass-by-const reference makes any difference in emitted code/performance.

A:

    XMFLOAT3 operator*(const XMFLOAT3& a, const XMFLOAT3& b){
        XMFLOAT3 ans;
        ...
        return ans;
    }

Mind you that this returns a copy of the answer and does not modify either of the two operands. This is true to the semantics of the * operator.
---
abstract: 'In this work we present a mimetic spectral element discretization for the 2D incompressible Navier-Stokes equations that in the limit of vanishing dissipation exactly preserves mass, kinetic energy, enstrophy and total vorticity on unstructured grids. The essential ingredients to achieve this are: (i) a velocity-vorticity formulation in rotational form, (ii) a sequence of function spaces capable of exactly satisfying the divergence free nature of the velocity field, and (iii) a conserving time integrator. Proofs for the exact discrete conservation properties are presented together with numerical test cases on highly irregular grids.'
address:
- 'Eindhoven University of Technology, Department of Mechanical Engineering, P.O. Box 513, 5600 MB Eindhoven, The Netherlands'
- 'Delft University of Technology, Faculty of Aerospace Engineering, P.O. Box 5058, 2600 GB Delft, The Netherlands'
author:
- 'A. Palha'
- 'M. Gerritsma'
bibliography:
- './library\_clean.bib'
title: 'A mass, energy, enstrophy and vorticity conserving (MEEVC) mimetic spectral element discretization for the 2D incompressible Navier-Stokes equations'
---

energy conserving discretization, mimetic discretization, enstrophy conserving discretization, spectral element method, incompressible Navier-Stokes equations

Introduction
============

Relevance of structure preserving methods
-----------------------------------------

Structure-preserving discretizations are known for their robustness and accuracy. They conserve fundamental properties of the equations (mass, momentum, kinetic energy, etc.). For example, it is well known that the application of conventional discretization techniques to inviscid flows generates artificial energy dissipation that pollutes the energy spectrum. For these reasons, structure-preserving discretizations have recently gained popularity.
For a long time, in the development of general circulation models used in weather forecast, it has been noticed that care must be taken in the construction of the discretization of physical laws. Phillips [@Phillips1959] verified that the long-time integration of non-linear convection terms resulted in the breakdown of numerical simulations independently of the time step, due to the amplification of weak instabilities. Later, Arakawa [@Arakawa1966] proved that such instabilities can be avoided if the integral of the square of the advected quantity is conserved (kinetic energy, enstrophy in 2D, for example). Staggered finite difference discretizations that avoid these instabilities have been introduced both by Harlow and Welch [@Harlow1965] and Arakawa and his collaborators [@Arakawa1977; @Mesinger1976]. Lilly [@Lilly1965] showed that these discretizations could conserve momentum, energy and circulation. At the same time, Piacseck and Williams [@Piacsek1970] connected the conservation of energy with the preservation of skew-symmetry of the convection operator at the discrete level. These ideas of staggering the discrete physical quantities and of preserving the skew-symmetry of the convection operator have been successfully explored by several authors aiming to construct more robust and accurate numerical methods. In this work we focus on the incompressible Navier-Stokes equations, particularly convection-dominated flow problems (e.g. wind turbine wake aerodynamics). It is widely known that in the absence of external forces and viscosity these equations contain important symmetries or invariants, e.g. [@Arnold1966; @Arnold1992; @Majda2001; @Foias2001], such as conservation of kinetic energy. A straightforward discretization using standard methods does not guarantee the conservation of these invariants. 
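The link between skew-symmetry and discrete energy conservation noted above can be made explicit with a generic semi-discrete model problem (an illustration for the reader, not the scheme of any particular cited work): if the spatial discretization of the convective term yields $\frac{d\mathbf{u}}{dt} = -C(\mathbf{u})\,\mathbf{u}$ with $C(\mathbf{u})^{T} = -C(\mathbf{u})$, then
$$\frac{d}{dt}\left(\tfrac{1}{2}\,\mathbf{u}^{T}\mathbf{u}\right) = \mathbf{u}^{T}\,\frac{d\mathbf{u}}{dt} = -\,\mathbf{u}^{T} C(\mathbf{u})\,\mathbf{u} = 0,$$
since the scalar $\mathbf{u}^{T} C(\mathbf{u})\,\mathbf{u}$ equals its own transpose and hence its own negative. Any failure of skew-symmetry at the discrete level therefore acts as a spurious source or sink of kinetic energy.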
Discrete energy conservation is important from a physical point of view, especially for turbulent flow simulations when Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) are used. In these cases, the accurate reproduction of the energy spectrum is essential to generate the associated energy cascade. The numerical diffusion inherent (and sometimes essential) to many discretizations can dominate both the molecular diffusion contribution (DNS) and the sub-grid model contribution (LES). This negatively affects the energy spectrum and consequently the energy cascade. Energy-conserving discretizations ensure that all diffusion is modelled and not a product of discretization errors. For this reason, many authors have shown that energy-conserving schemes are essential both for DNS (e.g. [@Perot1993; @Verstappen1995; @Le1997; @Verstappen1998; @Verstappen2003]) and LES simulations (e.g. [@Mahesh2004; @Mittal1997; @Benhamadouche2002; @Nagarajan2003; @Felten2006; @Ham2007]). In addition, discrete kinetic energy conservation provides a non-linear stability bound to the solution (e.g. [@Sadourny1975; @Sanderse2013]). This bound relaxes the characteristic stability constraints of standard methods, allowing the choice of mesh and time step size to be based only on accuracy requirements. This is particularly relevant for LES where mesh sizes have to be kept as large as possible, due to the still significant computational effort required for this approach. Energy-conserving methods generate well-behaved global errors and adequate physical behaviour, even on coarse meshes. In two-dimensions, enstrophy is another conserved quantity of the flow when external forces and viscosity are not present. The classical Arakawa scheme [@Arakawa1966] and derived schemes exploit both the conservation of energy and enstrophy. By doing so, these methods have far better long time simulation properties than competing methods that do not conserve these two quantities. 
Overview of structure preserving methods
----------------------------------------

Two of the most used approaches to construct energy preserving discretizations are: (i) staggering and (ii) skew-symmetrization of the convective operator. Most staggered grid methods date back to the pioneering work of Harlow and Welch [@Harlow1965], and Arakawa and colleagues [@Arakawa1977; @Mesinger1976]. These methods employ a discretization that distributes the different physical quantities (pressure, velocity, vorticity, etc.) at different locations in the mesh (vertices, faces, cell centres). It can be shown that, by doing so, important conservation properties can be maintained. Due to its success, much attention has been given to this approach and several extensions to the original work have been made. Morinishi [@Morinishi1998] has developed a high-order version on Cartesian grids. Extensions to non-uniform meshes have been presented for example in [@Wesseling1999] for structured quadrilateral meshes and by several authors [@Apanovich1988; @Choudhury1990; @Perot2000; @Mullen2009] for simplicial meshes.

The advantages of exploiting the skew-symmetry of the convection operator at the discrete level were first identified by Piacsek [@Piacsek1970]. Following his ideas, Verstappen and Veldman [@Verstappen; @Verstappen1998; @Verstappen2003] and later Knikker [@Knikker2009] constructed high order discretizations on Cartesian grids with substantially improved properties. Kok [@Kok2009] presents another formulation valid on curvilinear grids. The extension to simplicial grids has been reported for example in the works of Vasilyev [@Vasilyev2000], Veldman [@Veldman2008] and van’t Hof [@VanHof2012]. At another level, Perot [@perot43discrete] suggested that a connection exists between the skew-self-adjoint formulation presented by Tadmor [@Tadmor1984] and box schemes.
Box schemes or Keller box schemes are a class of numerical methods originally introduced by Wendroff [@wendroff1960] for hyperbolic problems and later popularized by Keller [@keller1971; @keller1978] for parabolic problems. This method is face-based and space and time are coupled by introducing a space-time control volume. This method is known to be physically accurate and successful application has been reported in [@Croisille2002; @Croisille2005; @Gustafsson2006; @Ranjan2013]. More recently, this method has been shown to be multisymplectic by Ascher [@Ascher2005] and Frank [@Frank2006]. Perot, [@Perot2007], also established a relation between box schemes and discrete calculus and generalized it to arbitrary meshes. Although most of the literature on the simulation of incompressible Navier-Stokes flows has been performed using finite difference and finite volume methods, finite element methods have also been actively used and shared many of the developments already discussed for finite differences and finite volumes. One of the initial challenges in using finite elements for flow simulations lies in the fact that specific finite element subspaces must be used to discretize the different physical quantities. The finite element subspaces must satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) (or inf-sup) compatibility condition (see [@brezzi1991mixed]), otherwise instability ensues. Several families of suitable finite elements have been proposed in the literature, the most common being the Taylor-Hood family (Taylor and Hood [@TaylorHood1973]) and the Crouzeix-Raviart family (Crouzeix and Raviart [@Crouzeix]). The underlying idea behind these different finite element families is to use different polynomial orders for velocity and pressure. In essence, this is intimately related to the staggering approach already discussed in the context of finite differences and finite volumes. 
Staggering is explicitly mentioned in some finite element work, for example [@KoprivaKolias; @Liu2007; @Liu2008; @Chung2012; @Tavelli2015]. Another important aspect when using finite elements is the weak formulation used for the convection term. Some of these forms (rotational and skew-symmetric) can retain important symmetries of the original system of equations; this has been reported for example in [@Guevremont1990; @Blaisdell1996; @Ronquist1996; @Wilhelm2000]. Ensuring conservation in finite element discretizations is not straightforward; nevertheless, examples in the literature exist, e.g. [@Liu2000; @Bernsen2006]. Also within the context of finite elements, Rebholz and co-authors, e.g. [@Rebholz2007; @Olshanskii2010], have developed several velocity-vorticity discretizations for the 3D Navier-Stokes equations. These discretizations are capable of conserving both energy and helicity.

One final aspect when discussing existing structure preserving methods is time integration. Many of the conserving methods reported in the literature present proofs regarding spatial discretization. Nevertheless, when discrete time evolution is taken into account, many time integrators destroy the nice properties of the spatial discretization. The relevance of time integration for the construction of structure preserving discretizations has been analyzed by Sanderse [@Sanderse2013].

Overview of mimetic discretizations
-----------------------------------

Over the years numerical analysts have developed numerical schemes which preserve some of the structure of the differential models they aim to approximate, so in that respect the whole structure preserving idea is not new. One of the contributions of mimetic methods is the identification of the proper language in which to encode these structures/symmetries: the language of differential geometry.
Another novel aspect of mimetic discretizations is the identification of the metric-free part of differential models, which can (and should) be conveniently described in terms of algebraic topology. A general introduction and overview to spatial and temporal mimetic/geometric methods can be found in [@Christiansen2011; @perot43discrete; @Budd2003; @Hairer2006]. The relation between differential geometry and algebraic topology in physical theories was first established by Tonti [@tonti1975formal]. Around the same time Dodziuk [@Dodziuk76] set up a finite difference framework for harmonic functions based on Hodge theory. Both Tonti and Dodziuk introduce differential forms and cochain spaces as the building blocks for their theory. The relation between differential forms and cochains is established by the Whitney map ($k$-cochains $\rightarrow$ $k$-forms) and the de Rham map ($k$-forms $\rightarrow$ $k$-cochains). The interpolation of cochains to differential forms on a triangular grid was already established by Whitney, [@Whitney57]. These interpolatory forms are now known as the [*Whitney forms*]{}. Hyman and Scovel [@HymanScovel90] set up the discrete framework in terms of cochains, which are the natural building blocks of finite volume methods. Later Bochev and Hyman [@bochev2006principles] extended this work and derived discrete operators such as the discrete wedge product, the discrete codifferential, the discrete inner product, etc. In a finite difference/volume context Robidoux, Hyman, Steinberg and Shashkov, [@HymanShashkovSteinberg97; @HymanShashkovSteinberg2002; @HYmanSteinberg2004; @RobidouxAdjointGradients1996; @RobidouxThesis; @bookShashkov; @Steinberg1996; @SteibergZingano2009] used symmetry considerations to discretize diffusion problems on rough grids and with non-smooth anisotropic diffusion coefficients. In a more recent paper by Robidoux and Steinberg [@Robidoux2011] a discrete vector calculus in a finite difference setting is presented. 
Here the differential operators grad, curl and div are exactly represented at the discrete level and the numerical approximations are all contained in the constitutive relations, which are already polluted by modeling and experimental error. For mimetic finite differences, see also Brezzi et al. [@BrezziBuffaLipnikov2009; @brezzi2010]. The application of mimetic ideas to unstructured staggered grids has been extensively studied by Perot, [@Perot2000; @ZhangSchmidtPerot2002; @perot2006mimetic; @PerotSubramanian2007a; @PerotSubramanian2007]. Especially in [@perot43discrete] where the rationale of preserving symmetries in numerical algorithms is lucidly described. The most *geometric approach* is described in the work by Desbrun et al. [@desbrun2005discrete; @ElcottTongetal2007; @MullenCraneetal2009; @PavlovMullenetal2010] and the thesis by Hirani [@Hirani_phd_2003]. The *Japanese papers* by Bossavit, [@bossavit_japan_computational_1; @bossavit_japan_computational_2; @bossavit_japan_computational_3; @bossavit_japan_computational_4; @bossavit_japan_computational_5], serve as an excellent introduction and motivation for the use of differential forms in the description of physics and the use in numerical modeling. The field of application is electromagnetism, but these papers are sufficiently general to extend to other physical theories. In a series of papers by Arnold, Falk and Winther, [@arnold:Quads; @arnold2006finite; @arnold2010finite], a finite element exterior calculus framework is developed. Higher order methods are also described by Rapetti [@Rapetti2007; @Rapetti2009] and Hiptmair [@hiptmair2001]. Possible extensions to spectral methods were described by Robidoux, [@robidoux-polynomial]. A different approach for constructing arbitrary order mimetic finite elements has been proposed by the authors [@Palha2014; @gerritsmaicosahom2012; @Rebelo2014; @palhaAdvectionIcosahom2014; @kreeft::stokes; @bouman::icosahom2009; @palha::icosahom2009]. 
Extensions of these ideas to polyhedral meshes have been proposed by Ern, Bonelle and co-authors in [@Bonelle2015; @Bonelle2014; @Bonelle2015a; @Bonelle2016] and by Brezzi and co-authors in [@BeiraodaVeiga2014; @Brezzi2014; @BeiraodaVeiga2016; @DaVeiga2015]. These approaches provide more geometrical flexibility while maintaining fundamental structure preserving properties. Mimetic isogeometric discretizations have been introduced by Buffa et al. [@BuffaDeFalcoSangalli2011], Evans and Hughes [@Evans2013a], and Hiemstra et al. [@Hiemstra2014]. Another approach develops a discretization of the physical field laws based on a discrete variational principle for the discrete Lagrangian action. This approach has been used in the past to construct variational integrators for Lagrangian systems, e.g. [@Kouranbaeva2000; @Marsden2003]. Recently, Kraus and Maj [@Kraus2015] have used the method of formal Lagrangians to derive generalized Lagrangians for non-Lagrangian systems of equations. This makes it possible to apply variational techniques to construct structure preserving discretizations on a much wider range of systems.

Outline of paper
----------------

In this work we present a new exact mass, energy, enstrophy and vorticity conserving (MEEVC) mimetic spectral element solver for the 2D incompressible Navier-Stokes equations. The essential ingredients to achieve this are: (i) a velocity-vorticity formulation in rotational form, (ii) a sequence of function spaces capable of exactly satisfying the divergence free nature of the velocity field, and (iii) a conserving time integrator. This results in a set of two decoupled equations: one for the evolution of velocity and another one for the evolution of vorticity.

In [Section \[sec::spatial\_discretization\]]{} we present the spatial discretization.
We start by introducing the $({\vec{u}},{\omega})$ formulation based on the rotational form in [Section \[subsec::the\_v\_omega\_formulation\_in\_the\_rotational\_form\]]{} and then in [Section \[subsec::finite\_element\_discretization\]]{} the finite element discretization is discussed. This is followed by the temporal discretization in [Section \[sec::temporal\_discretization\]]{}. Once we have introduced the numerical discretization, its conservation properties are proven in [Section \[sec::conservation\_properties\_and\_time\_reversibility\]]{}. In [Section \[sec::numerical\_test\_cases\]]{} the method is applied and tested on two test cases. We start by testing the accuracy of the method with a Taylor-Green vortex, for which the analytical solution is known. We finish with an inviscid shear layer roll-up test to illustrate the conservation properties of the method. In [Section \[sec::conclusions\]]{} we conclude with a discussion of the merits and limitations of this solver and future extensions.

Spatial discretization {#sec::spatial_discretization}
======================

The $({\vec{u}},{\omega})$ formulation in rotational form {#subsec::the_v_omega_formulation_in_the_rotational_form}
---------------------------------------------------------

The evolution of 2D viscous incompressible flows is governed by the Navier-Stokes equations, which are most commonly expressed as a set of conservation laws for momentum and mass involving the velocity ${\vec{u}}$ and pressure $p$: $$\begin{dcases} \frac{\partial{\vec{u}}}{\partial t} + \left({\vec{u}}\cdot\nabla\right){\vec{u}}+ \nabla p = \nu\Delta{\vec{u}}+ \vec{s}, \\ \nabla\cdot{\vec{u}}= 0, \end{dcases} \label{eq::ns_convective_form}$$ with $\nu$ the kinematic viscosity, $\vec{s}$ the body force per unit mass, and $\Delta = \nabla\cdot\nabla$ the Laplace operator. These equations are valid on the fluid domain $\Omega$, together with suitable initial and boundary conditions. 
In this work we consider only periodic boundary conditions and we set $\vec{s}=0$. The form of the Navier-Stokes equations presented in is the so-called *convective form*. Its name stems from the particular form of the nonlinear term $ \left({\vec{u}}\cdot\nabla\right){\vec{u}}$, which underlines its convective nature. This form is not unique. Using well-known vector calculus identities it is possible to rewrite the nonlinear term in three other forms, see for example Zang [@Zang1991], Morinishi [@Morinishi1998], and R[ø]{}nquist [@Ronquist1996]. The first alternative, called *divergence form* or *conservative form*, expresses the nonlinear term as a divergence: $$\begin{dcases} \frac{\partial{\vec{u}}}{\partial t} + \nabla\cdot\left({\vec{u}}\otimes{\vec{u}}\right)+ \nabla p = \nu\Delta{\vec{u}}, \\ \nabla\cdot{\vec{u}}= 0. \end{dcases} \label{eq::ns_divergence_form}$$ The second alternative is obtained as a linear combination of the convective and divergence forms, producing a skew-symmetric nonlinear term, hence the name *skew-symmetric form*: $$\begin{dcases} \frac{\partial{\vec{u}}}{\partial t} + \frac{1}{2} \left({\vec{u}}\cdot\nabla\right){\vec{u}}+ \frac{1}{2}\nabla\cdot\left({\vec{u}}\otimes{\vec{u}}\right)+ \nabla p = \nu\Delta{\vec{u}}, \\ \nabla\cdot{\vec{u}}= 0. \end{dcases} \label{eq::ns_skew_symmetric_form}$$ The third and final alternative we consider here, the *rotational form*, makes use of the vorticity ${\omega}:=\nabla\times{\vec{u}}$: $$\begin{dcases} \frac{\partial{\vec{u}}}{\partial t} + \omega\times{\vec{u}}+ \nabla {\bar{p}}= \nu\Delta{\vec{u}}, \\ \nabla\cdot{\vec{u}}= 0, \end{dcases} \label{eq::ns_rotational_form}$$ where the static pressure $p$ has been replaced by the total pressure ${\bar{p}}:= \frac{1}{2}{\vec{u}}\cdot{\vec{u}}+ p$. A different, but equivalent, approach used to describe fluid flow problems resorts to expressing the momentum equation in terms of the vorticity ${\omega}$. 
By taking the curl of the momentum equation and using the kinematic definition ${\omega}:= \nabla\times{\vec{u}}$ we can obtain the flow equations based on vorticity transport: $$\begin{dcases} \frac{\partial{\omega}}{\partial t} + \frac{1}{2}\left({\vec{u}}\cdot\nabla\right){\omega}+ \frac{1}{2}\nabla\cdot\left({\vec{u}}\,{\omega}\right) = \nu\Delta{\omega}, \\ \nabla\cdot{\vec{u}}= 0, \\ {\omega}= \nabla\times{\vec{u}}\,. \end{dcases} \label{eq::ns_vorticity_transport}$$ This velocity-vorticity $({\vec{u}},{\omega})$ formulation of the Navier-Stokes equations is of particular interest for vortex-dominated flows, see for example Gatski [@Gatski1991] for an overview and Daube [@Daube1992] and Clercx [@Clercx1997a] for applications. All these different ways of expressing the governing equations of fluid flow are equivalent at the continuous level. As stated before, one can start with and derive all other sets of equations simply by using well-known vector calculus identities and definitions. The often overlooked aspect is that each of these formulations leads to a different discretization, with its own set of properties, e.g. [@Zang1991; @Gatski1991; @Ronquist1996; @Morinishi1998]. Only in the limit of vanishing mesh size ($h \rightarrow 0$) are the discretizations expected to be equivalent. This introduces an additional degree of freedom associated with the choice of formulation which, combined with the chosen discretization approach, determines the final properties of the method. In this work we construct a $({\vec{u}},{\omega})$ formulation by combining the rotational form with the vorticity transport equation , similar to the work of Benzi et al. [@Benzi2012] and Lee et al. 
[@Lee2011]: $$\begin{dcases} \frac{\partial{\vec{u}}}{\partial t} + \omega\times{\vec{u}}+ \nabla {\bar{p}}= -\nu\nabla\times{\omega}, \\ \frac{\partial{\omega}}{\partial t} + \frac{1}{2}\left({\vec{u}}\cdot\nabla\right){\omega}+ \frac{1}{2}\nabla\cdot\left({\vec{u}}\,{\omega}\right) = \nu\Delta{\omega}, \\ \nabla\cdot{\vec{u}}= 0\,, \end{dcases} \label{eq::ns_meevc_form}$$ where we use the vector calculus identity $\Delta{\vec{u}}= \nabla\left(\nabla\cdot{\vec{u}}\right) - \nabla\times\nabla\times{\vec{u}}$ to derive the equality $\Delta{\vec{u}}= -\nabla\times{\omega}$. An important aspect we wish to stress at this point is that although at the continuous level the kinematic definition ${\omega}:= \nabla\times{\vec{u}}$ is valid, at the discrete level it is not always guaranteed that this identity holds. In fact, in the discretization presented here this identity is satisfied only approximately. This, as will be seen, enables the construction of a mass, energy, enstrophy and vorticity conserving discretization.

Finite element discretization {#subsec::finite_element_discretization}
-----------------------------

In this work we set out to construct a finite element discretization for the Navier-Stokes equations as given by . In particular we use a mixed finite element formulation; for more details on mixed finite elements see for example [@brezzi1991mixed]. 
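The vector calculus identities invoked above can be checked numerically. The following sketch (not part of the original method; the stream function $\psi = \sin x \cos y$ is an arbitrary smooth periodic test field) verifies with central finite differences that a velocity field derived from a stream function is divergence free and satisfies $\Delta{\vec{u}} = -\nabla\times{\omega}$ in 2D:

```python
import math

# Hypothetical test field (not from the paper): velocity from the stream
# function psi(x, y) = sin(x)*cos(y), so that div(u, v) = 0 identically.
def u(x, y):
    return -math.sin(x) * math.sin(y)   # u = d(psi)/dy

def v(x, y):
    return -math.cos(x) * math.cos(y)   # v = -d(psi)/dx

H = 1e-5  # step for the central finite differences

def dx(f, x, y):
    return (f(x + H, y) - f(x - H, y)) / (2 * H)

def dy(f, x, y):
    return (f(x, y + H) - f(x, y - H)) / (2 * H)

def laplacian(f, x, y):
    return ((f(x + H, y) - 2 * f(x, y) + f(x - H, y))
            + (f(x, y + H) - 2 * f(x, y) + f(x, y - H))) / H**2

def omega(x, y):
    # scalar vorticity in 2D: omega = dv/dx - du/dy
    return dx(v, x, y) - dy(u, x, y)

for x, y in [(0.3, 1.1), (-1.2, 0.7)]:
    # div u = 0
    assert abs(dx(u, x, y) + dy(v, x, y)) < 1e-4
    # Laplacian u = -curl(omega), with curl(w) = (dw/dy, -dw/dx) in 2D
    assert abs(laplacian(u, x, y) + dy(omega, x, y)) < 1e-4
    assert abs(laplacian(v, x, y) - dx(omega, x, y)) < 1e-4
```

The tolerances reflect the finite-difference truncation and roundoff error, not the identities themselves, which hold exactly for any smooth divergence-free field.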
The first step for developing this discretization is the construction of the weak form of : $$\begin{dcases} \text{Find } {\vec{u}}\in {H(\mathrm{div},\Omega)}, {\bar{p}}\in {L^{2}(\Omega)}\text{ and } {\omega}\in {H(\mathrm{curl},\Omega)}\text{ such that:}\\ \langle\frac{\partial{\vec{u}}}{\partial t},{\vec{v}}\rangle_{\Omega} + \langle\omega\times{\vec{u}},{\vec{v}}\rangle_{\Omega} - \langle {\bar{p}},\nabla\cdot{\vec{v}}\rangle_{\Omega} = -\nu\langle\nabla\times{\omega},{\vec{v}}\rangle_{\Omega},\quad \forall {\vec{v}}\in {H(\mathrm{div},\Omega)}, \\ \langle\frac{\partial{\omega}}{\partial t},{\xi}\rangle_{\Omega} - \frac{1}{2}\langle{\omega},\nabla\cdot\left({\vec{u}}\,{\xi}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}}\,{\omega}\right),{\xi}\rangle_{\Omega} = \nu\langle\nabla\times{\omega},\nabla\times{\xi}\rangle_{\Omega}, \quad\forall {\xi}\in {H(\mathrm{curl},\Omega)}, \\ \langle\nabla\cdot{\vec{u}},{q}\rangle_{\Omega} = 0, \quad\forall {q}\in {L^{2}(\Omega)}\,, \end{dcases} \label{eq::ns_meevc_weak_form_continuous}$$ where we have used integration by parts and the periodic boundary conditions to obtain the identities $\langle {\bar{p}},\nabla\cdot{\vec{v}}\rangle_{\Omega} = - \langle \nabla{\bar{p}},{\vec{v}}\rangle_{\Omega}$, $\langle{\omega},\nabla\cdot\left({\vec{u}}\,{\xi}\right)\rangle_{\Omega} = -\langle\left({\vec{u}}\cdot\nabla\right){\omega},{\xi}\rangle_{\Omega}$ and $\langle\nabla\times{\omega},\nabla\times{\xi}\rangle_{\Omega} = \langle\Delta{\omega},{\xi}\rangle_{\Omega}$. The space ${L^{2}(\Omega)}$ corresponds to square integrable functions and the spaces ${H(\mathrm{div},\Omega)}$ and ${H(\mathrm{curl},\Omega)}$ contain square integrable functions whose divergence and curl are also square integrable. 
The second step is the definition of conforming finite dimensional function spaces, where we will seek our discrete solutions for velocity ${\vec{u}_{h}}$, pressure ${\bar{p}_{h}}$ and vorticity ${\omega_{h}}$: $${\vec{u}_{h}}\in {U_{h}}\subset {H(\mathrm{div},\Omega)}, \quad {\bar{p}_{h}}\in {Q_{h}}\subset {L^{2}(\Omega)}\quad \mathrm{and} \quad {\omega_{h}}\in {W_{h}}\subset {H(\mathrm{curl},\Omega)}.$$ As usual, each of these finite dimensional function spaces, ${U_{h}}$, ${Q_{h}}$ and ${W_{h}}$, has an associated finite set of basis functions, ${\vec{\epsilon}^{\,U}_{i}}$, ${\epsilon^{Q}_{i}}$, ${\epsilon^{W}_{i}}$, such that $${U_{h}}= \mathrm{span}\{{\vec{\epsilon}^{\,U}_{1}}, \dots,{\vec{\epsilon}^{\,U}_{{d_{U}}}}\}, \quad {Q_{h}}= \mathrm{span}\{{\epsilon^{Q}_{1}}, \dots,{\epsilon^{Q}_{{d_{Q}}}}\}\quad\mathrm{and}\quad{W_{h}}= \mathrm{span}\{{\epsilon^{W}_{1}}, \dots,{\epsilon^{W}_{{d_{W}}}}\},$$ where ${d_{U}}$, ${d_{Q}}$ and ${d_{W}}$ denote the dimension of the discrete function spaces and therefore correspond to the number of degrees of freedom for each of the unknowns. As a consequence, the approximate solutions for velocity, pressure and vorticity can be expressed as a linear combination of these basis functions: $${\vec{u}_{h}}:= \sum_{i=1}^{{d_{U}}}{u_{i}}\,{\vec{\epsilon}^{\,U}_{i}}, \quad {\bar{p}_{h}}:= \sum_{i=1}^{{d_{Q}}}{p_{i}}\,{\epsilon^{Q}_{i}} \quad\mathrm{and}\quad {\omega_{h}}:= \sum_{i=1}^{{d_{W}}}{\omega_{i}}\,{\epsilon^{W}_{i}}, \label{eq:basis_expansion}$$ with ${u_{i}}$, ${p_{i}}$ and ${\omega_{i}}$ the degrees of freedom of velocity, total pressure and vorticity, respectively. Since the Navier-Stokes equations form a time-dependent set of equations, in general these coefficients will be time dependent, ${u_{i}} = {u_{i}}(t)$, ${p_{i}}={p_{i}}(t)$ and ${\omega_{i}}={\omega_{i}}(t)$. The choice of the finite dimensional function spaces dictates the properties of the discretization. 
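To make the expansion concrete, the following sketch uses a 1D toy space of piecewise-linear hat functions (a deliberately simple stand-in, not the Raviart-Thomas or Lagrange spaces used in the paper). It assembles the mass matrix $\mathsf{M}_{ij} = \langle\epsilon_i,\epsilon_j\rangle$ and computes the degrees of freedom of the $L^2$ projection of a function that lies in the discrete space, recovering its nodal values:

```python
# Galerkin expansion u_h = sum_i c_i * eps_i with 1D hat functions on [0, 1].

def hat(i, x, n):
    """i-th piecewise-linear hat function on a uniform mesh of n elements."""
    h = 1.0 / n
    return max(0.0, 1.0 - abs(x / h - i))

def assemble(n, f):
    """Tridiagonal mass matrix and load vector b_i = <f, eps_i>."""
    h = 1.0 / n
    main = [h / 3 if i in (0, n) else 2 * h / 3 for i in range(n + 1)]
    off = [h / 6] * n
    # two-point Gauss quadrature per element (exact up to cubic integrands)
    g = 0.5 / 3 ** 0.5
    b = [0.0] * (n + 1)
    for e in range(n):
        for xi in (0.5 - g, 0.5 + g):
            x = (e + xi) * h
            for i in (e, e + 1):
                b[i] += 0.5 * h * f(x) * hat(i, x, n)
    return main, off, b

def thomas(main, off, rhs):
    """Solve the symmetric tridiagonal system M c = b (Thomas algorithm)."""
    n = len(rhs)
    c, d, b = off[:], main[:], rhs[:]
    for i in range(1, n):
        m = c[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

n = 8
f = lambda x: 2 * x + 1           # lies in the hat-function space
main, off, b = assemble(n, f)
coef = thomas(main, off, b)       # degrees of freedom of the L2 projection
for i in range(n + 1):
    assert abs(coef[i] - f(i / n)) < 1e-12
```

Because $f$ belongs to the discrete space and both the mass matrix and the load vector are integrated exactly, the projection reproduces $f$ and the degrees of freedom coincide with its nodal values.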
In order to have exact conservation of mass we must exactly satisfy the divergence-free constraint at the discrete level. A sufficient condition that guarantees divergence-free velocities, e.g. [@arnold2010finite; @Cockburn2006; @Buffa2011], is: $$\left\{\nabla\cdot{\vec{u}_{h}}\,|\, {\vec{u}_{h}}\in{U_{h}}\right\} \subseteq {Q_{h}}\,. \label{eq:divu_subspace_q}$$ In other words, the divergence operator must map ${U_{h}}$ into ${Q_{h}}$: $${U_{h}}\stackrel{\nabla\cdot}{\longrightarrow}{Q_{h}}\,. \label{eq:div_subcomplex}$$ If we set $${U_{h}}= {\mathrm{RT}_{N}}\quad \mathrm{and} \quad {Q_{h}}={\mathrm{DG}_{N-1}}\,,$$ where ${\mathrm{RT}_{N}}$ are the Raviart-Thomas elements of degree $N$, see [@RaviartThomas1977; @kirby2012], and ${\mathrm{DG}_{N-1}}$ are the discontinuous Lagrange elements of degree $(N-1)$, see [@kirby2012], we satisfy , see for example [@arnold2010finite] for a proof. Additionally, this pair of finite elements satisfies the LBB stability condition, see for example [@RaviartThomas1977; @arnold2010finite]. It remains to define the finite element space associated with the vorticity, ${W_{h}}$. Since we wish to exactly represent the diffusion term in the momentum equation, $\langle\nabla\times{\omega},{\vec{v}_{h}}\rangle_{\Omega}$, the space ${W_{h}}$ must satisfy a relation similar to $$\left\{\nabla\times{\omega_{h}}\,|\, {\omega_{h}}\in{W_{h}}\right\} \subseteq {U_{h}}\,. \label{eq:curlw_subspace_u}$$ In other words, the curl operator must map ${W_{h}}$ into ${U_{h}}$: $${W_{h}}\stackrel{\nabla\times}{\longrightarrow}{U_{h}}\,. 
\label{eq:curl_subcomplex}$$ The Lagrange elements of degree $N$, denoted by ${\mathrm{CG}_{N}}$ in [@kirby2012], satisfy , see [@arnold2010finite], therefore we set $${W_{h}}= {\mathrm{CG}_{N}}\,.$$ Together, the combination of these three finite element spaces forms a Hilbert subcomplex $$0 \longrightarrow {W_{h}}\stackrel{\nabla\times}{\longrightarrow}{U_{h}}\stackrel{\nabla\cdot}{\longrightarrow}{Q_{h}}\longrightarrow 0\,,$$ that mimics the 2D Hilbert complex associated to the continuous functional spaces: $$0 \longrightarrow {H(\mathrm{curl},\Omega)}\stackrel{\nabla\times}{\longrightarrow}{H(\mathrm{div},\Omega)}\stackrel{\nabla\cdot}{\longrightarrow}{L^{2}(\Omega)}\longrightarrow 0\,.$$ The Hilbert complex is an important structure that is intimately related to the de Rham complex of differential forms. The construction of a discrete subcomplex is an important requirement to obtain stable and accurate finite element discretizations, see for example for a detailed discussion. The finite element spatial discretization used in this work to construct an approximate solution of is then obtained by the following weak formulation $$\begin{dcases} \text{Find } {\vec{u}_{h}}\in {\mathrm{RT}_{N}}, {\bar{p}_{h}}\in {\mathrm{DG}_{N-1}}\text{ and } {\omega_{h}}\in {\mathrm{CG}_{N}}\text{ such that:}\\ \langle\frac{\partial{\vec{u}_{h}}}{\partial t},{\vec{v}_{h}}\rangle_{\Omega} + \langle{\omega_{h}}\times{\vec{u}_{h}},{\vec{v}_{h}}\rangle_{\Omega} - \langle {\bar{p}_{h}},\nabla\cdot{\vec{v}_{h}}\rangle_{\Omega} = -\nu\langle\nabla\times{\omega_{h}},{\vec{v}_{h}}\rangle_{\Omega},\quad \forall {\vec{v}_{h}}\in {\mathrm{RT}_{N}}, \\ \langle\frac{\partial{\omega_{h}}}{\partial t},{\xi_{h}}\rangle_{\Omega} - \frac{1}{2}\langle{\omega_{h}},\nabla\cdot\left({\vec{u}_{h}}\,{\xi_{h}}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}_{h}}\,{\omega_{h}}\right),{\xi_{h}}\rangle_{\Omega} = \nu\langle\nabla\times{\omega_{h}},\nabla\times{\xi_{h}}\rangle_{\Omega}, 
\quad\forall {\xi_{h}}\in {\mathrm{CG}_{N}}, \\ \langle\nabla\cdot{\vec{u}_{h}},{q_{h}}\rangle_{\Omega} = 0, \quad\forall {q_{h}}\in {\mathrm{DG}_{N-1}}\,. \end{dcases} \label{eq::ns_meevc_weak_form_discrete}$$ Using the expansions for ${\vec{u}_{h}}$, ${\bar{p}_{h}}$ and ${\omega_{h}}$ in , can be rewritten as $$\begin{dcases} \text{Find } {\boldsymbol{u}}\in \mathbb{R}^{{d_{U}}}, {\boldsymbol{\bar{p}}}\in \mathbb{R}^{{d_{Q}}} \text{ and } {\boldsymbol{\omega}}\in \mathbb{R}^{{d_{W}}} \text{ such that:}\\ \sum_{i=1}^{{d_{U}}}\frac{\mathrm{d}{u_{i}}}{\mathrm{d}t}\langle{\vec{\epsilon}^{\,U}_{i}},{\vec{\epsilon}^{\,U}_{j}}\rangle_{\Omega} + \sum_{i=1}^{{d_{U}}}{u_{i}}\langle{\omega_{h}}\times{\vec{\epsilon}^{\,U}_{i}},{\vec{\epsilon}^{\,U}_{j}}\rangle_{\Omega} - \sum_{k=1}^{{d_{Q}}}{p_{k}}\langle {\epsilon^{Q}_{k}},\nabla\cdot{\vec{\epsilon}^{\,U}_{j}}\rangle_{\Omega} = -\nu\langle\nabla\times{\omega_{h}},{\vec{\epsilon}^{\,U}_{j}}\rangle_{\Omega},\quad j=1,\dots,{d_{U}}, \\ \sum_{i=1}^{{d_{W}}}\frac{\mathrm{d}{\omega_{i}}}{\mathrm{d}t}\langle{\epsilon^{W}_{i}},{\epsilon^{W}_{j}}\rangle_{\Omega} - \sum_{i=1}^{{d_{W}}}\frac{{\omega_{i}}}{2}\langle{\epsilon^{W}_{i}},\nabla\cdot\left({\vec{u}_{h}}\,{\epsilon^{W}_{j}}\right)\rangle_{\Omega} + \sum_{i=1}^{{d_{W}}}\frac{{\omega_{i}}}{2}\langle\nabla\cdot\left({\vec{u}_{h}}\,{\epsilon^{W}_{i}}\right),{\epsilon^{W}_{j}}\rangle_{\Omega} = \nu\sum_{i=1}^{{d_{W}}}{\omega_{i}}\langle\nabla\times{\epsilon^{W}_{i}},\nabla\times{\epsilon^{W}_{j}}\rangle_{\Omega}, \quad j=1,\dots,{d_{W}}, \\ \sum_{i=1}^{{d_{U}}}{u_{i}}\langle\nabla\cdot{\vec{\epsilon}^{\,U}_{i}},{\epsilon^{Q}_{j}}\rangle_{\Omega} = 0, \quad j = 1,\dots,{d_{Q}}\,, \end{dcases} \label{eq::ns_meevc_weak_form_discrete_expansion}$$ with ${\boldsymbol{u}}:= [{u_{1}},\dots,{u_{{d_{U}}}}]^{\top}$, ${\boldsymbol{\bar{p}}}:= [{p_{1}},\dots,{p_{{d_{Q}}}}]^{\top}$ and ${\boldsymbol{\omega}}:= [{\omega_{1}},\dots,{\omega_{{d_{W}}}}]^{\top}$. 
Using matrix notation, can be expressed more compactly as $$\begin{dcases} \text{Find } {\boldsymbol{u}}\in \mathbb{R}^{{d_{U}}}, {\boldsymbol{\bar{p}}}\in \mathbb{R}^{{d_{Q}}} \text{ and } {\boldsymbol{\omega}}\in \mathbb{R}^{{d_{W}}} \text{ such that:}\\ {\boldsymbol{\mathsf{M}}} \frac{\mathrm{d}{\boldsymbol{u}}}{\mathrm{d}t} + {\boldsymbol{\mathsf{R}}}\,{\boldsymbol{u}}- {\boldsymbol{\mathsf{P}}}\,{\boldsymbol{\bar{p}}}= -\nu\,\boldsymbol{l}, \\ {\boldsymbol{\mathsf{N}}}\frac{\mathrm{d}{\boldsymbol{\omega}}}{\mathrm{d}t} - \frac{1}{2}{\boldsymbol{\mathsf{W}}}\,{\boldsymbol{\omega}}+ \frac{1}{2}{\boldsymbol{\mathsf{W}}}^{\top}{\boldsymbol{\omega}}= \nu\,{\boldsymbol{\mathsf{L}}}\,{\boldsymbol{\omega}}, \\ {\boldsymbol{\mathsf{D}}}\,{\boldsymbol{u}}= 0, \end{dcases} \label{eq::ns_meevc_weak_form_discrete_matrix_notation}$$ The coefficients of the matrices ${\boldsymbol{\mathsf{M}}}$, ${\boldsymbol{\mathsf{R}}}$ and ${\boldsymbol{\mathsf{P}}}$, and the column vector $\boldsymbol{l}$ are given by $${\mathsf{M}}_{ij} := \langle{\vec{\epsilon}^{\,U}_{j}},{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}, \quad {\mathsf{R}}_{ij} := \langle{\omega_{h}}\times{\vec{\epsilon}^{\,U}_{j}},{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}, \quad {\mathsf{P}}_{ij} := \langle {\epsilon^{Q}_{j}},\nabla\cdot{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}\quad\mathrm{and}\quad l_{i} := \langle\nabla\times{\omega_{h}},{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}. 
\label{eq:matrix_coefficients_1}$$ Similarly, the coefficients of the matrices ${\boldsymbol{\mathsf{N}}}$, ${\boldsymbol{\mathsf{W}}}$, ${\boldsymbol{\mathsf{L}}}$ and ${\boldsymbol{\mathsf{D}}}$ are given by $${\mathsf{N}}_{ij} := \langle{\epsilon^{W}_{j}},{\epsilon^{W}_{i}}\rangle_{\Omega}, \quad {\mathsf{W}}_{ij} := \langle{\epsilon^{W}_{j}},\nabla\cdot\left({\vec{u}_{h}}\,{\epsilon^{W}_{i}}\right)\rangle_{\Omega}, \quad {\boldsymbol{\mathsf{L}}}_{ij} := \langle\nabla\times{\epsilon^{W}_{j}},\nabla\times{\epsilon^{W}_{i}}\rangle_{\Omega} \quad \mathrm{and}\quad {\boldsymbol{\mathsf{D}}}_{ij} :=\langle\nabla\cdot{\vec{\epsilon}^{\,U}_{j}},{\epsilon^{Q}_{i}}\rangle_{\Omega}. \label{eq:matrix_coefficients_2}$$

Temporal discretization {#sec::temporal_discretization}
=======================

In this section we present the time discretization. The choice of time discretization is essential for preserving invariants: not all time integrators conserve energy, even when the spatial discretization guarantees conservation of kinetic energy in time, see [@Sanderse2013]. In this work we choose to use a Gauss method. This time integrator is a collocation method based on Gauss quadrature; it is an implicit Runge-Kutta method with optimal convergence order $2s$ for $s$ stages. Two of its most attractive properties are that (i) it conserves energy when applied to the discretization of the Navier-Stokes equations, see [@Sanderse2013], and (ii) it is time-reversible, see [@Hairer2006]. For a more detailed discussion of its properties and construction see [@Hairer2006]. Although any Gauss integrator could be used, we choose the lowest order, $s=1$, also known as the *midpoint rule*, because it enables the construction of an explicit staggered integrator in time. 
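The claimed convergence order $2s$ can be observed numerically for $s=1$. In the sketch below (the test equation $y' = -y$ is an arbitrary choice for illustration, not from the paper) the implicit midpoint stage is solved in closed form and the error at $t=1$ is measured for two step sizes:

```python
import math

# Implicit midpoint applied to y' = -y; the implicit stage
#   (y_k - y_{k-1})/dt = -(y_k + y_{k-1})/2
# is solved in closed form: y_k = y_{k-1} * (1 - dt/2) / (1 + dt/2).
def midpoint_decay(n_steps, T=1.0, y0=1.0):
    dt = T / n_steps
    y = y0
    for _ in range(n_steps):
        y *= (1 - dt / 2) / (1 + dt / 2)
    return y

exact = math.exp(-1.0)
e1 = abs(midpoint_decay(20) - exact)
e2 = abs(midpoint_decay(40) - exact)
order = math.log(e1 / e2, 2)  # observed convergence order, expected ~2
assert 1.9 < order < 2.1
```

Halving the step size reduces the error by a factor of roughly four, consistent with second-order convergence for the one-stage ($s=1$) method.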
When applied to the solution of a 1D ordinary differential equation of the form $$\begin{dcases} \frac{\mathrm{d}f}{\mathrm{d}t} = g(f(t),t), \\ f(0) = f_{0}, \end{dcases}$$ the one-stage Gauss integrator results in the following implicit time-stepping scheme $$\frac{f^{k} - f^{k-1}}{\Delta t} = g\left(\frac{f^{k}+f^{k-1}}{2},t^{k-1}+\frac{\Delta t}{2}\right), \quad k=1,\dots,M, \label{eq:gauss_integrator_1D}$$ where $f^{0} = f_{0}$, $\Delta t$ is the time step and $M$ is the number of time steps. The direct application of to the discrete weak form results in $$\begin{dcases} \text{Find } {{\vec{u}_{h}}^{\,k+1}}\in {\mathrm{RT}_{N}}, {{\bar{p}_{h}}^{\,k+1}}\in {\mathrm{DG}_{N-1}}\text{ and } {{\omega_{h}}^{\,k+1}}\in {\mathrm{CG}_{N}}\text{ such that:}\\ \langle\frac{{{\vec{u}_{h}}^{\,k+1}}- {{\vec{u}_{h}}^{\,k}}}{\Delta t},{\vec{v}_{h}}\rangle_{\Omega} + \langle{\tilde{\omega}_{h}^{\,k+\frac{1}{2}}}\times\frac{{{\vec{u}_{h}}^{\,k+1}}+{{\vec{u}_{h}}^{\,k}}}{2},{\vec{v}_{h}}\rangle_{\Omega} - \langle {\tilde{\bar{p}}_{h}^{\,k+\frac{1}{2}}},\nabla\cdot{\vec{v}_{h}}\rangle_{\Omega} = -\nu\langle\nabla\times{\tilde{\omega}_{h}^{\,k+\frac{1}{2}}},{\vec{v}_{h}}\rangle_{\Omega},\quad \forall {\vec{v}_{h}}\in {\mathrm{RT}_{N}}, \\ \langle\frac{{{\omega_{h}}^{\,k+1}}-{{\omega_{h}}^{\,k}}}{\Delta t},{\xi_{h}}\rangle_{\Omega} - \frac{1}{2}\langle\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\cdot\left({\tilde{\vec{u}}_{h}^{\,k+\frac{1}{2}}}\,{\xi_{h}}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\tilde{\vec{u}}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2}\right),{\xi_{h}}\rangle_{\Omega} = \nu\langle\nabla\times\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\times{\xi_{h}}\rangle_{\Omega}, \quad\forall {\xi_{h}}\in {\mathrm{CG}_{N}}, \\ \langle\nabla\cdot{{\vec{u}_{h}}^{\,k+1}},{q_{h}}\rangle_{\Omega} = 0, \quad\forall {q_{h}}\in {\mathrm{DG}_{N-1}}\,, \end{dcases} 
\label{eq::ns_meevc_weak_form_discrete_naive_gauss}$$ where, for compactness of notation, we have set $${\tilde{\vec{u}}_{h}^{\,k+\frac{1}{2}}}:= \frac{{{\vec{u}_{h}}^{\,k+1}}+ {{\vec{u}_{h}}^{\,k}}}{2} \quad \mathrm{and}\quad {\tilde{\omega}_{h}^{\,k+\frac{1}{2}}}:= \frac{{{\omega_{h}}^{\,k+1}}+ {{\omega_{h}}^{\,k}}}{2}. \label{eq:mid_steps_compact_notation}$$ The time-stepping scheme in consists of a coupled system of nonlinear equations. Therefore its solution will necessarily require an iterative procedure, which is computationally expensive. To overcome this drawback, instead of defining all the unknown physical quantities ${\vec{u}_{h}}$, ${\omega_{h}}$ and ${\bar{p}_{h}}$ at the same time instants $t^{k}$, we choose to stagger them in time. In this way it is possible to transform into two systems of quasi-linear equations. The unknown vorticity and total pressure are defined at the integer time instants ${{\omega_{h}}^{\,k}}$, ${{\bar{p}_{h}}^{\,k}}$ and the unknown velocity is defined at the intermediate time instants ${\vec{u}_{h}^{\,k+\frac{1}{2}}}$, see [Figure \[fig:time\_stepping\]]{}. 
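The two properties of the one-stage Gauss integrator quoted earlier, energy conservation and time reversibility, can be illustrated on a toy skew-symmetric system. The sketch below (a harmonic oscillator, a hypothetical stand-in for the full scheme) solves the linear implicit stage in closed form via the Cayley transform:

```python
def midpoint_step(q, p, dt):
    """One implicit-midpoint step for q' = p, p' = -q.
    The linear implicit stage (I - dt/2 A) z_new = (I + dt/2 A) z_old with
    skew-symmetric A is solved in closed form (a Cayley transform, hence an
    exact rotation that preserves q^2 + p^2)."""
    a = dt / 2
    d = 1 + a * a
    qn = ((1 - a * a) * q + 2 * a * p) / d
    pn = (-2 * a * q + (1 - a * a) * p) / d
    return qn, pn

q, p = 1.0, 0.0
E0 = q * q + p * p
for _ in range(1000):
    q, p = midpoint_step(q, p, 0.1)
# energy conserved up to roundoff, for any step size
assert abs(q * q + p * p - E0) < 1e-10
# time reversibility: one step with -dt undoes one step with +dt
qb, pb = midpoint_step(q, p, -0.1)
qf, pf = midpoint_step(qb, pb, 0.1)
assert abs(qf - q) < 1e-12 and abs(pf - p) < 1e-12
```

The conservation here is structural, not a consequence of small time steps: the midpoint average turns the skew-symmetric term into an exact rotation, the same mechanism exploited by the MEEVC scheme.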
Taking into account this staggered approach, can be rewritten as $$\begin{dcases} \text{Find } {\vec{u}_{h}^{\,k+\frac{3}{2}}}\in {\mathrm{RT}_{N}}, {{\bar{p}_{h}}^{\,k+1}}\in {\mathrm{DG}_{N-1}}\text{ and } {{\omega_{h}}^{\,k+1}}\in {\mathrm{CG}_{N}}\text{ such that:}\\ \langle\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}- {\vec{u}_{h}^{\,k+\frac{1}{2}}}}{\Delta t},{\vec{v}_{h}}\rangle_{\Omega} + \langle{{\omega_{h}}^{\,k+1}}\times\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}+{\vec{u}_{h}^{\,k+\frac{1}{2}}}}{2},{\vec{v}_{h}}\rangle_{\Omega} - \langle {{\bar{p}_{h}}^{\,k+1}},\nabla\cdot{\vec{v}_{h}}\rangle_{\Omega} = -\nu\langle\nabla\times{{\omega_{h}}^{\,k+1}},{\vec{v}_{h}}\rangle_{\Omega},\quad \forall {\vec{v}_{h}}\in {\mathrm{RT}_{N}}, \\ \langle\frac{{{\omega_{h}}^{\,k+1}}-{{\omega_{h}}^{\,k}}}{\Delta t},{\xi_{h}}\rangle_{\Omega} - \frac{1}{2}\langle\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,{\xi_{h}}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2}\right),{\xi_{h}}\rangle_{\Omega} = \nu\langle\nabla\times\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\times{\xi_{h}}\rangle_{\Omega}, \quad\forall {\xi_{h}}\in {\mathrm{CG}_{N}}, \\ \langle\nabla\cdot{\vec{u}_{h}^{\,k+\frac{3}{2}}},{q_{h}}\rangle_{\Omega} = 0, \quad\forall {q_{h}}\in {\mathrm{DG}_{N-1}}\,, \end{dcases} \label{eq::ns_meevc_weak_form_discrete_staggered_gauss}$$ where ${\vec{u}_{h}^{\,k+\frac{1}{2}}}$ and ${{\omega_{h}}^{\,k}}$ are known at the start of each time step. 
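The staggering of velocity and vorticity resembles the classical leapfrog construction. As a minimal illustration (a toy oscillator, much simpler than the scheme above and not from the paper), the following sketch advances one variable at integer steps and the other at half steps, and checks that the composite map is invertible by stepping backwards:

```python
def forward(q, p_half, dt, nsteps):
    """Staggered (leapfrog-style) integration of q' = p, p' = -q:
    q lives at integer time levels, p at half levels, mirroring the
    staggering of velocity and vorticity (toy example only)."""
    for _ in range(nsteps):
        q = q + dt * p_half          # update q using p at the mid level
        p_half = p_half - dt * q     # update p using the new q
    return q, p_half

def backward(q, p_half, dt, nsteps):
    """Exact inverse of `forward`: the staggered map is time-reversible."""
    for _ in range(nsteps):
        p_half = p_half + dt * q
        q = q - dt * p_half
    return q, p_half

q0, ph0 = 1.0, -0.05   # p at t = dt/2, e.g. from one implicit start-up step
q, ph = forward(q0, ph0, 0.1, 500)
qr, phr = backward(q, ph, 0.1, 500)
assert abs(qr - q0) < 1e-10 and abs(phr - ph0) < 1e-10
```

Each staggered update uses only quantities already available at the mid level, which is what removes the nonlinear coupling between the two sub-steps in the scheme above.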
Using , it is possible to rewrite in a compact matrix notation $$\begin{dcases} \text{Find } {\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}\in \mathbb{R}^{{d_{U}}}, {{\boldsymbol{\bar{p}}}^{\,k+1}}\in \mathbb{R}^{{d_{Q}}} \text{ and } {{\boldsymbol{\omega}}^{\,k+1}}\in \mathbb{R}^{{d_{W}}} \text{ such that:}\\ {\boldsymbol{\mathsf{M}}} \frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{\Delta t} + {\boldsymbol{\mathsf{R}}}^{k+1}\,\frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{2} - {\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}= -\nu\,\boldsymbol{l}^{k+1}, \\ {\boldsymbol{\mathsf{N}}}\frac{{{\boldsymbol{\omega}}^{\,k+1}}- {{\boldsymbol{\omega}}^{\,k}}}{\Delta t} - \frac{1}{2}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\,\frac{{{\boldsymbol{\omega}}^{\,k+1}}+ {{\boldsymbol{\omega}}^{\,k}}}{2} + \frac{1}{2}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top}\frac{{{\boldsymbol{\omega}}^{\,k+1}}+ {{\boldsymbol{\omega}}^{\,k}}}{2} = \nu\,{\boldsymbol{\mathsf{L}}}\,\frac{{{\boldsymbol{\omega}}^{\,k+1}}+ {{\boldsymbol{\omega}}^{\,k}}}{2}, \\ {\boldsymbol{\mathsf{D}}}\,{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}= 0, \end{dcases} \label{eq::ns_meevc_weak_form_discrete_staggered_gauss_matrix_notation}$$ where all matrix operators are as in and , with the exception of ${\boldsymbol{\mathsf{R}}}^{k+1}$, ${\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}$ and $\boldsymbol{l}^{k+1}$, the coefficients of which are $${\mathsf{R}}^{k+1}_{ij} := \langle{{\omega_{h}}^{\,k+1}}\times{\vec{\epsilon}^{\,U}_{j}},{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}, \quad l^{k+1}_{i} := \langle\nabla\times{{\omega_{h}}^{\,k+1}},{\vec{\epsilon}^{\,U}_{i}}\rangle_{\Omega}\quad\mathrm{and}\quad {\mathsf{W}}^{k+\frac{1}{2}}_{ij} := \langle{\epsilon^{W}_{j}},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,{\epsilon^{W}_{i}}\right)\rangle_{\Omega}. 
\label{eq:matrix_coefficients_staggered}$$ To start the iteration procedure, ${\vec{u}_{h}^{\,\frac{1}{2}}}$ and ${{\omega_{h}}^{\,0}}$ are required. Since only ${\vec{u}_{h}^{\,0}}$ and ${{\omega_{h}}^{\,0}}$ are known, the first time step needs to be implicit, according to . Once ${\vec{u}_{h}^{\,1}}$ is known, can be used to retrieve ${\vec{u}_{h}^{\,\frac{1}{2}}}$. The remaining time steps can then be computed explicitly with . ![Diagram of the time stepping. Left: all unknowns at the same time instant, as in . Right: unknowns staggered in time, as in .[]{data-label="fig:time_stepping"}](./figures/time_stepping/time_stepping)

Conservation properties and time reversibility {#sec::conservation_properties_and_time_reversibility}
==============================================

It is known that in the absence of external forces and viscosity the incompressible Navier-Stokes equations possess important symmetries or invariants, e.g. [@Arnold1966; @Arnold1992; @Majda2001; @Foias2001]. Conservation of mass, energy, enstrophy and total vorticity are such invariants. A discretization for the incompressible Navier-Stokes equations has been proposed in . In the following sections we prove its conservation properties with respect to mass, energy, enstrophy and vorticity. We also prove time reversibility.

Mass and vorticity conservation {#sec:mass_vorticity_conservation}
-------------------------------

Mass conservation corresponds to the vanishing divergence of the velocity field. Since the discrete velocity field of this discretization is exactly divergence free, mass is conserved. 
Regarding vorticity conservation, the time evolution of vorticity is governed by the vorticity transport equation : $$\frac{\partial {\omega}}{\partial t} +\frac{1}{2}\left({\vec{u}}\cdot\nabla\right){\omega}+ \frac{1}{2}\nabla\cdot\left({\vec{u}}{\omega}\right) = \nu \Delta{\omega}.$$ This equation, by , is discretized as $$\begin{dcases} \text{Find } {{{\omega_{h}}^{\,k+1}}}\in {\mathrm{CG}_{N}}\text{ such that:}\\ \langle\frac{{{{\omega_{h}}^{\,k+1}}}-{{{\omega_{h}}^{\,k}}}}{\Delta t},{\xi_{h}}\rangle_{\Omega} - \frac{1}{2}\langle\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,{\xi_{h}}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right),{\xi_{h}}\rangle_{\Omega} = \nu\langle\nabla\times\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2},\nabla\times{\xi_{h}}\rangle_{\Omega}, \quad\forall {\xi_{h}}\in {\mathrm{CG}_{N}}, \end{dcases} \label{eq::generic_transport_equation}$$ Conservation of vorticity ${{\omega_{h}}}$ corresponds to $$\int_{\Omega}{{{\omega_{h}}^{\,k+1}}}:= \langle{{{\omega_{h}}^{\,k+1}}},1\rangle_{\Omega} = \langle{{{\omega_{h}}^{\,k}}},1\rangle_{\Omega} =: \int_{\Omega}{{{\omega_{h}}^{\,k}}}.$$ Therefore, the proof of conservation of vorticity is obtained from evaluating for the special case ${\xi_{h}}= 1$: $$\langle\frac{{{{\omega_{h}}^{\,k+1}}}-{{{\omega_{h}}^{\,k}}}}{\Delta t},1\rangle_{\Omega} - \frac{1}{2}\langle\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2},\nabla\cdot{\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right),1\rangle_{\Omega} = \nu\langle\nabla\times\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2},\nabla\times1\rangle_{\Omega} . 
\label{eq:proof_conservation_mass_vorticity_1}$$ Since the function spaces have been chosen such that $\nabla\cdot{\vec{u}_{h}}= 0$ is satisfied exactly, see [Section \[subsec::finite\_element\_discretization\]]{}, equation becomes $$\langle{{{\omega_{h}}^{\,k+1}}},1\rangle_{\Omega} + \frac{\Delta t}{2}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right),1\rangle_{\Omega} = \langle{{{\omega_{h}}^{\,k}}},1\rangle_{\Omega} . \label{eq:proof_conservation_mass_vorticity_2}$$ The second term on the left-hand side can be rewritten as $$\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right),1\rangle_{\Omega} := \int_{\Omega} \nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right) = \int_{\partial\Omega} \left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{{\omega_{h}}^{\,k+1}}}+{{{\omega_{h}}^{\,k}}}}{2}\right)\cdot\vec{n}, \label{eq:proof_conservation_mass_vorticity_3}$$ with $\vec{n}$ the outward unit normal at the boundary of $\Omega$. At the continuous level, the boundary integral in is trivially equal to zero for periodic boundary conditions, which is the case we consider here. Note that since the domain is periodic, the nodes of the mesh at opposite sides of the domain must coincide; otherwise the periodicity is lost. Since ${{\omega_{h}}}\in {\mathrm{CG}_{N}}$, it is continuous across elements. In a similar way, the normal component of ${\vec{u}_{h}}$ is continuous across elements because ${\vec{u}_{h}}\in {\mathrm{RT}_{N}}$, see for example [@RaviartThomas1977; @arnold2010finite; @kirby2012]. 
Therefore, in the case of periodic boundary conditions the boundary integral in will still be exactly equal to zero and becomes $$\langle{{{\omega_{h}}^{\,k+1}}},1\rangle_{\Omega} = \langle{{{\omega_{h}}^{\,k}}},1\rangle_{\Omega},\label{eq:proof_conservation_mass_vorticity_4}$$ which proves that the total vorticity is conserved: $${\mathcal{W}_{h}^{k+1}}:= \langle{\omega_{h}}^{k+1},1\rangle_{\Omega} = \langle{\omega_{h}}^{k},1\rangle_{\Omega} =:{\mathcal{W}_{h}^{k}}.$$

Kinetic energy conservation {#sec::kinetic_energy_conservation}
---------------------------

Kinetic energy conservation is one of the two *secondary conservation* properties of the numerical solver proposed in this work. Here we prove that total kinetic energy ${\mathcal{K}}$ is conserved by . At the continuous level, the kinetic energy, defined as half the square of the $L^{2}(\Omega)$ norm of the velocity, $${\mathcal{K}}:= \frac{1}{2}\|{\vec{u}}\|^{2}_{L^{2}(\Omega)} := \frac{1}{2}\langle{\vec{u}},{\vec{u}}\rangle_{\Omega},$$ is conserved in the inviscid limit $\nu = 0$, see for example [@Majda2001; @Foias2001]. This definition can be directly extended to the discrete level as $${\mathcal{K}_{h}}:= \frac{1}{2}\|{\vec{u}_{h}}\|^{2}_{L^{2}(\Omega)} := \frac{1}{2}\langle{\vec{u}_{h}},{\vec{u}_{h}}\rangle_{\Omega}.$$ Therefore, kinetic energy conservation at the discrete level corresponds to $${\mathcal{K}_{h}^{k+\frac{3}{2}}}:= \frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{3}{2}}},{\vec{u}_{h}^{\,k+\frac{3}{2}}}\rangle_{\Omega} = \frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{1}{2}}},{\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega} =: {\mathcal{K}_{h}^{k+\frac{1}{2}}}. 
\label{eq:energy_conservation_definition}$$ To prove this identity we take the first equation in (momentum equation) in the inviscid limit $\nu = 0$ and choose ${\vec{v}_{h}}= \frac{1}{2}{\vec{u}_{h}^{\,k+\frac{3}{2}}}+ \frac{1}{2}{\vec{u}_{h}^{\,k+\frac{1}{2}}}$, resulting in $$\frac{1}{2}\langle\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}- {\vec{u}_{h}^{\,k+\frac{1}{2}}}}{\Delta t},{\vec{u}_{h}^{\,k+\frac{3}{2}}}+ {\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega} + \frac{1}{2}\langle{{\omega_{h}}^{\,k+1}}\times\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}+{\vec{u}_{h}^{\,k+\frac{1}{2}}}}{2},{\vec{u}_{h}^{\,k+\frac{3}{2}}}+ {\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega} - \frac{1}{2}\langle {{\bar{p}_{h}}^{\,k+1}},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{3}{2}}}+ {\vec{u}_{h}^{\,k+\frac{1}{2}}}\right)\rangle_{\Omega} =0. \label{eq:energy_conservation_proof_1}$$ The term involving the total pressure ${\bar{p}_{h}}$ is identically zero because the velocity field is divergence free at every time step, due to the particular choice of function spaces, as discussed in [Section \[subsec::finite\_element\_discretization\]]{}. After rearranging, equation then becomes $$\frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{3}{2}}},{\vec{u}_{h}^{\,k+\frac{3}{2}}}\rangle_{\Omega} + \frac{\Delta t}{2}\,\langle{{\omega_{h}}^{\,k+1}}\times\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}+{\vec{u}_{h}^{\,k+\frac{1}{2}}}}{2},{\vec{u}_{h}^{\,k+\frac{3}{2}}}+ {\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega} = \frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{1}{2}}},{\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega}. \label{eq:energy_conservation_proof_1b}$$ Because of Lemma 1.3 in [@TemamNS], becomes $$\frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{3}{2}}},{\vec{u}_{h}^{\,k+\frac{3}{2}}}\rangle_{\Omega} = \frac{1}{2}\langle{\vec{u}_{h}^{\,k+\frac{1}{2}}},{\vec{u}_{h}^{\,k+\frac{1}{2}}}\rangle_{\Omega},$$ which proves . 
Enstrophy conservation {#sec::enstrophy_conservation} ---------------------- The second *secondary conservation* property of the numerical solver presented here is enstrophy conservation. We now prove conservation of enstrophy ${\mathcal{E}}$ at the discrete level. As with kinetic energy, enstrophy is defined as half the squared $L^{2}(\Omega)$ norm of the vorticity, $${\mathcal{E}}:= \frac{1}{2} \|{\omega}\|^{2}_{L^{2}(\Omega)} := \frac{1}{2}\langle{\omega},{\omega}\rangle_{\Omega},$$ and is also a conserved quantity in the inviscid limit $\nu = 0$, see for example [@Majda2001; @Foias2001]. This definition extends straightforwardly to the discrete level as $${\mathcal{E}_{h}}:= \frac{1}{2} \|{\omega_{h}}\|^{2}_{L^{2}(\Omega)} := \frac{1}{2}\langle{\omega_{h}},{\omega_{h}}\rangle_{\Omega}.$$ Conservation of enstrophy at the discrete level therefore requires that $${\mathcal{E}_{h}^{k+1}}:= \frac{1}{2}\langle{{\omega_{h}}^{\,k+1}},{{\omega_{h}}^{\,k+1}}\rangle = \frac{1}{2}\langle{{\omega_{h}}^{\,k}},{{\omega_{h}}^{\,k}}\rangle := {\mathcal{E}_{h}^{k}}.$$ The proof of enstrophy conservation follows the same approach as the proof of energy conservation. In this case we start with the second equation in (vorticity transport equation) in the inviscid limit $\nu = 0$ and choose ${\xi_{h}}= \frac{1}{2}{{\omega_{h}}^{\,k+1}}+ \frac{1}{2}{{\omega_{h}}^{\,k}}$, obtaining $$\frac{1}{2}\langle\frac{{{\omega_{h}}^{\,k+1}}-{{\omega_{h}}^{\,k}}}{\Delta t},{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}\rangle_{\Omega} - \frac{1}{4}\langle\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\left({{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}\right)\right)\rangle_{\Omega} + \frac{1}{4}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2}\right),{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}\rangle_{\Omega} = 0. 
\label{eq:enstrophy_conservation_proof_1}$$ The last two terms on the left hand side of are equal with opposite signs and therefore cancel each other. The remaining term then yields $\langle{{\omega_{h}}^{\,k+1}},{{\omega_{h}}^{\,k+1}}\rangle_{\Omega} = \langle{{\omega_{h}}^{\,k}},{{\omega_{h}}^{\,k}}\rangle_{\Omega}$, from which conservation of enstrophy follows directly. Time reversibility ------------------ It is well known that the Navier-Stokes equations in the inviscid limit $\nu = 0$ are time reversible, e.g. [@Majda2001; @Duponcheel2008]. This important property has served in the past as a benchmark for numerical flow solvers [@Duponcheel2008] and for the construction of improved LES subgrid models [@CARATI2001], for example. It is intimately related to numerical dissipation and is therefore desirable to satisfy. To prove time reversibility of the proposed scheme we follow the approach presented by Hairer et al. in [@Hairer2006]. Our time integrator $\Phi_{\Delta t}$ is a *one-step* method since it uses only the information of the previous time step: $$\Phi_{\Delta t} ({\vec{u}_{h}^{\,k+\frac{1}{2}}},{{\omega_{h}}^{\,k}}) = ({\vec{u}_{h}^{\,k+\frac{3}{2}}},{{\omega_{h}}^{\,k+1}}).$$ A numerical one-step method $\Phi_{\Delta t}$ is *time-reversible* if it satisfies, see Hairer et al. [@Hairer2006], $$\Phi_{\Delta t}\circ\Phi_{-\Delta t} = id \qquad \text{or equivalently} \qquad \Phi_{\Delta t} = \Phi^{-1}_{-\Delta t}\,.$$ To prove time reversibility of our method we will show that $\Phi_{\Delta t}\circ\Phi_{-\Delta t} = id$. Start with known velocity ${\vec{u}_{h}^{\,k+\frac{1}{2}}}$ at time instant $t^{k+\frac{1}{2}}$ and known vorticity ${{\omega_{h}}^{\,k}}$ at time instant $t^{k}$. Advance both quantities one time step to obtain the velocity ${\vec{u}_{h}^{\,k+\frac{3}{2}}}$ at time instant $t^{k+\frac{3}{2}}$ and the vorticity ${{\omega_{h}}^{\,k+1}}$ at the time instant $t^{k+1}$. Then reverse time and compute one time step. If the numerical method is time reversible, the initial velocity and vorticity fields must be retrieved, as represented in [Figure \[fig:time\_reversibility\]]{}. 
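This forward-backward construction can be checked directly on a midpoint update with the same algebraic structure as the vorticity step. The matrices below are random stand-ins, not the actual finite element matrices, and the transport operator is frozen over the step, as it is in the staggered scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 8, 0.5

A = rng.standard_normal((n, n))
N = A @ A.T + n * np.eye(n)      # stand-in for an SPD mass matrix
W = rng.standard_normal((n, n))  # stand-in for the transport matrix

def step(w, dt):
    # Midpoint update N (w_new - w)/dt + (W^T - W)/2 * (w_new + w)/2 = 0,
    # i.e. (N + K) w_new = (N - K) w with K = dt/4 (W^T - W).
    K = 0.25 * dt * (W.T - W)
    return np.linalg.solve(N + K, (N - K) @ w)

w0 = rng.standard_normal(n)
w1 = step(w0, dt)          # forward step
w0_back = step(w1, -dt)    # backward step: same update with -dt
print(np.max(np.abs(w0_back - w0)))   # round-off level
```

Replacing `dt` by `-dt` swaps the two matrices, so the backward map is the exact inverse of the forward map; the recovered state differs from the initial one only by round-off.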
First we prove the reversibility of the vorticity time step and then the reversibility of the velocity time step. ![Diagram of time reversibility. Left: forward step. Right: backward step.[]{data-label="fig:time_reversibility"}](./figures/time_reversibility/time_reversibility) ### Reversibility of vorticity time step In the inviscid limit $\nu = 0$, the forward vorticity time step, [Figure \[fig:time\_reversibility\]]{} left, which advances the vorticity field from the time instant $t^{k}$ to the time instant $t^{k+1}$ is given by the second equation in (vorticity transport equation) $$\begin{dcases} \text{Find } {{\omega_{h}}^{\,k+1}}\in {\mathrm{CG}_{N}}\text{ such that:}\\ \langle\frac{{{\omega_{h}}^{\,k+1}}-{{\omega_{h}}^{\,k}}}{\Delta t},{\xi_{h}}\rangle_{\Omega} - \frac{1}{2}\langle\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2},\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,{\xi_{h}}\right)\rangle_{\Omega} + \frac{1}{2}\langle\nabla\cdot\left({\vec{u}_{h}^{\,k+\frac{1}{2}}}\,\frac{{{\omega_{h}}^{\,k+1}}+{{\omega_{h}}^{\,k}}}{2}\right),{\xi_{h}}\rangle_{\Omega} = 0,\quad\forall{\xi_{h}}\in{\mathrm{CG}_{N}}. 
\end{dcases}\label{eq:time_reversibility_vorticity_proof_1}$$ Using , this expression can be rewritten in matrix notation as an algebraic system of equations: $${\boldsymbol{\mathsf{N}}}\frac{{{\boldsymbol{\omega}}^{\,k+1}}- {{\boldsymbol{\omega}}^{\,k}}}{\Delta t} - \frac{1}{2}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\,\frac{{{\boldsymbol{\omega}}^{\,k+1}}+ {{\boldsymbol{\omega}}^{\,k}}}{2} + \frac{1}{2}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top}\frac{{{\boldsymbol{\omega}}^{\,k+1}}+ {{\boldsymbol{\omega}}^{\,k}}}{2} = 0.\label{eq:time_reversibility_vorticity_proof_1b}$$ Rearranging this expression it is possible to solve for ${{\boldsymbol{\omega}}^{\,k+1}}$ $${{\boldsymbol{\omega}}^{\,k+1}}= \left({\boldsymbol{\mathsf{N}}} - \frac{\Delta t}{4}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}} + \frac{\Delta t}{4}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top}\right)^{-1}\left({\boldsymbol{\mathsf{N}}} + \frac{\Delta t}{4}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}} - \frac{\Delta t}{4}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top}\right){{\boldsymbol{\omega}}^{\,k}}. \label{eq:time_reversibility_vorticity_proof_2}$$ If we now introduce the compact notation $${\boldsymbol{\mathsf{A}}}_{-}:= {\boldsymbol{\mathsf{N}}} - \frac{\Delta t}{4}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}} + \frac{\Delta t}{4}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top} \quad\mathrm{and}\quad{\boldsymbol{\mathsf{A}}}_{+}:={\boldsymbol{\mathsf{N}}} + \frac{\Delta t}{4}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}} - \frac{\Delta t}{4}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top},$$ equation becomes $${{\boldsymbol{\omega}}^{\,k+1}}= {\boldsymbol{\mathsf{A}}}_{-}^{-1}{\boldsymbol{\mathsf{A}}}_{+}\,{{\boldsymbol{\omega}}^{\,k}}. \label{eq:vorticity_reversibility_forward_compact}$$ The backward time step for vorticity, [Figure \[fig:time\_reversibility\]]{} right, is obtained by substituting $\Delta t$ by $-\Delta t$. 
Substituting this transformation in and using leads to an expression identical to $${\boldsymbol{\mathsf{N}}}\frac{{{\boldsymbol{\omega}}^{\,k}}- {{\boldsymbol{\omega}}^{\,k+1}}}{-\Delta t} - \frac{1}{2}{\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\,\frac{{{\boldsymbol{\omega}}^{\,k}}+ {{\boldsymbol{\omega}}^{\,k+1}}}{2} + \frac{1}{2}\left({\boldsymbol{\mathsf{W}}}^{k+\frac{1}{2}}\right)^{\top}\frac{{{\boldsymbol{\omega}}^{\,k}}+ {{\boldsymbol{\omega}}^{\,k+1}}}{2} = 0.\label{eq:time_reversibility_vorticity_proof_3}$$ This expression can be rearranged to yield a result similar to $${{\boldsymbol{\omega}}^{\,k}}= {\boldsymbol{\mathsf{A}}}_{+}^{-1}{\boldsymbol{\mathsf{A}}}_{-}\,{{\boldsymbol{\omega}}^{\,k+1}}. \label{eq:vorticity_reversibility_forward_compact_2}$$ Combining the forward time step with the backward time step results in $${{\boldsymbol{\omega}}^{\,k}}= {\boldsymbol{\mathsf{A}}}_{+}^{-1}{\boldsymbol{\mathsf{A}}}_{-}\left({\boldsymbol{\mathsf{A}}}_{-}^{-1}{\boldsymbol{\mathsf{A}}}_{+}{{\boldsymbol{\omega}}^{\,k}}\right) = {{\boldsymbol{\omega}}^{\,k}},$$ thus showing the reversibility of the vorticity time step. ### Reversibility of velocity time step For the reversibility of the velocity time step, consider first the forward time step, [Figure \[fig:time\_reversibility\]]{} left, in the inviscid limit $\nu = 0$. 
The first equation in (momentum equation) computes the evolution of the velocity field from time instant $t^{k+\frac{1}{2}}$ to the time instant $t^{k+\frac{3}{2}}$ $$\begin{dcases} \text{Find } {\vec{u}_{h}^{\,k+\frac{3}{2}}}\in {\mathrm{RT}_{N}}, {{\bar{p}_{h}}^{\,k+1}}\in {\mathrm{DG}_{N-1}}\text{ such that:}\\ \langle\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}- {\vec{u}_{h}^{\,k+\frac{1}{2}}}}{\Delta t},{\vec{v}_{h}}\rangle_{\Omega} + \langle{{\omega_{h}}^{\,k+1}}\times\frac{{\vec{u}_{h}^{\,k+\frac{3}{2}}}+{\vec{u}_{h}^{\,k+\frac{1}{2}}}}{2},{\vec{v}_{h}}\rangle_{\Omega} - \langle {{\bar{p}_{h}}^{\,k+1}},\nabla\cdot{\vec{v}_{h}}\rangle_{\Omega} = 0,\quad \forall {\vec{v}_{h}}\in {\mathrm{RT}_{N}}, \\ \langle\nabla\cdot{\vec{u}_{h}^{\,k+\frac{3}{2}}},{q_{h}}\rangle_{\Omega} = 0, \quad\forall {q_{h}}\in {\mathrm{DG}_{N-1}}\,. \end{dcases} \label{eq::velocity_reversibility_proof_1}$$ Using we can write this expression as an algebraic system of equations $$\begin{dcases} {\boldsymbol{\mathsf{M}}} \frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{\Delta t} + {\boldsymbol{\mathsf{R}}}^{k+1}\,\frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{2} - {\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}=0, \\ {\boldsymbol{\mathsf{D}}}\,{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}= 0, \end{dcases} \label{eq::velocity_reversibility_proof_2}$$ of which ${\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}$ and ${{\boldsymbol{\bar{p}}}^{\,k+1}}$ are the solution. 
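This coupled velocity-pressure system can be assembled and solved as a single saddle-point problem. The sketch below uses random stand-in matrices, not the actual finite element matrices, and assumes the discrete pressure gradient matrix is the transpose of the divergence matrix, ${\boldsymbol{\mathsf{P}}} = {\boldsymbol{\mathsf{D}}}^{\top}$, since both stem from the same bilinear form; with that structure the solve returns a divergence-free velocity and conserves the discrete energy:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, dt = 10, 3, 0.2   # velocity dofs, pressure dofs, time step

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)       # SPD mass-matrix stand-in
S = rng.standard_normal((n, n))
R = S - S.T                       # antisymmetric rotation-term stand-in
D = rng.standard_normal((m, n))   # divergence-matrix stand-in
P = D.T                           # assumption: P = D^T (same bilinear form)

# Start from a discretely divergence-free velocity: project onto null(D).
u_old = rng.standard_normal(n)
u_old -= D.T @ np.linalg.solve(D @ D.T, D @ u_old)

# Saddle-point system for (u_new, pbar):
#   (M + dt/2 R) u_new - dt P pbar = (M - dt/2 R) u_old,   D u_new = 0
lhs = np.block([[M + 0.5 * dt * R, -dt * P],
                [D, np.zeros((m, m))]])
rhs = np.concatenate([(M - 0.5 * dt * R) @ u_old, np.zeros(m)])
sol = np.linalg.solve(lhs, rhs)
u_new, pbar = sol[:n], sol[n:]

print(np.max(np.abs(D @ u_new)))                   # divergence-free
print(abs(u_new @ M @ u_new - u_old @ M @ u_old))  # energy conserved
```

The pressure term drops out of the energy balance precisely because both the old and new velocities lie in the null space of the divergence matrix.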
Once ${{\boldsymbol{\bar{p}}}^{\,k+1}}$ is known, it is possible to write an explicit expression for ${\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}$ as a function of ${\boldsymbol{u}^{\,k+\frac{1}{2}}}$ and ${{\boldsymbol{\bar{p}}}^{\,k+1}}$ $${\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}= \left({\boldsymbol{\mathsf{M}}} +\frac{\Delta t}{2} {\boldsymbol{\mathsf{R}}}^{k+1}\right)^{-1}\left({\boldsymbol{\mathsf{M}}} -\frac{\Delta t}{2} {\boldsymbol{\mathsf{R}}}^{k+1}\right) {\boldsymbol{u}^{\,k+\frac{1}{2}}}+ \Delta t \,\left({\boldsymbol{\mathsf{M}}} +\frac{\Delta t}{2} {\boldsymbol{\mathsf{R}}}^{k+1}\right)^{-1}{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}. \label{eq:velocity_reversibility_proof_3}$$ Introducing the compact notation $${\boldsymbol{\mathsf{B}}}_{+} := {\boldsymbol{\mathsf{M}}} +\frac{\Delta t}{2} {\boldsymbol{\mathsf{R}}}^{k+1} \quad\mathrm{and}\quad {\boldsymbol{\mathsf{B}}}_{-} := {\boldsymbol{\mathsf{M}}} -\frac{\Delta t}{2} {\boldsymbol{\mathsf{R}}}^{k+1},$$ equation becomes $${\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}= {\boldsymbol{\mathsf{B}}}_{+}^{-1}{\boldsymbol{\mathsf{B}}}_{-}\,{\boldsymbol{u}^{\,k+\frac{1}{2}}}+ \Delta t \,{\boldsymbol{\mathsf{B}}}_{+}^{-1}{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}. \label{eq:velocity_reversibility_proof_4}$$ To compute the backward time step for velocity, [Figure \[fig:time\_reversibility\]]{} right, we proceed in the same manner as for vorticity: reverse the time step by replacing $\Delta t$ by $-\Delta t$. 
We can now apply this transformation to and use to obtain an expression identical to $$\begin{dcases} {\boldsymbol{\mathsf{M}}} \frac{{\boldsymbol{u}^{\,k+\frac{1}{2}}}- {\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}}{-\Delta t} + {\boldsymbol{\mathsf{R}}}^{k+1}\,\frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{2} - {\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}=0, \\ {\boldsymbol{\mathsf{D}}}\,\left({\boldsymbol{u}^{\,k+\frac{1}{2}}}\right) = 0, \end{dcases} \label{eq::velocity_reversibility_proof_5}$$ If we assume for now that ${{\boldsymbol{\bar{p}}}^{\,k+1}}$ is known and equal to the one obtained in the forward step, it is possible to use to write an expression equivalent to but for the backward step $${\boldsymbol{u}^{\,k+\frac{1}{2}}}= {\boldsymbol{\mathsf{B}}}_{-}^{-1}{\boldsymbol{\mathsf{B}}}_{+}\,{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- \Delta t \,{\boldsymbol{\mathsf{B}}}_{-}^{-1}{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}. \label{eq:velocity_reversibility_proof_6}$$ Replacing into yields $${\boldsymbol{u}^{\,k+\frac{1}{2}}}= {\boldsymbol{\mathsf{B}}}_{-}^{-1}{\boldsymbol{\mathsf{B}}}_{+}\,\left({\boldsymbol{\mathsf{B}}}_{+}^{-1}{\boldsymbol{\mathsf{B}}}_{-}\,{\boldsymbol{u}^{\,k+\frac{1}{2}}}+ \Delta t \,{\boldsymbol{\mathsf{B}}}_{+}^{-1}{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}\right) - \Delta t \,{\boldsymbol{\mathsf{B}}}_{-}^{-1}{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}, \label{eq:velocity_reversibility_proof_7}$$ which can be simplified to $${\boldsymbol{u}^{\,k+\frac{1}{2}}}={\boldsymbol{u}^{\,k+\frac{1}{2}}}+\Delta t\,{\boldsymbol{\mathsf{B}}}_{-}^{-1}{\boldsymbol{\mathsf{P}}}\,\left({{\boldsymbol{\bar{p}}}^{\,k+1}}- {{\boldsymbol{\bar{p}}}^{\,k+1}}\right). \label{eq:velocity_reversibility_proof_7b}$$ The last term on the right is trivially equal to zero, proving the reversibility of the velocity time step. 
This term cancels out because we assumed the total pressure computed with the backward time step is equal to the one computed in the forward step. For this reason, to finalise the proof we need to show that the forward-step total pressure is indeed also the pressure solution of the backward step. We start with a modified version of $$\begin{dcases} {\boldsymbol{\mathsf{M}}} \frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{\Delta t} + {\boldsymbol{\mathsf{R}}}^{k+1}\,\frac{{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}}{2} - {\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}=0, \\ {\boldsymbol{\mathsf{D}}}\,\left({\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}\right) = 0. \end{dcases} \label{eq::velocity_reversibility_proof_8a}$$ This system of equations is equivalent to $\eqref{eq::velocity_reversibility_proof_2}$ because the divergence of the discrete velocity is zero at every time step, so that ${\boldsymbol{\mathsf{D}}}\,{\boldsymbol{u}^{\,k+\frac{1}{2}}}= 0$. Rearranging gives $$\begin{dcases} \left({\boldsymbol{\mathsf{M}}} + \frac{\Delta t}{2}{\boldsymbol{\mathsf{R}}}^{k+1}\right)\,{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- \left({\boldsymbol{\mathsf{M}}} - \frac{\Delta t}{2}{\boldsymbol{\mathsf{R}}}^{k+1}\right)\,{\boldsymbol{u}^{\,k+\frac{1}{2}}}- \Delta t\,{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}=0, \\ {\boldsymbol{\mathsf{D}}}\,\left({\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}\right) = 0. 
\end{dcases} \label{eq::velocity_reversibility_proof_8}$$ In the same way, the backward step results in the following system of equations $$\begin{dcases} -\left({\boldsymbol{\mathsf{M}}} - \frac{\Delta t}{2}{\boldsymbol{\mathsf{R}}}^{k+1}\right)\,{\boldsymbol{u}^{\,k+\frac{1}{2}}}+ \left({\boldsymbol{\mathsf{M}}} + \frac{\Delta t}{2}{\boldsymbol{\mathsf{R}}}^{k+1}\right)\,{\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}- \Delta t\,{\boldsymbol{\mathsf{P}}}\,{{\boldsymbol{\bar{p}}}^{\,k+1}}=0, \\ {\boldsymbol{\mathsf{D}}}\,\left({\boldsymbol{u}_{h}^{\,k+\frac{3}{2}}}+ {\boldsymbol{u}^{\,k+\frac{1}{2}}}\right) = 0. \end{dcases} \label{eq::velocity_reversibility_proof_9}$$ The system is the same as , therefore the pressure is necessarily the same in both cases, thus proving the reversibility of the velocity time step. Numerical test cases {#sec::numerical_test_cases} ==================== In order to verify the numerical properties of the proposed method we apply it to the solution of two well-known test cases: (i) the Taylor-Green vortex and (ii) the inviscid shear layer roll-up. The first test assesses the convergence properties of the method; the second verifies time reversibility and the conservation of mass, energy, enstrophy, and vorticity. Order of accuracy study: Taylor-Green vortex {#subsec::taylor_green_vortex} -------------------------------------------- In order to verify the accuracy of the proposed numerical method we compare the numerical results against an analytical solution of the Navier-Stokes equations. A suitable analytical solution is the Taylor-Green vortex, given by: $$\begin{dcases} u_{x} (x,y,t) = -\sin(\pi x)\cos(\pi y)\,e^{-2\pi^{2} \nu t}, \\ u_{y} (x,y,t) = \cos(\pi x)\sin(\pi y)\,e^{-2\pi^{2} \nu t}, \\ p(x,y,t) = \frac{1}{4}\,(\cos(2\pi x)+ \cos(2\pi y))\,e^{-4\pi^{2} \nu t}, \\ \omega (x,y,t) = -2\pi\sin(\pi x)\sin(\pi y)\,e^{-2\pi^{2} \nu t}. 
\end{dcases} \label{eq:taylor_green_exact_solution}$$ The solution is defined on the domain $\Omega = [0,2]\times[0,2]$, with periodic boundary conditions. The initial conditions for the velocity ${\vec{u}_{h}^{\,0}}$ and the vorticity ${{\omega_{h}}^{\,0}}$ are given by the exact solution . The kinematic viscosity is set to $\nu = 0.01$. For this study we consider the evolution of the solution from $t=0$ to $t=1$. The first study focusses on the time convergence. For this we have used 1024 triangular elements and polynomial basis functions of degree $p=4$ and time steps equal to $\Delta t = 1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}$. As can be observed in [Figure \[fig:convergence\_plots\]]{} left, this method achieves a first order convergence rate, as opposed to the second order convergence rate of an implicit formulation, see [@Sanderse2013]. Regarding the spatial convergence, we have tested the convergence rate of discretizations with basis functions of different polynomial degree, $p=1,2,4$. In order not to pollute the spatial convergence rate with the temporal integration error, we have used different time steps: $\Delta t = 2.5\times 10^{-2}$ for $p=1$, $\Delta t = 1.0\times 10^{-3}$ for $p=2$ and $\Delta t = 1.0\times 10^{-4}$ for $p=4$. As can be seen in [Figure \[fig:convergence\_plots\]]{}, this method has a convergence rate of $p$-th order for basis functions of polynomial degree $p$. ![Convergence plots of velocity error, $\|{\vec{u}}-{\vec{u}_{h}}\|_{L^{2}(\Omega)}$. Left: error in velocity as function of time step size $\Delta t$. Right: error in velocity as function of mesh size, $h$, for basis functions of polynomial degree $p=1,2,4$.[]{data-label="fig:convergence_plots"}](./figures/plots/convergence_plots/dt_convergence "fig:"){width="45.00000%"} ![Convergence plots of velocity error, $\|{\vec{u}}-{\vec{u}_{h}}\|_{L^{2}(\Omega)}$. Left: error in velocity as function of time step size $\Delta t$. 
Right: error in velocity as function of mesh size, $h$, for basis functions of polynomial degree $p=1,2,4$.[]{data-label="fig:convergence_plots"}](./figures/plots/convergence_plots/h_convergence "fig:"){width="45.00000%"} Time reversibility and mass, energy, enstrophy, and vorticity conservation: inviscid shear layer roll-up {#subsec::inviscid_shear_layer_roll_up} -------------------------------------------------------------------------------------------------------- The second group of tests focusses on the conservation properties of the proposed numerical method. For this set of tests we consider inviscid flow $\nu=0$. We consider here the simulation of the roll-up of a shear layer, see e.g. [@Minion1997; @Knikker2009; @Hokpunna2010; @Sanderse2013]. This solution is particularly challenging because during the evolution large vorticity gradients develop. Several methods are known to *blow up*, e.g. [@Minion1997]. We consider on the periodic domain $\Omega = [0,2\pi]\times[0,2\pi]$ the following initial conditions for velocity ${\vec{u}}$ $$u_{x}(x,y) = \begin{dcases} \tanh\left(\frac{y-\frac{\pi}{2}}{\delta}\right), & y\leq\pi, \\ \tanh\left(\frac{\frac{3\pi}{2}-y}{\delta}\right), & y>\pi, \end{dcases} \qquad\qquad\qquad u_{y}(x,y) = \epsilon\sin(x),$$ and vorticity ${\omega}$ $$\omega(x,y) = \begin{dcases} \frac{1}{\delta}\,\text{sech}^2\left(\frac{y-\frac{\pi }{2}}{\delta }\right), & y\leq\pi, \\ -\frac{1}{\delta}\,\text{sech}^2\left(\frac{\frac{3 \pi }{2}-y}{\delta }\right), & y>\pi, \end{dcases}$$ with $\delta = \frac{\pi}{15}$ and $\epsilon=0.05$ as in [@Knikker2009; @Sanderse2013]. The small perturbation $\epsilon$ in the $y$-component of the velocity field will trigger the roll-up of the shear layer. We show the contour lines of vorticity at $t=4$ and $t=8$, [Figure \[fig:vorticity\_evolution\_comparison\_plots\_contour\_1\]]{} and [Figure \[fig:vorticity\_evolution\_comparison\_plots\_contour\_2\]]{} respectively. 
The plots on the left side of [Figure \[fig:vorticity\_evolution\_comparison\_plots\_contour\_1\]]{} and [Figure \[fig:vorticity\_evolution\_comparison\_plots\_contour\_2\]]{} correspond to a spatial discretization of 6400 triangular elements of polynomial degree $p=1$ and time step size $\Delta t = 0.1$. On the right we present the results for a spatial discretization of 6400 triangular elements of polynomial degree $p=4$ and time step size $\Delta t = 0.01$. An example mesh with 1600 elements is shown in [Figure \[fig:mesh\]]{}. ![Example mesh with 1600 triangular elements.[]{data-label="fig:mesh"}](./figures/plots/mesh/plots/mesh.pdf){width="40.00000%"} Although it is possible to observe oscillations in the vorticity plots, especially for $p=1$, none of the simulations *blows up*, unlike some of the methods reported in [@Minion1997]. These oscillations necessarily appear whenever the flow generates structures at a scale smaller than the resolution of the mesh. Since energy and enstrophy are conserved, there is no possibility for the numerical simulation to dissipate the small scale information. This is a feature observed in all conserving inviscid solvers without a small scale model. ![Vorticity field of inviscid shear layer roll-up at $t=4$ obtained with 6400 triangular finite elements, contour lines ($\pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6$) as in [@Knikker2009]. Left: solution with basis functions of polynomial degree $p=1$ and time step $\Delta t = 0.1$. Right: solution with basis functions of polynomial degree $p=4$ and time step $\Delta t = 0.01$.[]{data-label="fig:vorticity_evolution_comparison_plots_contour_1"}](./figures/plots/shear_layer_evolution/plots/r600/contour/contour_without_w_0/pdf/shear_layer_vorticity_contour_p_1_t_4_0.pdf "fig:"){width="45.00000%"} ![Vorticity field of inviscid shear layer roll-up at $t=4$ obtained with 6400 triangular finite elements, contour lines ($\pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6$) as in [@Knikker2009]. 
Left: solution with basis functions of polynomial degree $p=1$ and time step $\Delta t = 0.1$. Right: solution with basis functions of polynomial degree $p=4$ and time step $\Delta t = 0.01$.[]{data-label="fig:vorticity_evolution_comparison_plots_contour_1"}](./figures/plots/shear_layer_evolution/plots/r600/contour/contour_without_w_0/pdf/shear_layer_vorticity_contour_p_4_t_4_0.pdf "fig:"){width="45.00000%"} ![Vorticity field of inviscid shear layer roll-up at $t=8$ obtained with 6400 triangular finite elements, contour lines ($\pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6$) as in [@Knikker2009]. Left: solution with basis functions of polynomial degree $p=1$ and time step $\Delta t = 0.1$. Right: solution with basis functions of polynomial degree $p=4$ and time step $\Delta t = 0.01$.[]{data-label="fig:vorticity_evolution_comparison_plots_contour_2"}](./figures/plots/shear_layer_evolution/plots/r600/contour/contour_without_w_0/pdf/shear_layer_vorticity_contour_p_1_t_8_0.pdf "fig:"){width="45.00000%"} ![Vorticity field of inviscid shear layer roll-up at $t=8$ obtained with 6400 triangular finite elements, contour lines ($\pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6$) as in [@Knikker2009]. Left: solution with basis functions of polynomial degree $p=1$ and time step $\Delta t = 0.1$. Right: solution with basis functions of polynomial degree $p=4$ and time step $\Delta t = 0.01$.[]{data-label="fig:vorticity_evolution_comparison_plots_contour_2"}](./figures/plots/shear_layer_evolution/plots/r600/contour/contour_without_w_0/pdf/shear_layer_vorticity_contour_p_4_t_8_0.pdf "fig:"){width="45.00000%"} In order to assess the conservation properties of the proposed method we computed the evolution of the shear-layer problem from $t=0$ to $t=16$, using different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. We have used a coarse grid with 6400 triangular finite elements and basis functions of polynomial degree $p=1$ for the spatial discretization. 
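The conservation diagnostics reported in these figures are simple quadratic forms of the coefficient vectors with the corresponding mass matrices. A minimal sketch, with helper names that are ours and not from any library:

```python
import numpy as np

def kinetic_energy(u, M):
    """Discrete kinetic energy K_h = 1/2 <u_h, u_h> from the coefficient
    vector u and the velocity mass matrix M; the enstrophy E_h is the same
    quadratic form with the vorticity coefficients and mass matrix."""
    return 0.5 * u @ (M @ u)

def relative_conservation_error(q0, qt):
    """Relative error (q(0) - q(t)) / q(0), as plotted in the figures."""
    return (q0 - qt) / q0

# Toy check with the identity as mass matrix.
M = np.eye(3)
u = np.array([1.0, 2.0, 2.0])
K = kinetic_energy(u, M)
print(K)                                   # 4.5
print(relative_conservation_error(K, K))   # 0.0
```

Monitoring these scalars each time step costs only a matrix-vector product and makes conservation failures immediately visible.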
In [Figure \[fig:energy\_enstrophy\_conservation\_plots\]]{} left we show the evolution of the kinetic energy error with respect to the initial kinetic energy, $\frac{{\mathcal{K}}(0)-{\mathcal{K}_{h}}(t)}{{\mathcal{K}}(0)}$. This result confirms conservation of kinetic energy, as proven in [Section \[sec::kinetic\_energy\_conservation\]]{}. A similar study was performed for enstrophy in [Figure \[fig:energy\_enstrophy\_conservation\_plots\]]{} right. The observed error is of the order of machine precision, verifying conservation of enstrophy, as proven in [Section \[sec::enstrophy\_conservation\]]{}. The evolution of the error of total vorticity is shown in [Figure \[fig:vorticity\_divergence\_conservation\_plots\]]{} left. The total vorticity error is likewise of the order of machine precision. In [Figure \[fig:vorticity\_divergence\_conservation\_plots\]]{} right we show the divergence of the velocity field. As can be seen, this discretization results in a velocity field that is divergence free. ![Simulation of shear-layer problem with 6400 triangular finite elements, basis functions of polynomial degree $p=1$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: kinetic energy error with respect to initial kinetic energy, $\frac{{\mathcal{K}}(0)-{\mathcal{K}_{h}}(t)}{{\mathcal{K}}(0)}$. Right: enstrophy error with respect to initial enstrophy, $\frac{{\mathcal{E}}(0)-{\mathcal{E}_{h}}(t)}{{\mathcal{E}}(0)}$.[]{data-label="fig:energy_enstrophy_conservation_plots"}](./figures/plots/conservation_meev_plots/plots/energy_conservation_1 "fig:"){width="40.00000%"} ![Simulation of shear-layer problem with 6400 triangular finite elements, basis functions of polynomial degree $p=1$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: kinetic energy error with respect to initial kinetic energy, $\frac{{\mathcal{K}}(0)-{\mathcal{K}_{h}}(t)}{{\mathcal{K}}(0)}$. 
Right: enstrophy error with respect to initial enstrophy, $\frac{{\mathcal{E}}(0)-{\mathcal{E}_{h}}(t)}{{\mathcal{E}}(0)}$.[]{data-label="fig:energy_enstrophy_conservation_plots"}](./figures/plots/conservation_meev_plots/plots/enstrophy_conservation_1 "fig:"){width="40.00000%"} ![Simulation of shear-layer problem with 6400 triangular finite elements, basis functions of polynomial degree $p=1$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: total vorticity error with respect to initial total vorticity, ${\mathcal{W}}(0)-{\mathcal{W}_{h}}(t)$. Right: evolution of the divergence of the velocity field.[]{data-label="fig:vorticity_divergence_conservation_plots"}](./figures/plots/conservation_meev_plots/plots/vorticity_conservation_1 "fig:"){width="40.00000%"} ![Simulation of shear-layer problem with 6400 triangular finite elements, basis functions of polynomial degree $p=1$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: total vorticity error with respect to initial total vorticity, ${\mathcal{W}}(0)-{\mathcal{W}_{h}}(t)$. Right: evolution of the divergence of the velocity field.[]{data-label="fig:vorticity_divergence_conservation_plots"}](./figures/plots/conservation_meev_plots/plots/mass_conservation_1 "fig:"){width="40.00000%"} In order to assess the conservation properties of the proposed method for long time simulations we computed the evolution of the shear-layer problem from $t=0$ to $t=128$, using different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. We have used a very coarse grid with 100 triangular finite elements and basis functions of polynomial degree $p=4$ for the spatial discretization. In [Figure \[fig:energy\_enstrophy\_conservation\_plots\_long\_time\]]{} left we show the evolution of the kinetic energy error with respect to the initial kinetic energy, $\frac{{\mathcal{K}}(0)-{\mathcal{K}_{h}}(t)}{{\mathcal{K}}(0)}$. 
This result confirms conservation of kinetic energy, as proven in [Section \[sec::kinetic\_energy\_conservation\]]{}. The same study was performed for enstrophy in [Figure \[fig:energy\_enstrophy\_conservation\_plots\_long\_time\]]{} (right). The error observed is very small, verifying conservation of enstrophy, as proven in [Section \[sec::enstrophy\_conservation\]]{}. The evolution of the error in total vorticity is shown in [Figure \[fig:vorticity\_divergence\_conservation\_plots\_long\_time\]]{} (left); it shows that the method conserves total vorticity in long time simulations. In [Figure \[fig:vorticity\_divergence\_conservation\_plots\_long\_time\]]{} (right) we show the divergence of the velocity field. As can be seen, this discretization results in a velocity field that is divergence free. Similar results are obtained for $p=1$.

![Simulation of shear-layer problem with 100 triangular finite elements, basis functions of polynomial degree $p=4$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: kinetic energy error with respect to initial kinetic energy, $\frac{{\mathcal{K}}(0)-{\mathcal{K}_{h}}(t)}{{\mathcal{K}}(0)}$. Right: enstrophy error with respect to initial enstrophy, $\frac{{\mathcal{E}}(0)-{\mathcal{E}_{h}}(t)}{{\mathcal{E}}(0)}$.[]{data-label="fig:energy_enstrophy_conservation_plots_long_time"}](./figures/plots/conservation_meev_plots/plots/long_time_simulation_p_4/energy_conservation_1 "fig:"){width="40.00000%"} ![](./figures/plots/conservation_meev_plots/plots/long_time_simulation_p_4/enstrophy_conservation_1 "fig:"){width="40.00000%"}

![Simulation of shear-layer problem with 100 triangular finite elements, basis functions of polynomial degree $p=4$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. Left: total vorticity error with respect to initial total vorticity, ${\mathcal{W}}(0)-{\mathcal{W}_{h}}(t)$. Right: evolution of the divergence of the velocity field.[]{data-label="fig:vorticity_divergence_conservation_plots_long_time"}](./figures/plots/conservation_meev_plots/plots/long_time_simulation_p_4/vorticity_conservation_1 "fig:"){width="40.00000%"} ![](./figures/plots/conservation_meev_plots/plots/long_time_simulation_p_4/mass_conservation_1 "fig:"){width="40.00000%"}

The final test addresses time reversibility. To investigate this property of the solver, we let the flow evolve from $t=0$ to $t=8$, then reversed the time evolution and evolved for the same duration, corresponding to the evolution from $t=8$ back to $t=0$. We performed this study with 6400 triangular finite elements of polynomial degree $p=1$ and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$. In [Figure \[fig:time\_reversibility\_plots\]]{} we show the error between the initial vorticity field and the final vorticity field.
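The forward/backward test itself is scheme-agnostic. A sketch of the harness, with a toy reversible update standing in for the actual solver (all names here are illustrative; the paper's scheme is not reproduced):

```python
import math

def reversibility_error(step, w0, dt, n_steps):
    """Evolve n_steps forward with time step dt, then n_steps with -dt,
    and return the max pointwise deviation from the initial state w0."""
    w = list(w0)
    for _ in range(n_steps):
        w = step(w, dt)
    for _ in range(n_steps):
        w = step(w, -dt)
    return max(abs(a - b) for a, b in zip(w, w0))

def rotate(w, dt):
    """Toy time-reversible update: rotate the pair (w[0], w[1]) by angle dt."""
    c, s = math.cos(dt), math.sin(dt)
    return [c * w[0] - s * w[1], s * w[0] + c * w[1]]

err = reversibility_error(rotate, [1.0, 0.0], 0.125, 64)
print(err)  # round-off level, analogous to the vorticity errors reported here
```

For a time-reversible scheme the backward pass exactly undoes the forward pass, so the returned error sits at machine precision, which is what the vorticity-error plots verify for the proposed method.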
As can be seen, the errors are of the order of $10^{-12}$, demonstrating the reversibility of the method up to machine accuracy.

![Error between the initial vorticity field and the vorticity field computed by evolving the inviscid Navier-Stokes equations to $t=8$ and then reversing the time step until $t=0$ is reached again. The results shown correspond to a mesh of 6400 triangular finite elements, polynomial degree $p=1$, and different time steps $\Delta t = 1,\frac{1}{2}, \frac{1}{4},\frac{1}{8}$ (starting from the bottom right and going clockwise).[]{data-label="fig:time_reversibility_plots"}](./figures/plots/reversibility_plots/plots/shear_layer_reversibility_pcolor_dt_0_125.png "fig:"){width="45.00000%"}
![](./figures/plots/reversibility_plots/plots/shear_layer_reversibility_pcolor_dt_0_250.png "fig:"){width="45.00000%"} ![](./figures/plots/reversibility_plots/plots/shear_layer_reversibility_pcolor_dt_0_500.png "fig:"){width="45.00000%"} ![](./figures/plots/reversibility_plots/plots/shear_layer_reversibility_pcolor_dt_1_000.png "fig:"){width="45.00000%"}

Conclusions {#sec::conclusions}
===========

This paper presents a new arbitrary order finite element solver for the Navier-Stokes equations.
The advantages of this method are: (i) the high order character of the spatial discretization, (ii) geometric flexibility, allowing for unstructured triangular grids, (iii) exact preservation of the conservation laws for mass, kinetic energy, enstrophy and vorticity, and (iv) time reversibility. The construction of a numerical flow solver with these properties leads to a very robust method capable of performing severely under-resolved simulations without the characteristic blow-up of standard discretizations. The convergence properties of the proposed method were demonstrated on the simpler Taylor-Green analytical solution, and its robustness on the more challenging shear layer roll-up test case. Conservation of kinetic energy, enstrophy and vorticity up to machine accuracy was shown for this test case, even for low order approximations. Nevertheless, the proposed method still allows further improvement, mainly in terms of time integration. One of the advantages of the method is that the time integration avoids the solution of a fully coupled nonlinear system of equations. Instead, two quasi-linear systems of equations are solved, without the need for expensive iterations at each time step. The downside of this approach is that, for now, the time integration convergence rate is limited to first order. In the future we intend to compare this approach to a fully implicit formulation using higher order Gauss integration. Another aspect currently being investigated is the treatment of solid boundaries.
Month: April 2018

Set deep in the Valley of the Sun, the lush and sprawling megalopolis has a problem: the rivers that supply it are running dry.

Jennifer Afshar and her husband, John, pushed their bicycles across the grass and paused to savor the sunshine, while their two boys went to look at the duck pond. Other kids were playing football or doing tricks in the skate park, and families picnicked on blankets or fired up a barbecue across from the swimming pool.

"We moved here from Los Angeles, to get away from the rising cost of living and the traffic," said Jennifer. "When we saw this park, we thought they were punking us it was so good. There's low crime, the homeowners association takes great care of the grass and trees we like it."

The Afshars live in the squeaky-clean suburb of Anthem, Arizona. It's part of a vast conurbation of satellite towns surrounding Phoenix, and is a classic example of why this metropolitan or "megapolitan" area is tempting fate.

Twenty years ago, Anthem sprang out of virgin desert, a community masterplanned from scratch with schools, shops, restaurants and comfortable homes many behind high walls and electronic gates and its own country club and golf course. It now has a population of 30,000.

To look around Anthem, you would never imagine there was such a thing as a water shortage. But the lush vegetation and ponds do not occur naturally. Phoenix gets less than eight inches of rainfall each year; most of the water supply for central and southern Arizona is pumped from Lake Mead, fed by the Colorado river over 300 miles away. Anthem's private developer paid a local Native American tribe to lease some of its historic water rights, and pipes its water from the nearby Lake Pleasant reservoir, also filled by the Colorado.

That river is drying up. This winter, snowfall in the Rocky Mountains, which feed the Colorado, was 70% lower than average.
Last month, the American government estimated that two-thirds of Arizona is currently facing severe to extreme drought; last summer 50 flights were grounded at Phoenix airport because the heat, which reached 47C (116F), made the air too thin to take off safely. The heat island effect keeps temperatures in Phoenix above 37C (98F) at night in summer.

Phoenix and its surrounding region is known as the Valley of the Sun, and downtown Phoenix, which in 2017 passed Philadelphia as America's fifth-largest city, is easily walkable, with restaurants, bars and an evening buzz. But it is a modern temple to towering concrete, and gives way to endless sprawl that stretches up to 35 miles away to communities like Anthem.

The region is still growing and is dangerously overstretched, experts warn. "There are plans for substantial further growth and there just isn't the water to support that," says climate researcher Jonathan Overpeck, who co-authored a 2017 report that linked declining flows in the Colorado river to climate change. "The Phoenix metro area is on the cusp of being dangerously overextended. It's the urban bullseye for global warming in north America."

One of those plans is Bill Gates's brand-new smart city. The Microsoft founder recently invested $80m (£57m) in a development firm that aims to construct 80,000 new homes on undeveloped land west of Phoenix, and a brand-new freeway all the way to Las Vegas.

Despite year-round sunshine, Arizona extracts only 2-5% of its energy from solar power. Photograph: Deirdre Hamill/AP

Another firm wants to build a master-planned community, like Anthem, south of Tucson, modelled after the hilltop towns of Tuscany. It envisages five golf courses, a vineyard, parks, lakes and 28,000 homes. The promotional video does not contain any detailed information on where all the water will come from, but boasts: "This is the American dream: whatever it is you want, you can have."
What these cities need is water. The Phoenix area draws from groundwater, from large rivers to the east, and from the mighty Colorado. The Hoover Dam holds much of the Colorado's flow in the immense Lake Mead reservoir, but the river itself is sorely depleted. That water has now dropped to within a few feet of levels that California, Nevada and Arizona, which all rely on it, count as official shortages. Lake Powell, the reservoir at the other end of the Grand Canyon, is similarly at around half its historic levels.

And yet, despite the federal Bureau of Reclamation reporting in 2012 that droughts of five or more years would happen every decade over the next 50 years, greater Phoenix has not enacted any water restrictions. Nor has the state government finalised its official drought contingency plan.

"There's a tremendous fight over all this," says Jim Holway, vice president of the Central Arizona Water Conservation District. "Climate change is having an impact but that's a contentious, unsettled matter in the western US."

Sprawl is the norm

Phoenix from above is a blue mosaic of back-garden pools.

As a hummingbird darted to a shrub near his swimming pool, Holway pointed out that the Valley of the Sun may have to choose between agriculture and further urbanisation. Twenty years ago, when he moved here, his home looked out on to citrus groves and flower farms. Now the valley is dominated by mega-farms growing winter vegetables for export and thirsty alfalfa for the cattle feed market. "Do we want to grow houses or crops?" he asks.

The conversation in Arizona even turns periodically to the far-fetched ideas of piping water from the Great Lakes 1,700 miles away, or building expensive desalination plants on the Pacific Ocean, instead of imposing water restrictions.
Greater Phoenix is good at recycling waste water, but most of it is used for cooling the Palo Verde nuclear power plant to the west of the city, the largest in the US and the only one not sited on its own body of water. Conversely, the water district is Arizona's biggest energy consumer, because it has to pump the water uphill from the Colorado along miles of canals into Phoenix and Tucson. And most of that electricity comes from the heavily polluting, coal-fired Navajo Generating Station in the north of the state. Meanwhile, despite experiencing more than 330 days of sunshine a year, Holway estimates that Arizona receives only 2-5% of its energy from solar power.

The Navajo Generating Station in north Arizona. Photograph: Alamy

Phoenix is extreme but not alone. "Most American cities use more resources than needed, and that's the way they were designed," says Sandy Bahr, chair of the Arizona chapter of the Sierra Club. "There is overconsumption and a disposable mentality. Our garbage is taken to remote landfill sites, the cities are designed for cars, and sprawl is the norm."

In his 2011 book Bird on Fire, the New York University sociologist Andrew Ross labelled Phoenix the least sustainable city in the world. He says he stands by his assessment, and warns of an "eco-apartheid", whereby low-income districts on the more polluted south side of the Salt River (which once flowed vigorously through the city and is now a trickle) are less able to protect themselves from the heat and drought than wealthier citizens.

Q&A: What is the Overstretched Cities series?

Overstretched Cities is an in-depth look at how urbanisation has seen cities all over the world balloon in size, putting new strain on infrastructure and natural resources, but in some cases offering hope for a more sustainable relationship with the natural world.
Over the course of the week, Guardian Cities writers will look beyond the numbers to tell the stories of people affected by the 21st century's population and consumption boom. From sprawling cities in the developed world that are consuming more than their fair share of energy and water, to less wealthy cities unequipped to handle the rapid increase in geographic and population size, we will examine this global phenomenon by talking to the people who are trying to mitigate its worst effects, and shine a light on the upside of the population boom by exploring the social and environmental advantages of urban density.

"There's a stark inequality," he says. "The gated enclaves, with their hybrid vehicles, their solar panels and other green gadgets; and the people on the other side struggling to breathe clean air and drink uncontaminated water. It's a prediction of where the world is headed."

Or, he says, you can just look at the past. The Hohokam people were the original irrigators of the valley that later became home to Phoenix. Their civilization, numbering an estimated 40,000, collapsed in the 15th century for reasons believed to relate to conflicts over scarce water.

Bangalore is popularly known for its global IT firms and the job opportunities at some of the big companies, but beyond that, it boasts some of the most stylish properties. With cheerful weather and a stylish lifestyle, people are now more inclined towards owning property in Bangalore. The city is now witnessing serious change in terms of new housing activity, with developers trying their best to deliver the right housing solutions. There are many housing projects aiming to create a big change; some of them are Golden Gate, Shriram Properties, Puravankara, etc.
Kinds of property you will find

Decide what kind of property you are looking for, whether it's an apartment, an independent villa or a simple one-room flat, whatever your pick is, and then narrow down your selection based on location as well to get your ideal property. If you have budget constraints, you don't need to worry; you will find property within your reach. But it is to be expected that, if you wish to stay in a prime part of the city, property will cost more compared to property in the outer parts. Some of the commercial areas that are still favourites for property investment are Brigade Road, Cunningham Road, M.G. Road, etc. If you are searching for properties here, make sure you have extra time to find the right kind of deal for you, because it will take a lot of effort to get a house in these areas. Some of the affluent areas that offer expensive homes are Sadashiv Nagar, Off Palace Road, etc.

Here are 5 locales in Bangalore which have acquired great importance for property:

* Indira Nagar is a well-planned area which is connected to the entire city, with great housing facilities. It is also close to the Domlur airport; in addition, there are hospitals, restaurants and educational institutions. In short, it boasts all the necessary and essential features that a well-planned locality must have, along with superb housing.
* Banashankari is another area which is closely connected to the railway station (15 km) and to the airport (25 km). This area offers housing deals within a medium budget while at the same time providing the necessary facilities.
* Malleshwaram is considered to be the greenest neighbourhood, with natural appeal all around and lush greenery to be found as well. It is one of the important parts of the Garden City, offering some of the best properties for buyers.
* Koramangala is known for having many of the top business firms and residences, along with popular educational institutes, shopping malls and many more amenities, making it ideal for anybody to settle down.
* Marathahalli is a spot close to Sarjapur and Whitefield, also connected with other parts of the city. This area offers medium-budget housing facilities.

These are the 5 top places in Bangalore offering great housing facilities and properties.

There is no fixed measure for evaluating the value of a freehold property. This is because estimating a freehold's worth is not an exact science. Nonetheless, you can follow certain guidelines on what you need to take into consideration when valuing a freehold, which are produced by the advisory services that give free advice to leaseholders. You must also take these three factors into consideration:

1. The current value of the property
2. The annual ground rent
3. The number of years currently left on the lease

Also evaluate the expected percentage increase in property price that results from extending the lease, together with forecast long-term interest rates and inflation rates. Take help from an expert valuer rather than trying to work out a figure all by yourself to present before the freeholder. A professional valuer will be able to give you the best advice, which will enable you to make a realistic offer. You will find expert valuers online. They will help you with the entire process of negotiating and buying the freehold.

To reflect the benefit of the freehold, most surveyors add a little extra to a property's value. This is done after comparing it with a similar property with the same number of years on the lease but no freehold. First, approach your freeholder informally, before you serve him with an initial notice. This document should include your preliminary offer for the freehold, which starts off the legal process of buying it.
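To make the ground-rent factor above concrete: one standard component of a freehold valuation is the present value of the ground rent the freeholder would otherwise go on collecting. A simplified sketch (illustrative figures and a flat discount rate; real valuations also price the reversion and any marriage value, so this is not the statutory method):

```python
def ground_rent_present_value(annual_rent, years_left, discount_rate):
    """Present value of the remaining ground-rent stream, treated as a level
    annuity discounted once a year at a flat rate. Illustrative only: real
    freehold valuations also include reversion and marriage value."""
    r = discount_rate
    return annual_rent * (1 - (1 + r) ** -years_left) / r

# e.g. 250/year ground rent, 80 years left on the lease, 6% discount rate:
pv = ground_rent_present_value(250, 80, 0.06)
print(round(pv))  # a ballpark for one component of the freeholder's price
```

This is only one ingredient of the figure an expert valuer would produce, which is why the article recommends not working out the number yourself.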
A word of caution: never serve an initial notice without procuring an expert valuation. If you make the wrong valuation in the initial notice, you won't be able to take back the offer. After the initial notice, wait for the freeholder to reply to it with a counter offer by a date that you have given. The freeholder must be allowed at least two months from the date the initial notice is served. If the freeholder does not serve his counter notice within this period, the leaseholders can take matters into their own hands. They can apply for a vesting order at the courts. It is then up to the court to transfer the freehold to the leaseholders. So freeholders should respond on time to the initial notice, for their own benefit. Buying a share of freehold will bring little profit if you already have a lease of decent length. You would still have to pay the same legal expenses as someone with a short lease, but would see little uplift in the value of the property.

So, you have decided to part with your house, and are wondering how to get a deal good enough to justify having held it as an investment property up to now. Before you put up a notice for sale, or start passing the word to your near and dear, consider the fact that the real estate market in some parts of the country has been sluggish. Hence, it would be useful to put in a little bit of effort to make your property look a lot more attractive to potential buyers. Here are some professional tips that will help enhance the resale value of your house by tens of thousands, if not a few lakhs.

Pep up the exteriors

As soon as a house is bought, homeowners spend time, money and energy designing the interiors to their whims and imagination. What they fail to see is that the exteriors play a greater role than the interiors in creating an impression in the mind of a potential buyer.
It is the exterior that is seen first, before buyers take a walk-through of the interiors. Hence, pay attention to sprucing up your exteriors in equal measure to the interiors to get a better resale value.

Creating added space

An additional room is fine, but what if it serves no purpose? Furthermore, what if that additional room does not fit accurately into the ergonomic layout of the house? Instead, think of ways to increase the usable space within the home. There are plenty of architecture firms with interior designers and vastu consultants who help adjust the physical features of any home to make it appear spacious and airy. A spacious house will bring a higher resale price than one which boasts of a tiny and ill-fitting extra room.

Setting right minor repairs

Like drops of water that collect together to form a messy puddle, a number of unattended minor repairs can wash away a significant portion of your property's resale value. A prospective buyer will be most interested in knowing the current physical condition and the expected longevity of the house before making a final decision. The sight of peeling walls, falling plaster and plumbing problems can definitely hurt the property's sale worthiness. Hence, make sure all minor repairs are attended to on a regular basis.

Keep a cushion for price negotiation

Indians are relentless negotiators. We love to bargain for the best deal in every situation. Especially when it comes to purchasing a home, expect the negotiation process to be fiercer than you can imagine. Hence, make sure you price your property with some cushion for reduction during the negotiation process. You might want to take into consideration the current market rate of similar properties and how those deals have closed before setting the price or agreeing to the price offered by the buyer.

The resale value of a property is highly dependent on various factors.
Location, floor space, number of rooms and furnishings all play a major role in determining the resale price. However, other factors like well-maintained exteriors, perfectly working plumbing and sanitation, added space for storage, etc. can help push the resale value to a higher figure.

But did you know the average cost of a home in Los Angeles ($658,000) is more than double the national average for houses of the same size? Real estate professionals say that the gap between the cost of living in LA and the rest of the country will continue to grow, all the way through 2018. When gainfully employed, educated people with salaries hovering around $250,000 a year are looking to move to nearby cities due to the inability to find a home within their budget that meets their standard of living, it is clear that California is pricing out its own residents. And the truth is, there isn't really much anyone can do about it.

The Cause

While no single issue is exclusively to blame for the unbelievably inflated home prices in Los Angeles, the general answer is that there are not enough houses to meet the demand, and in addition to that, the cost to build more housing keeps developers away. It is a vicious economic cycle: people want housing; construction companies can't fill that demand because the cost to them is too high; this pushes money and jobs out of the metropolitan area as builders, investors, and developers look to the suburbs to build; so the demand grows, and the cost grows alongside it. What is even more surprising is that the positive growth in job creation and the rest of the economy is actually putting more of a strain on housing prices.
Los Angeles has added tens of thousands of jobs in almost all sectors of the market, from lower-level entry jobs all the way to openings for new senior executives and CEOs, and as you can expect, that means more people looking to move to the city to fill the openings those jobs have created; thus adding to a demand for housing that already seems insatiable in Los Angeles.

The Proposed Solutions...

The answer seems simple, right? Just build more homes. Regrettably, nothing is ever as easy as it seems. Up until recently there was a push among lawmakers to, at the very least, keep the cost of housing under control through legislation. The solution seemed concentrated on reducing the cost for contractors to build housing and new developments. Prior to this year, legislation seemed to offer huge tax incentives to developers willing and able to quickly construct brand-new multi-family units, particularly in urban environments, especially to those builders who made such new developments more eco-friendly and energy-efficient. Many state lawmakers have focused energy and scrutiny on low-income housing subsidies. The legislative analyst's report estimated that building inexpensive homes for the 1.7 million low-income households in California that now spend half their wages on housing would cost just as much to finance per year as the state's spending on Medi-Cal.

...And why they have failed

As much as state legislators may want to deal with the devastating housing deficit in LA, there is a huge problem: namely, that most decisions involving new developments and building fall into the laps of city and local government. The state government's hands are tied. Regrettably, the smaller governments tend to have a much narrower view of developments in the situation, seeking to raise gains and find solutions for their own city, without much further consideration of the surrounding areas.
Additionally, the main tool that state legislators could use to quickly increase housing is in direct opposition to a myriad of business and environmental interests. CEQA (California's governing environmental law) in many ways thwarts the building of new housing developments at any rate that would make a dent in the housing shortage. So the question arises… what can we do? Should we relax environmental protection laws to lower housing costs? It is a question that has to be addressed, but with so many political pressures and interests at play, most lawmakers won't touch it. And SoCal residents and homeowners associations aren't taking it all very well. Many of these local governing bodies are in stark opposition to speedy development of housing – because that means their neighborhoods would have to face the dreaded "D" word… Density. Push-back from neighborhoods and suburban areas is obvious – no one wants to be boxed in, especially in the areas most affected by the housing shortage (affluent coastal communities). So it appears as though lawmakers are blocked on all fronts.

Have Lawmakers given up?

This year, it seems as if state lawmakers have given up on dealing with the rising cost of housing. Little to no new solutions have been proposed, and those that have are not being passed and put into place. The state is at a stand-still and lawmakers seem to take the "I guess we'll just have to wait and see what happens" approach. As legislation passes to increase the minimum wage to $15 an hour, many people believe that this increase will ease the burden on low- and middle-income households, allow for financial growth, and eventually lead to a reduction in the housing shortage. Economists are concerned that if lawmakers don't fix the housing supply problems, many of the state's efforts to improve the lives of low-income residents will falter. 
Many legislators cited high housing costs as a reason to boost California's minimum wage to $15 per hour over the next six years, but "'unless something's done to curb housing costs, much of that pay increase could be eaten up by higher rents,' Thornberg said." (LA Times)

The Verdict?

SoCal is in a pickle, and with legislators openly admitting that the housing problem is not a priority for this year, the residents will have to pay the premium. Housing prices in Los Angeles will continue to rise, unchecked, until newer, bigger ideas come into place from those capable of putting a stop to the vicious cycle of demand, shortage of supply, and the tremendous influence of special interests.
{ "pile_set_name": "Pile-CC" }
The effect of areca nut on salivary and selected oral microorganisms. The aim of this study was to investigate the effect on the growth of salivary and selected oral microorganisms of areca nut, aqueous extracts of the nut, its major alkaloid arecoline and the components tannic acid and catechin of its tannin fraction. The antibacterial properties of the above were tested on Streptococcus mutans, Streptococcus salivarius, Candida albicans and Fusobacterium nucleatum and, as a control, Staphylococcus aureus. This was followed by investigating its effect on salivary organisms cultured from the saliva after chewing boiled areca nut. Extracts inhibited the growth of the selected organisms in a concentration dependent manner, baked and boiled nuts being significantly more potent than raw nut. Growth of C. albicans was the least affected by the nut extracts. Tannic acid was strongly antibacterial but not catechin or arecoline. No antibacterial effect could be demonstrated on salivary organisms after chewing the nut for 5 minutes but exposure of saliva to the cud for 1 hour caused a significant depression of bacterial growth. It is concluded that the hydrolysable tannins in the tannin fraction, which include tannic acid, are responsible for the antibacterial properties of the nut and that prolonged intraoral exposure to the nut can suppress bacteria in the mouth.
{ "pile_set_name": "PubMed Abstracts" }
14 Ill. App.2d 322 (1957) 144 N.E.2d 757 Lois Rossten, Appellant, v. Frank Wolf and Anna Wolf, Appellees. Gen. No. 47,115. Illinois Appellate Court — First District, First Division. June 19, 1957. Released for publication October 4, 1957. *323 George A. Behling Jr., of Chicago, for appellant. Edward L. Kerpec, for appellees. JUDGE McCORMICK delivered the opinion of the court. *324 The plaintiff, Lois Rossten, takes this appeal from an order of the Superior Court of Cook county vacating a default judgment entered against defendants, Frank and Anna Wolf, and allowing defendants to appear and plead to the complaint. The plaintiff filed a suit against the defendants for damages for personal injuries suffered by her as a result of the alleged negligent operation of a motor vehicle by Anna Wolf while she was acting as agent for Frank Wolf. Summons was issued, and the return of the sheriff of Cook county was that he had served both Anna and Frank Wolf on March 7, 1956 by leaving a copy of the summons with a Mrs. B. Coddington, daughter, at their usual place of abode and by mailing copies of the same to the defendants. On May 3, 1956 both defendants were defaulted for failure to appear. On May 28, 1956 the plaintiff proved up her case and judgment was entered against both defendants in the sum of $2,000 with a special finding of willful and wanton negligence against the defendant Anna Wolf. On August 7, 1956 the defendants presented a motion to the court to set aside the judgment theretofore entered, and to grant leave to the defendants to file their appearance and answer to the complaint within a time to be fixed by the court. In support of the motion the defendants presented affidavits of Betty Coddington, Frank Wolf and Anna Wolf. 
The affidavit of Betty Coddington alleged that she was a resident of Battle Creek, Indiana; that she is the daughter of Frank Wolf and the sister of defendant Anna Wolf; that on March 7, 1956 she was visiting the home of her parents and sister at 10555 South 81st Avenue, Palos Park, Illinois; that at no time was she a member of the household of Frank Wolf and Anna Wolf; that the sheriff served her with summons and she retained the said summons until on or about July 4, 1956; that at no time had she delivered or conveyed the contents of the summons to her father or her sister prior to July 4th. The *325 affidavits of both Frank and Anna Wolf alleged that at no time had they or either of them been served with summons in the said suit either by personal, substituted or mailed service, and they both set up a meritorious defense. No counteraffidavits were filed. Hearing was had on the motion on August 7, 1956, and the court entered the following order: "This cause coming on to be heard on defendants' petition to vacate and set aside judgment heretofore entered herein, and for leave to file the appearance of defendants and Edward L. Kerpec as their attorney, and the court being advised: "It is hereby ordered that leave be and the same is hereby given defendants to file their appearance herein; that Edward L. Kerpec file his appearance as attorney for defendants; the motion to quash service be and the same is hereby denied; that judgment heretofore entered on May 28, 1956 be and the same is vacated, set aside and held for naught; that defendants file pleadings to the complaint within ten days (10 days)." The plaintiff appeals only from the portion of the order which provides that the judgment be vacated and the defendants be given leave to appear and plead to the complaint. 
Her contention is that since the order of the court denied the "motion" to quash service it must be implied that the court considered the service to be valid and effective and disbelieved all of the affidavits submitted on behalf of the defendants, and that therefore the motion made by the defendants, under section 72 of the Civil Practice Act (Ill. Rev. Stat. 1955, chap. 110, par. 72), could not be sustained since the court could not refuse to quash the service and at the same time vacate the judgment and permit the defendants to appear and defend. [1] The argument of the plaintiff to have that portion of the order refusing to quash service stand, and to reverse the portion of the court's order vacating the *326 judgment and permitting the defendants to appear, is highly technical. In the motion made by the defendants they did not pray that the summons be quashed. They only prayed that the judgment be vacated and they be given opportunity to appear and defend. In their affidavits they based their right to relief upon the fact that there had been no proper service made upon them. The court could have properly ordered that they be granted leave to appear and defend without any mention of whether or not the service should be quashed. The statement in the order denying the motion to quash the service is surplusage, as the record does not indicate that any such motion was before the court. [2, 3] As we said in Lichter v. Scher, 11 Ill. App.2d 441, "in a case in which default has been entered and there has been no trial on the merits, the judgment itself is based on the technique of procedure and is subject to careful scrutiny." The only issue before us is whether or not the court was justified upon any basis appearing in the record in entering the order from which this appeal is taken. We must indulge in all reasonable presumptions based on the record that the business and proceedings in the trial court were in accord with the law and rules of the court. 
[4-6] In the case before us the affidavit made by the defendants and Betty Coddington were not controverted by counteraffidavits, nor did the plaintiff file a motion to strike. The statements contained in the affidavits must be taken as true. Lane v. Bohlig, 349 Ill. App. 487. While we cannot tell what the trial judge had in mind, it is possible that he concluded that the service was not valid and consequently, in accordance with the prayer in their motion, that the defendants might be allowed to appear and plead, and inadvertently signed an order containing a denial of a nonexistent motion to quash. In such a case the order of the court was a proper order. On the other hand, the *327 trial judge might have believed, mistakenly or not, that the party served could legally be considered a member of the family of the defendants and also believed that the defendants had no knowledge of such service until after the entry of the judgment against them and that Betty Coddington had not given them the summons nor informed them that it had been served upon her, in which case the order also would be proper. While it is the law that if substituted service is made in strict compliance with the statute the service is not rendered void because a member of the family who has been properly served has failed to inform the defendants of that fact (72 C.J.S. Process, sec. 43), nevertheless, under the rule laid down in Lichter v. Scher, 4 Ill. App.2d 37, and cases cited therein, a motion pointing out such facts might be properly considered an appeal to the equitable powers of the court. In the Lichter case the court discusses the rule of law applicable to motions of this character and says: "All prior decisions relating to proceedings of this character must now be reconsidered in the light of Ellman v. De Ruiter, 412 Ill. 285, 106 N.E.2d 350 (1952). 
In that case process had been served but the defendant, through mistake or misunderstanding as to the return day of the summons, failed to file his appearance on time. Default was taken and an ex parte judgment was entered at the same time that the plaintiff was negotiating with the defendant for settlement. The defendant was not informed of the judgment until more than thirty days after its entry. The court held that the proceeding was in the nature of a suit in equity; that while the Civil Practice Act has not effected a complete amalgamation of the practice and procedure in common law and suits in equity, there has been a fusion sufficient to enable a court of law, when the occasion demands it, to apply equitable principles in determining whether relief should be granted in such proceedings. The court decided that it was *328 `within the spirit of the Civil Practice Act, and within the scope of the function of the motion which has replaced the writ of error coram nobis, that defendant be given summary relief in this proceeding.' "The impact of the decision in Ellman v. De Ruiter, supra, was felt immediately on the civil procedure of this state. It has already been cited three times as controlling precedent on cases in point.... [Citing Tomaszewski v. George, 1 Ill. App.2d 22, Admiral Corp. v. Newell, 348 Ill. App. 180; and Schnable v. Tuma, 351 Ill. App. 486.] The net effect of those decisions was to carry out in differing factual situations the holding of the Ellman case. "In Lane v. Bohlig, 349 Ill. App. 487, 111 N.E.2d 361 (1953) the Third Division of this court held that to deny a motion similar to that made by defendant in the instant case, without holding a hearing or requiring the plaintiff to answer by counter-affidavits, was error in view of uncontroverted statements in affidavits in support of said motion." See also Moore v. Jones, 12 Ill. App.2d 488, and Paramount Paper Tube v. Capital Engineering, 11 Ill. App.2d 456. 
[7] The order of the court vacating the judgment and permitting the defendants to appear and defend was a proper exercise of the equitable powers of the court under the rule laid down in Ellman v. De Ruiter, 412 Ill. 285. The order of August 7, 1956 of the Superior Court of Cook county vacating its judgment of May 28, 1956 and permitting the defendants to appear and defend is affirmed. Order affirmed. ROBSON, P.J. and SCHWARTZ, J., concur.
{ "pile_set_name": "FreeLaw" }
Q: curl to compile a list of redirected pages

Suppose I have a bash script that goes through a file that contains a list of old URLs that have all been redirected. curl --location http://destination.com will process a page by following a redirect. However, I'm interested not in the content, but in where the redirect points, so that I can update my records. What is the command-line option for curl to output what that new location for the URL is?

A: You would want to leave out the --location/-L flag, and use -w, checking the redirect_url variable.

curl -w "%{redirect_url}" http://someurl.com

should do it. Used in a script:

REDIRECT=`curl -w "%{redirect_url}" http://someurl.com`
echo "http://someurl.com redirects to: ${REDIRECT}"

From the curl man page:

-w, --write-out <format>

Make curl display information on stdout after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "@filename" and to tell curl to read the format from stdin you write "@-". The variables present in the output format will be substituted by the value or text that curl thinks fit, as described below. All variables are specified as %{variable_name} and to output a normal % you just write them as %%. You can output a newline by using \n, a carriage return with \r and a tab space with \t. NOTE: The %-symbol is a special symbol in the win32-environment, where all occurrences of % must be doubled when using this option.

The variables available are:

...

redirect_url When an HTTP request was made without -L to follow redirects, this variable will show the actual URL a redirect would take you to. (Added in 7.18.2)

...
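For intuition, %{redirect_url} is essentially curl surfacing the Location header of the (un-followed) redirect response. The sketch below extracts that header from a canned raw HTTP response (the response text and destination URL here are made-up examples; no network access is involved):

```shell
# Canned 301 response (made-up URL) -- roughly what curl receives
# when a server redirects and -L is not given.
response='HTTP/1.1 301 Moved Permanently
Location: http://destination.com/new-location
Content-Length: 0'

# Pull out the Location header value, case-insensitively,
# trimming a trailing carriage return if present.
redirect=$(printf '%s\n' "$response" \
  | awk 'tolower($1) == "location:" { sub(/\r$/, "", $2); print $2 }')

echo "redirects to: ${redirect}"
```

Against a live server you would of course just use the one-liner from the answer, curl -s -o /dev/null -w "%{redirect_url}" http://someurl.com, which does this extraction for you.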
{ "pile_set_name": "StackExchange" }
Infection dynamics and genetic variability of Mycoplasma hyopneumoniae in self-replacement gilts. The aim of this study was to assess the longitudinal pattern of M. hyopneumoniae detection in self-replacement gilts at various farms and to characterize the genetic diversity among samples. A total of 298 gilts from three M. hyopneumoniae positive farms were selected at 150 days of age (doa). Gilts were tested for M. hyopneumoniae antibodies by ELISA, once in serum at 150 doa, and for M. hyopneumoniae detection in laryngeal swabs by real time PCR two or three times. Also, 425 piglets were tested for M. hyopneumoniae detection in laryngeal swabs. A total of 103 samples were characterized by Multiple Locus Variable-number tandem repeats Analysis. Multiple comparison tests were performed and adjusted using Bonferroni correction to compare prevalences of positive gilts by ELISA and real time PCR. Moderate to high prevalence of M. hyopneumoniae in gilts was detected at 150 doa, which decreased over time, and different detection patterns were observed among farms. Dam-to-piglet transmission of M. hyopneumoniae was not detected. The characterization of M. hyopneumoniae showed 17 different variants in all farms, with two identical variants detected in two of the farms. ELISA testing showed high prevalence of seropositive gilts at 150 doa in all farms. Results of this study showed that circulation of M. hyopneumoniae in self-replacement gilts varied among farms, even under similar production and management conditions. In addition, the molecular variability of M. hyopneumoniae detected within farms suggests that in cases of minimal replacement gilt introduction, bacterial diversity may be farm specific.
{ "pile_set_name": "PubMed Abstracts" }
Systemic lupus erythematosus: what should family physicians know in 2018? Systemic lupus erythematosus (SLE) is a complex multi-systemic autoimmune disease with considerable clinical and immunological heterogeneity. Family physicians should be familiar with the protean manifestations of SLE to aid early diagnosis and monitoring of disease progression. The role of family physicians in SLE includes education, counselling, psychological support, management of mild disease, and recognition of the need for referral to other specialists for more serious disease and complications. Surveillance of cardiovascular risk factors and osteoporosis and advice about vaccination and reproductive issues can be performed in the primary care setting under close collaboration with rheumatologists and other specialists. This review provides family physicians with the latest classification criteria for SLE, recommendations on SLE-related health issues, and pharmacological therapies for SLE.
{ "pile_set_name": "PubMed Abstracts" }
Introduction
============

*Drosophila obscura* and *D. subobscura* (Diptera: Drosophilidae) are closely related species of the *D. obscura* group,[@R1] with a wide distribution in the Palaearctic. Both are generalists and co-occur broadly in the colline and alpine zone.[@R2] They are frequently used species in evolutionary-biological studies (for review see refs. [@R3]--[@R7]). Accurate species identification of living specimens of both sexes is difficult, as the two species are morphologically highly similar,[@R8] with considerable intraspecific variation in the diagnostic characters.[@R9] The problem is aggravated by the need to keep to a minimum the anesthesia by CO~2~, to avoid reduced longevity and fecundity.[@R10]^,^[@R11] For introducing wild-caught individuals to the laboratory with the aim to retain genetic variation, a rapid and non-destructive method for species identification with the potential for high throughput would thus be desirable as an alternative to morphology-based methods. Insect cuticular layers contain complex mixtures of hydrocarbons (CHCs), many of which are synthesized by the insect itself, i.e., supposedly genetically determined.[@R12] In addition to their central role in the prevention of desiccation,[@R13] CHCs are important for chemical communication, for example in mate choice[@R14]^,^[@R15] and social behavior.[@R16] The idea that the bouquet of CHCs will thus be species specific led researchers to enquire into their usefulness in species identification[@R17]^,^[@R18] and various examples of success exist.[@R19] We decided to test the usefulness of near-infrared spectroscopy (NIRS) to discriminate between *D. subobscura* and *D. obscura*: Previous studies suggested differences in desiccation resistance between the two species,[@R8] and, possibly due to a lack of interspecific mate recognition, no hybridization between them has been reported as yet,[@R20] both of which may involve CHC species specificity. 
NIRS characterizes chemical patterns qualitatively and quantitatively based primarily on C-H, N-H, O-H stretching vibrations.[@R21] It is thus a useful tool for the characterization of biological material, and is, also owing to its non-destructive and non-invasive nature, becoming common practice in ecology[@R22] and entomology.[@R23]^,^[@R24] Of relevance here, it was successfully used in the identification of (non-drosophilid) dipterans.[@R25]^-^[@R27] The objectives of this study were to determine if NIRS (1) can be used to discriminate living *D. obscura* and *D. subobscura* specimens by using multivariate chemometrics, and (2) whether calibration models elaborated for wild-caught specimens and for specimens from different laboratory-reared generations can be cross-applied. Cross-applicability would reduce significantly the effort needed for establishing the identification of specimens with such differing backgrounds. However, it needs to be kept in mind that genetic and phenotypic changes can arise from evolution in a novel environment.[@R28] The CHC bouquet, in particular, can evolve due to changes in the ambient thermal regime and in diet composition[@R29] but can also change due to acquisition of hydrocarbons from food.[@R30]

Results
=======

Statistical parameters of the partial least squares (PLS) calibration models (number of PLS factors used, coefficient of determination (r^2^) and standard error of cross validation (SECV)) and the classification results for the validation sets are listed in [Table 1](#T1){ref-type="table"}. The calibration models had r^2^ values between 0.43 and 0.63, and SECV values between 0.33 and 0.40. The correct classification for the validation sets ranged between 85% and 92%; the best prediction results were achieved for the eighth lab-reared generation (F8), with 90% for F8 males (F8m) and 92% for F8 females (F8f). ###### **Table 1.** Classification results of *Drosophila subobscura* and *D. 
obscura* based on PLS regression models developed from near-infrared spectra (500--2200 nm).

|   | **Males: Wild** | **Males: F1** | **Males: F8** | **Females: F1** | **Females: F8** |
|---|---|---|---|---|---|
| **PLS factors** | 7 | 11 | 9 | 12 | 13 |
| **r^2^** | 0.57 | 0.63 | 0.55 | 0.49 | 0.43 |
| **SECV** | 0.33 | 0.34 | 0.34 | 0.38 | 0.40 |
| **n in the validation sets: *D. subobscura* / *D. obscura*** | 252 / 15 | 19 / 24 | 90 / 64 | 27 / 41 | 60 / 67 |
| **Cross-validation results: % correctly classified** | 90% | 94% | 84% | 87% | 83% |
| **n correctly classified (%) / n total** | 226 (**85%**) / 267 | 38 (**88%**) / 43 | 138 (**90%**) / 154 | 59 (**87%**) / 68 | 117 (**92%**) / 127 |
| **After exclusion of class values 1.4--1.6: n (%) correctly classified / n total** | 213 (**90%**) / 237 | 36 (**90%**) / 40 | 127 (**91%**) / 139 | 56 (**90%**) / 62 | 111 (**96%**) / 116 |
| **After exclusion of class values 1.3--1.7: n (%) correctly classified / n total** | 182 (**95%**) / 191 | 26 (**90%**) / 29 | 106 (**94%**) / 113 | 47 (**92%**) / 51 | 96 (**97%**) / 99 |
| **After exclusion of class values 1.2--1.8: n (%) correctly classified / n total** | 145 (**97%**) / 148 | 18 (**90%**) / 20 | 86 (**92%**) / 93 | 44 (**94%**) / 47 | 80 (**99%**) / 82 |

n = number of individuals

We then explored how the exclusion of prediction values around 1.5 influenced classification rate and loss of specimens, by symmetrically excluding values below and above 1.5, decreasing and increasing in steps of 0.02 to the extremes of 1.0 and 2.0, respectively. Exclusion of values between 1.4 and 1.6, for example, increased the correct classification of the F8 females from 92% to 96% while excluding 10% of specimens ([Fig. 1](#F1){ref-type="fig"}). 
For the other models and validation sets, the corresponding values were similar, with correct classification increasing to 90--91% at the cost of excluding 9--11% of individuals ([Table 1](#T1){ref-type="table"}); for our data set sizes, we found this to represent an acceptable compromise across models between accuracy and number of specimens excluded. When values 1.30--1.70 and 1.20--1.80 were excluded, the success rate for F8f increased to 97% and 99%, but the portion of specimens identified dropped to 78% and 65%, respectively ([Fig. 1](#F1){ref-type="fig"}).

![**Figure 1.** The portion of correctly classified specimens depends on the exclusion of spectra with ambiguous prediction values, exemplified by the F8 females. The thin line shows the increasing loss of individuals with increasing range of excluded prediction values, the bold line shows the corresponding increase in correct classification. Exclusion ranges in boxes are discussed in the text.](fly-6-284-g1){#F1}

Wavelengths important for the identification of *D. subobscura* and *D. obscura* were identified from the PLS regression coefficients, with wavelengths showing very high or very low coefficients being more important. There were peaks occurring in all of the five calibration models and peaks that were important only in single models. [Figure 2](#F2){ref-type="fig"} shows the regression-coefficient plot for F8f.

![**Figure 2.** PLS regression coefficient used for identifying important wavelengths for classification of *Drosophila subobscura* and *D. obscura* females from the F8.](fly-6-284-g2){#F2}

When calibration models created for one group were used to predict validation sets from the other groups, the classification success ranged from 56% to 83% ([Table 2](#T2){ref-type="table"}; prediction values between 1.4 and 1.6 excluded). 
###### **Table 2.** Correct classification rate (%) for validation sets performed on the different calibration models to test their cross-applicability (classification values 1.4--1.6 excluded).

| **Validation set** | **Calibration model: Wild** | **F1** | **F8** |
|---|---|---|---|
| **Wm** | **90** | 83 | 75 |
| **F1m** | 65 | **90** | 77 |
| **F8m** | 67 | 77 | **91** |
| **F1f** | n.a. | **90** | 56 |
| **F8f** | n.a. | 57 | **96** |

n.a. = not applicable

Discussion
==========

Here we show that NIRS can be used to distinguish between *Drosophila subobscura* and *D. obscura* with an accuracy of 85% to 92% using PLS analysis, when using the full range of prediction values. This indicates that the composition of CHCs may differ between the two species. We cannot directly relate NIR-spectral differences to CHCs, and also the visible spectral range was relevant to successful PLS models (see further down), but we assume that CHC composition contributed significantly to species differences (compare refs. [@R14]--[@R19]). The prediction results for the wild-caught flies were comparable to those obtained for laboratory-reared specimens, in line with the notion that hydrocarbon profiles are more genetically than environmentally determined[@R31] -- although the two species were reared under the same conditions, differences in the cuticular profiles persisted and were detectable by NIRS. These findings contrast the NIRS study by Mayagaya et al.[@R25] who predicted two Anopheles species reared in the laboratory with an accuracy of almost 100%, and field-collected specimens with 80% accuracy. Including both wild-caught flies and laboratory-reared flies (from all generations) in the same model did not improve our prediction results, the best models resulting in 82% and 79% prediction success for females and males, respectively (S. Fischnaller, unpubl.). 
However, from the practical point of view of setting up breeding lines based on identification via NIRS, our error rates are not fatal, given that *Drosophila obscura* and *D. subobscura* do not hybridize.[@R20] Hence, no interspecific gene flow is expected for unintentionally heterospecific cultures, and the identification procedure can be repeated in consecutive generations. The lower rate of correct classification in our study as compared with the work by Rodriguez-Fernandez et al.,[@R27] who used nine Diptera species, could be caused by a closer phylogenetic relatedness of our species as well as by our including multiple populations in the sample -- genetic diversity was found to be very high across other wild populations of *D. subobscura*.[@R28] Furthermore, we included individuals of all ages, and thus likely both unmated and mated individuals, in our calibration and validation sets. NIRS is sensitive to the age of individuals, and thus used for age-grading of various insects,[@R25]^,^[@R32]^,^[@R33] and Everaerts et al.[@R34] showed that in Drosophilidae, in both females and males, CHC changes occur during mating. The variation introduced by either or both of these effects may possibly have impeded greater success of our calibration procedures. One way to improve classification is the exclusion of specimens with prediction values around 1.5 ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). This procedure was suggested by Sikulu et al.[@R26] in general, but to our knowledge the trade-off between increase of classification success and loss of specimens has not yet been explored in a quantitative manner. We suggest that such exploration be adopted as a standard procedure in NIRS species-identification studies. Depending on the demands for the specific project, researchers could thus prioritise either classification success or number of specimens identified in a controlled manner. 
Another way of improving accuracy with our species could be to scan just wings. Using the pulled-out right wings of thawed males in NIRS analysis enabled us to distinguish *D. subobscura* from *D. obscura* with 100% accuracy (n = 50 males per species; data not shown). This is in line with the findings from Shevtsova et al.[@R35] who found high interspecific variation in the wing interference patterns of Drosophilidae. Scanning just wings of live specimens is very difficult to put into practice, however, due to the need for standardised positioning of wings on the one hand and minimum CO~2~ exposure of specimens on the other (S. Fischnaller, unpubl.). Exploring this possibility in depth remains subject to future exploration. Examination and comparison of the regression coefficient plots indicated that there are peaks important to species discrimination that are common to all five calibration models. The region around 510, 540 and 610 nm indicates that there are differences between the two species in the visible region, possibly caused by variation in cuticle thickness, bristles and/or pigmentation.[@R35] The region of 1050--1070 nm indicates vibration of water molecules at the third overtone, as well as occurrence of molecules containing N-H functional groups (ref. [@R36], also used for the interpretation of the subsequently listed wavelengths). Peaks at 1370--1390 nm (CH~2~ second overtone, and water), 1720--1730 nm (CH~3~ first overtone), 1810--1840 nm and 1870 nm (C-H first overtone, water), and 2140--2180 nm (N-H and O-H combination bands) also contributed to all calibration models. Our study suggests that wild-caught specimens of our species should not be used to identify laboratory-reared specimens, and vice versa, due to excessive failure rates ([Table 2](#T2){ref-type="table"}). This contrasts the findings of Mayagaya et al.[@R25] of 79% correctly classified wild-caught Anopheles when using models based on laboratory-reared individuals. 
Our low success rate is supported by absorption peaks in the regression coefficients exclusive to just one of the calibration models (e.g., 1025 nm, 1460 nm in Wm; 1500 nm, 2050 nm in F1m; 1770 nm in F1f; 2000 nm in F8f). In other words, chemical differences led to the observed generation specificity of the models. Toolson and Kuper-Simbron[@R29] reported for *Drosophila pseudoobscura* that maintenance in the laboratory leads to physiological and biochemical changes. They observed a shift in the cuticular composition even in the first generation of large populations reared in the laboratory, and explained it by changes of selective pressure and fitness advantages under novel environmental factors. In small populations especially, genetic drift can additionally increase the genetic differentiation across populations (*D. subobscura*: see refs. [@R28], [@R37]). Also, hydrocarbon profiles can change in a non-inherited manner due to acquisition of food-derived hydrocarbons (ant example: ref. [@R30]). Thus, changes in the metabolic profiles -- whether due to genetic or environmental changes -- may have altered the recorded NIRS data across generations, impeding the use of calibration models generated for one generation in the others. Future research should aim to pinpoint potential non-inherited contributions, as well as assess whether this problem ceases in later generations, which would indicate that it is due to rapid laboratory adaptation,[@R5]^,^[@R38] or whether larger population sizes diminish it, which would indicate that it is due to genetic drift (but note that our population sizes were in line with general practice, e.g., Fry[@R39]).

In conclusion, there are three main findings to our study: First, near-infrared spectroscopy proved a useful tool for the identification of living Drosophila flies.
Second, we could not cross-apply models and validation sets between field-caught and lab-reared individuals, or across generations, indicating changes due to laboratory adaptation, genetic drift and/or diet changes. Third, classification rates could be considerably improved by excluding prediction values around 1.5, suggesting that researchers should consider excluding a particular range of prediction values depending on their research question. Our study thus underscores the enormous potential of the NIRS technique for species identification (e.g., refs. [@R24], [@R25], [@R26], [@R40], [@R41]), and indicates that it could become an important tool also for the delimitation of species in integrative taxonomy,[@R42] as well as in other biological fields.[@R43]

Materials and Methods
=====================

Insects
-------

Specimens were collected from six different locations in North Tyrol (Austria) during August and September 2010. To represent a wide range of habitats, the collection sites were chosen from various altitudes between 570 and 2,000 m above sea level ([Table 3](#T3){ref-type="table"}). The minimum and maximum distances between populations were 2 and 60 km, respectively. Collecting was done by net sweeping over baits of fermented banana[@R44] in the evening hours from 5 to 7 p.m. The field-caught flies were transported alive to the laboratory and anaesthetised with CO~2~ for morphology-based species identification. CO~2~ exposure for species identification, as well as for spectra collection (see below), was kept to a minimum and never exceeded four minutes per specimen. Flies that were identified as *D. subobscura* or *D. obscura* according to Bächli and Burla[@R9] were used to set up breeding lines for each location sampled. All lines were kept at a minimum census size of 60 individuals on an artificial diet (corn-meal, sugar, agar, yeast, Tegosept) and at a photoperiod of 12/12 h (light/dark) at 19°C.
###### **Table 3.** Sampling data for field-collected *Drosophila obscura* and *D. subobscura*.

| Location | Geographic coordinates | Altitude (m a.s.l.) | *D. obscura* females | *D. obscura* males | *D. subobscura* females | *D. subobscura* males |
| --- | --- | --- | --- | --- | --- | --- |
| Kaserstattalm | 47°07′34.86"N 11°17′30.83"E | 2,029 | 4 | 27 | 3 | 11 |
| Hahntennjoch | 47°17′24.07"N 10°39′19.97"E | 1,973 | 3 | 0 | 6 | 10 |
| Buzihütte | 47°16′20.99"N 11°21′23.27"E | 711 | 0 | 0 | 32 | 45 |
| Mentlberg | 47°14′55.34"N 11°21′56.31"E | 616 | 0 | 11 | 59 | 133 |
| Arzl | 47°17′11.22"N 11°25′09.80"E | 707 | 0 | 3 | 22 | 26 |
| Innsbruck city | 47°15′53.43"N 11°20′34.59"E | 579 | 2 | 9 | 24 | 62 |

a.s.l. = above sea level

Data collection
---------------

Spectra were collected from anaesthetised flies using a Labspec® 5000 Portable Vis/NIR Spectrometer (350--2,500 nm; ASD Inc.) by placing flies individually on their backs on a 9 cm diameter Spectralon plate. The 3 mm diameter bifurcated fiber-optic probe was positioned about 2 mm above the specimen, focusing on the abdomen. The spectrometer automatically calculated and saved the average spectrum of 30 collected spectra for each individual. Background reference (the baseline) was measured using a separate 3 cm diameter Spectralon plate to avoid contamination. All field-caught individuals, as well as 251 randomly chosen individuals of the F1 and 421 of the F8 of the breeding lines, were sexed and scanned. We thus included a wide range of individual ages in our sample.

Data analysis
-------------

All recorded spectra were converted into Galactic spectrum file format using ASD ViewSpecPro.
Spectra used for the calibration sets were pre-processed by mean-centring and analyzed using PLS regression and leave-one-out cross validation[@R45]^,^[@R46] implemented in the GRAMS PLS/IQ software. Spectra were generally very noisy below 500 nm and above 2200 nm, and these regions were excluded from further analysis. Calibration models were elaborated separately for males (m) and females (f), because females can easily be distinguished from males and because Drosophila sexes differ in their CHC profiles.[@R34] We built models for the following five groups: (1) the wild, field-collected males, referred to as "Wm" (due to the low number of field-caught *D. obscura* females, no model could be created for this group); (2) the first lab-reared generation, referred to as "F1m" and (3) "F1f"; and (4) the eighth lab-reared generation, referred to as "F8m" and (5) "F8f." The training sets for each calibration model contained 70 spectra (35 of each species). A two-way comparison in PLS analysis was made by assigning the integer values 1 and 2 to *D. subobscura* and *D. obscura*, respectively. Independent validation sets, treated as "unknown" specimens, were then classified on the basis of the calibration model in each group. Spectra predicted to have a class value of ≤ 1.5 were considered to belong to *D. subobscura*, those with a predicted value of ≥ 1.5 to *D. obscura*. The numbers of PLS regression factors to be used in the prediction models were determined by examining the values of the predicted residual sum of squares[@R46] and the classification results of the independent validation sets.
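Readers who want to experiment with this kind of two-way classification can mimic the procedure on synthetic spectra. The sketch below is a simplified stand-in: it uses principal-component regression (via numpy's SVD) instead of true PLS, and the dimensions, band positions, and number of factors are invented for illustration; only the thresholding at 1.5 follows the rule described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 70 training specimens (35 per species) and 40 unknowns,
# 200 wavelength bins; the species differ by a shift in a few (invented) bands.
n_train, n_test, n_bands = 70, 40, 200
labels_train = np.repeat([1.0, 2.0], n_train // 2)  # 1 = D. subobscura, 2 = D. obscura
labels_test = np.repeat([1.0, 2.0], n_test // 2)

signal = np.zeros(n_bands)
signal[50:60] = 1.0  # bands that differ between the species (illustrative)

def make_spectra(labels):
    noise = rng.normal(0.0, 1.0, size=(labels.size, n_bands))
    return noise + np.outer(labels - 1.0, signal)

X_train, X_test = make_spectra(labels_train), make_spectra(labels_test)

# Mean-centre (as in the study), then regress class values on the first k
# principal components -- principal-component regression as a PLS stand-in.
mu = X_train.mean(axis=0)
Xc = X_train - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10  # number of regression factors (illustrative)
scores = Xc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(scores, labels_train - labels_train.mean(), rcond=None)

# Classify "unknown" specimens: predicted value <= 1.5 -> D. subobscura (1)
pred = (X_test - mu) @ Vt[:k].T @ coef + labels_train.mean()
assigned = np.where(pred <= 1.5, 1.0, 2.0)
accuracy = float(np.mean(assigned == labels_test))
print("validation accuracy:", accuracy)
```

On synthetic data with a clear spectral difference the validation accuracy is high; shrinking the band shift or adding age- and mating-related variation (as discussed above) degrades it.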
Accuracy of the calibration models was examined by checking the r^2^, which indicates the closeness of fit between NIRS and reference data, the SECV of the leave-one-out procedure, and the prediction results for the validation sets---the most rigorous indicator of model quality.[@R47] Outlying spectra, which were possibly due to technical problems such as movement of insufficiently anaesthetised specimens, were discarded from the sample. Such outliers were detected by visual examination of the spectra using spekwin32 (F. Menges, "Spekwin32---free software for optical spectroscopy," vers. 1.71.5, 2010, <http://www.effemm2.de/spekwin/>) and by examination of the leverage and studentised residual plots generated in GRAMS (compare ref. [@R48]).

We thank Heike Perlinger and Clemens Folterbauer for assistance in the laboratory, Alexander Rief for sampling assistance, Regina Medgyesy for help in compiling literature, Gerhard Bächli for help with morphological species identification, and two anonymous referees for constructive criticism. This research was funded by the University of Innsbruck; FMS was supported by FWF P 23949-B17.

Previously published online: [www.landesbioscience.com/journals/fly/article/21535](http://www.landesbioscience.com/journals/fly/article/21535/)

CHCs
:   cuticular hydrocarbons

NIRS
:   near-infrared spectroscopy

PLS
:   partial least squares

Wm
:   field-collected males

F1
:   first lab-reared generation

F8
:   eighth lab-reared generation

r^2^
:   coefficient of determination

SECV
:   standard error of cross validation

No potential conflicts of interest were disclosed.

[^1]: These authors contributed equally to this work.
[Endoscopic treatment with Deflux for primary vesicoureteral reflux]. The effectiveness of endoscopic treatment as a single procedure for primary reflux has been under discussion. For the last 3 years our unit has used Deflux (dextranomer/hyaluronic acid copolymer) for this pathology. The aim of this study is to analyze the results of our experience. Since 2002, a prospective protocol for VUR has been applied. We reviewed the last 25 cases treated with Deflux injection who had ultrasound and cystography follow-up. 86% (n = 21) were females, with a mean age of 6.1 years (range 2-14). The success rate with a single injection was 73.6% (n = 28). The amount of Deflux injected was irrelevant to the result. The results in low-grade reflux (I-II) reached 100% (n = 15). The worst results were in the double-system cases, with just one successful case out of 6 injected. The procedure was performed on an outpatient basis. There were no peri-procedural complications. Endoscopic treatment of VUR with Deflux is a good alternative to medical treatment, especially in single ureters with low-grade reflux. Therefore the authors recommend this technique when counseling parents.
JetBlue Airways on Wednesday officially established Jacksonville International Airport as the site of one of the airline's cargo stations. The airline noted that it chose Jacksonville as the location for its 31st station because it will allow cargo shipments from Jacksonville to be routed easily through its hubs at John F. Kennedy International Airport in New York City and Boston Logan International Airport. The cargo site will also be instrumental for shipments to and from Puerto Rico because the airline will begin flight service to and from Luis Muñoz Marín International Airport on May 19.
iStock/Thinkstock(NEW YORK) -- The practice of fat shaming at the doctor's office can be harmful to both the mental and physical health of a patient, according to a comprehensive new review of research published Thursday. "Disrespectful treatment and medical fat shaming" is "stressful and can cause patients to delay health care seeking or avoid interacting with providers," stated the abstract to the review published Thursday by Joan Chrisler and Angela Barney, researchers at Connecticut College's Department of Psychology. The review examined 46 past studies, which looked at doctors' biases toward obesity and also compared patients' reports of fat shaming from their doctors with their health outcomes. Researchers found that fat shaming from a doctor can take a significant negative toll on a patient's health, as it can lead to decreased trust in their health care provider. In extreme cases, it can also cause a doctor to assume that a patient's weight is responsible for a host of health conditions and lead to a misdiagnosis, researchers said. The findings were presented at the 125th Annual Convention of the American Psychological Association, where Chrisler called fat shaming by a doctor a form of malpractice. “Recommending different treatments for patients with the same condition based on their weight is unethical and a form of malpractice,” Chrisler said. “Research has shown that doctors repeatedly advise weight loss for fat patients while recommending CAT scans, blood work or physical therapy for other, average-weight patients.” In the review, researchers called for better training for health care providers so that patients of all sizes are treated with respect.
Electroconductive natural polymer-based hydrogels. Hydrogels prepared from natural polymers have received immense consideration over the past decade due to their safety, biocompatibility, hydrophilic properties, and biodegradable nature. More recently, when treated with electroactive materials, these hydrogels were endowed with high electrical conductivity, electrochemical redox properties, and electromechanical properties, consequently forming smart hydrogels. The biological properties of these smart hydrogels, classified as electroconductive hydrogels, can be combined with electronics. Thus, they are considered good candidates for potential uses that include bioconductors, biosensors, electro-stimulated drug delivery systems, as well as neuron-, muscle-, and skin-tissue engineering. However, comprehensive information on the current state of these electroconductive hydrogels is lacking, which complicates our understanding of this new type of biomaterial as well as its potential applications. Hence, this review provides a summary of the current development of electroconductive natural polymer-based hydrogels (ENPHs). We introduce various types of ENPHs, with a brief description of their advantages and shortcomings. In addition, emerging synthesis technologies developed during the past decade are discussed. Finally, two attractive potential applications of ENPHs, cell culture and biomedical devices, are reviewed, along with their current challenges.
Artificial Intelligence

Artificial Intelligence is taking over the world. Arena can help you build the ultimate AI. Siri, Cortana, driverless cars and virtual judges – there's no escaping. Artificial Intelligence is an integral part of our life now and into the future. Why wait for robots to take over the world when you can work with Arena to build your own AI? Using cutting-edge digital technology, students work with Arena's artists to create a unique Artificial Intelligence, or AI, that reflects the personality of the school and the students who contributed to its creation. A core group of students steers the project, while additional classes may be asked to participate in activities to assist in creating the 'face' and developing the 'mind' of the AI. Arena artists work with students to identify values, issues, characteristics and unique stories or memories attached to their school and community. Students work together using cutting-edge digital technology, photography, scientific analysis and storytelling to build their AI. They learn to 'puppet' the AI with live animation technology, giving both movement and voice to the digital character. The workshop culminates with a Q & A style performance in which students attempt to convince the audience their AI is real. The performance is both curated and performed by the students for the whole school community.

Workshop Details

Artificial Intelligence is an Arena Theatre Company Workshop.

Age range: 12 – 17 years or Year 7 - 12

Participants: Up to 20 + involvement from two further classes (optional)

Artificial Intelligence introduces students to new technologies and new modes of performance, promotes creative expression and teamwork, and provides a rare opportunity for students to work alongside professional artists.
In creating their AI, students reflect on their school and community and develop an understanding of which values and characteristics are most important to them.

Curriculum Links

Artificial Intelligence is suitable for schools, local arts and community centres, and holiday and festival programs, and is available for booking throughout 2019. For more information please contact info@arenatheatre.com.au.
1. Field

The disclosure relates to a method, system, and article of manufacture for the segmentation of logical volumes.

2. Background

In certain virtual tape storage systems, hard disk drive storage may be used to emulate tape drives and tape cartridges. For instance, host systems may perform input/output (I/O) operations with respect to a tape library by performing I/O operations with respect to a set of hard disk drives that emulate the tape library. In certain virtual tape storage systems at least one virtual tape server (VTS) is coupled to a tape library comprising numerous tape drives and tape cartridges. The VTS is also coupled to a direct access storage device (DASD), comprised of numerous interconnected hard disk drives. The DASD functions as a cache to volumes in the tape library. In VTS operations, the VTS processes the host's requests to access a volume in the tape library and returns data for such requests, if possible, from the cache. If the volume is not in the cache, then the VTS recalls the volume from the tape library to the cache, i.e., the VTS transfers data from the tape library to the cache. The VTS can respond to host requests for volumes that are present in the cache substantially faster than requests for volumes that have to be recalled from the tape library to the cache. However, since the capacity of the cache is relatively small when compared to the capacity of the tape library, not all volumes can be kept in the cache. Hence, the VTS may migrate volumes from the cache to the tape library, i.e., the VTS may transfer data from the cache to the tape cartridges in the tape library.
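The recall-and-migrate behaviour described above can be sketched as a small cache in front of a slower tape store. The Python below is a toy model: the class and method names are invented, and a simple least-recently-used eviction stands in for the more elaborate policies a real VTS would use:

```python
from collections import OrderedDict

class VirtualTapeServer:
    """Toy model: a small DASD cache in front of a tape library."""

    def __init__(self, cache_capacity, tape_library):
        self.cache = OrderedDict()   # volume name -> data, in LRU order
        self.capacity = cache_capacity
        self.tape = tape_library     # dict: volume name -> data (slow store)

    def read(self, volume):
        if volume in self.cache:               # cache hit: fast path
            self.cache.move_to_end(volume)
            return self.cache[volume]
        data = self.tape[volume]               # recall from the tape library
        self._admit(volume, data)
        return data

    def _admit(self, volume, data):
        if len(self.cache) >= self.capacity:
            old, old_data = self.cache.popitem(last=False)
            self.tape[old] = old_data          # migrate the LRU volume to tape
        self.cache[volume] = data

vts = VirtualTapeServer(2, {"v1": b"a", "v2": b"b", "v3": b"c"})
vts.read("v1"); vts.read("v2"); vts.read("v3")   # v1 is migrated out
print(sorted(vts.cache))                          # ['v2', 'v3']
```

Reading a cached volume is a dictionary lookup; reading an uncached one triggers a recall and, at capacity, a migration, mirroring the fast/slow asymmetry the background describes.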
Story highlights

David Cameron pushed for The Guardian to hand over Snowden material, report says

Glenn Greenwald says lawyers for his partner have filed a lawsuit over his detention

They are seeking a declaration that what the UK authorities did is illegal, Greenwald says

The UK government says it has a duty to protect national security

Lawyers acting for David Miranda, the partner of journalist Glenn Greenwald, said they will bring his case to the High Court in London on Thursday after he was detained at Heathrow Airport. Greenwald, who works for The Guardian newspaper, has been at the forefront of high-profile reports exposing secrets in U.S. intelligence programs, based on leaks from former U.S. National Security Agency contractor Edward Snowden. Miranda, a Brazilian citizen, spent nearly nine hours in detention Sunday being questioned under a provision of Britain's terrorism laws. He was stopped as he passed through London on his way from Berlin to his home in Brazil. The lawyers, hired by The Guardian to represent Miranda, are trying to recover his property and prevent the government from inspecting the items or sharing what data they may have already gleaned from them. "What they're essentially seeking right now is a declaration from the British court that what the British authorities did is illegal, because the only thing they're allowed to detain and question people over is investigations relating to terrorism, and they had nothing to do with terrorism, they went well beyond the scope of the law," Greenwald told CNN's AC360 on Tuesday. "And, secondly, to order them to return all the items they stole from David and to order that they are barred from using them in any way or sharing them with anybody else."
Meanwhile, new claims have emerged that the pressure placed on The Guardian over its reporting on information leaked by Snowden came from the highest levels of government. The British newspaper The Independent reported Wednesday that Prime Minister David Cameron ordered the country's top civil servant, Sir Jeremy Heywood, "to contact the Guardian to spell out the serious consequences that could follow if it failed to hand over classified material received from Edward Snowden." Asked about the report by CNN, Cameron's office did not deny it. "We won't go in to specific cases, but if highly sensitive information was being held insecurely, the government would have a responsibility to secure it," a Downing Street press officer said. She declined to be named in line with policy. The Guardian's editor, Alan Rusbridger, said in an editorial published Monday that the paper had physically destroyed computer hard drives under the eyes of representatives of Britain's Government Communications Headquarters -- the UK equivalent of the NSA. The move followed several meetings with "a very senior government official claiming to represent the views of the prime minister" and "shadowy Whitehall figures," Rusbridger said. They demanded The Guardian hand over the Snowden material or destroy it, he said. Deputy Prime Minister Nick Clegg, the head of Cameron's Liberal Democrat coalition partners, considered the request "reasonable," his office said. "The Deputy Prime Minister felt this was a preferable approach to taking legal action," according to a statement issued Wednesday evening. "He was keen to protect the Guardian's freedom to publish, whilst taking the necessary steps to safeguard security." Greenwald broke the story of the existence of a U.S. National Security Agency program that is thought to have collected large amounts of phone and Internet data.
The Guardian also claimed, based on documents provided by Snowden, that GCHQ made use of the NSA program, known as PRISM, to illegally spy on UK citizens. A UK parliamentary committee subsequently found "no basis" for this claim. The UK government says GCHQ acts within a strong legal framework. 'Journalistic material' Miranda was stopped as he returned to the couple's Rio de Janeiro home after staying in Berlin with filmmaker Laura Poitras, who has been working with Greenwald on NSA-related stories. Miranda will seek a judicial review on the grounds that the legislation under which he was detained was misused, his solicitor Morgan said Tuesday. Morgan wrote to Home Secretary Theresa May and the Metropolitan Police chief asking for assurances that "there will be no inspection, copying, disclosure, transfer, distribution or interference, in any way, with our client's data pending determination of our client's claim." The law firm has also demanded the same from any third party, either domestic or foreign, that may have been given access to the material. The letter, seen by CNN, claims that Schedule 7 of Terrorism Act 2000 was used to detain Miranda "in order to obtain access to journalistic material" and that this "is of exceptional and grave concern." Miranda has said he does not know what data he was carrying back with him. 'Huge black eye' for British government Britain's Home Office on Tuesday defended Miranda's questioning, saying the government and police "have a duty to protect the public and our national security." "If the police believe that an individual is in possession of highly sensitive stolen information that would help terrorism, then they should act and the law provides them with a framework to do that," it said. "Those who oppose this sort of action need to think about what they are condoning." 
In a statement that didn't name Miranda but referred to his detention, the Metropolitan Police called what happened "legally and procedurally sound" and said it came after "a detailed decision-making process." The statement describes the law under which Miranda was detained as "a key part of our national security capability which is used regularly and carefully by the Metropolitan Police Service to help keep the public safe." But that's not how Miranda and Greenwald view the law, or at least how it was applied in this case. Sitting alongside his partner, Greenwald said the detention gave the British government "a huge black eye in the world, (made) them look thuggish and authoritarian (for) interfering in the journalism process (and created) international incidents with the government of Brazil, which is indignant about this." Greenwald added, "To start detaining people who they think they are reporting on what they're doing under terrorism laws, that is as dangerous and oppressive as it gets." Miranda, who didn't have an interpreter on hand during his detention despite English being a second language for him, said: "They didn't ask me anything about terrorism, not one question." He added, "They were just telling me: 'If you don't answer this, you are going to jail.'" Greenwald said the entire episode was designed to intimidate him and other investigative journalists from using classified information and digging into stories critical of the British and allied governments. But, he said, it will have the reverse effect on him, making him more determined to carry on. The seizure of material from Miranda will not stop the newspaper reporting on the story, he added. "Of course, we have multiple copies of every single thing that we're working on," Greenwald said. "Nobody would ever travel with only one copy of anything."
Management of large medial sphenoid wing meningiomas: a series of 178 cases. The aim of this study was to evaluate our treatment and outcomes in patients with large medial sphenoid wing meningiomas (SWMs). Data from 178 patients with large medial SWMs were collected and analyzed retrospectively. Most patients underwent microsurgical resection under electrophysiological monitoring and with a Doppler probe. Radiation therapy was administered in 64 patients with residual tumor or malignant pathology. Total resection of the tumor was achieved in 118 of 178 cases (66.3%) and subtotal resection in 60 of 178 (33.7%) at the time of initial surgery, without serious surgical complications except for 2 patients with ptosis. Postoperative vision improved in 84 patients (87.5%), remained unchanged in 8 (8.3%) and deteriorated in 4 (4.2%). Progression-free survival (PFS) and Karnofsky performance score (KPS) did not differ significantly between patients with gross total resection (GTR) and patients with subtotal resection (STR) followed by radiation therapy (RT). Surgery remains the principal treatment option for SWMs. Good craniotomy technique, proper hemostasis and an optimal surgical strategy are critical to improving the resection rate and prognosis. Likewise, STR with adjuvant RT can be expected to provide satisfactory results when total removal is impossible.
using System;
using System.Linq;
using FizzWare.NBuilder;
using FluentAssertions;
using NUnit.Framework;
using NzbDrone.Core.MediaFiles;
using NzbDrone.Core.Qualities;
using NzbDrone.Core.SeriesStats;
using NzbDrone.Core.Test.Framework;
using NzbDrone.Core.Tv;

namespace NzbDrone.Core.Test.SeriesStatsTests
{
    [TestFixture]
    public class SeriesStatisticsFixture : DbTest<SeriesStatisticsRepository, Series>
    {
        private Series _series;
        private Episode _episode;
        private EpisodeFile _episodeFile;

        [SetUp]
        public void Setup()
        {
            _series = Builder<Series>.CreateNew()
                                     .With(s => s.Runtime = 30)
                                     .BuildNew();

            _series.Id = Db.Insert(_series).Id;

            _episode = Builder<Episode>.CreateNew()
                                       .With(e => e.EpisodeFileId = 0)
                                       .With(e => e.Monitored = false)
                                       .With(e => e.SeriesId = _series.Id)
                                       .With(e => e.AirDateUtc = DateTime.Today.AddDays(5))
                                       .BuildNew();

            _episodeFile = Builder<EpisodeFile>.CreateNew()
                                               .With(e => e.SeriesId = _series.Id)
                                               .With(e => e.Quality = new QualityModel(Quality.HDTV720p))
                                               .BuildNew();
        }

        private void GivenEpisodeWithFile()
        {
            _episode.EpisodeFileId = 1;
        }

        private void GivenOldEpisode()
        {
            _episode.AirDateUtc = DateTime.Now.AddSeconds(-10);
        }

        private void GivenMonitoredEpisode()
        {
            _episode.Monitored = true;
        }

        private void GivenEpisode()
        {
            Db.Insert(_episode);
        }

        private void GivenEpisodeFile()
        {
            Db.Insert(_episodeFile);
        }

        [Test]
        public void should_get_stats_for_series()
        {
            GivenMonitoredEpisode();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().NextAiring.Should().Be(_episode.AirDateUtc);
            stats.First().PreviousAiring.Should().NotHaveValue();
        }

        [Test]
        public void should_not_have_next_airing_for_episode_with_file()
        {
            GivenEpisodeWithFile();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().NextAiring.Should().NotHaveValue();
        }

        [Test]
        public void should_have_previous_airing_for_old_episode_with_file()
        {
            GivenEpisodeWithFile();
            GivenOldEpisode();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().NextAiring.Should().NotHaveValue();
            stats.First().PreviousAiring.Should().Be(_episode.AirDateUtc);
        }

        [Test]
        public void should_have_previous_airing_for_old_episode_without_file_monitored()
        {
            GivenMonitoredEpisode();
            GivenOldEpisode();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().NextAiring.Should().NotHaveValue();
            stats.First().PreviousAiring.Should().Be(_episode.AirDateUtc);
        }

        [Test]
        public void should_not_have_previous_airing_for_old_episode_without_file_unmonitored()
        {
            GivenOldEpisode();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().NextAiring.Should().NotHaveValue();
            stats.First().PreviousAiring.Should().NotHaveValue();
        }

        [Test]
        public void should_not_include_unmonitored_episode_in_episode_count()
        {
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().EpisodeCount.Should().Be(0);
        }

        [Test]
        public void should_include_unmonitored_episode_with_file_in_episode_count()
        {
            GivenEpisodeWithFile();
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().EpisodeCount.Should().Be(1);
        }

        [Test]
        public void should_have_size_on_disk_of_zero_when_no_episode_file()
        {
            GivenEpisode();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().SizeOnDisk.Should().Be(0);
        }

        [Test]
        public void should_have_size_on_disk_when_episode_file_exists()
        {
            GivenEpisode();
            GivenEpisodeFile();

            var stats = Subject.SeriesStatistics();

            stats.Should().HaveCount(1);
            stats.First().SizeOnDisk.Should().Be(_episodeFile.Size);
        }
    }
}
Q: Outlook Interop: how to iterate all the calendars? I want to get all events from all calendars; how do I iterate through all calendar folders and then all events for each calendar?

A: If I had to guess, though I'm just getting into Outlook myself, I would suggest the following:

Outlook.Application app = new Outlook.Application();
Outlook.NameSpace ns = app.GetNamespace("MAPI");
Outlook.MAPIFolder folder = ns.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar);

Then something along the lines of:

foreach (Outlook.MAPIFolder subFolder in folder.Folders)
{
    // do something with subFolder
}

And you could probably create something recursive to exhaust all possibilities of the MAPIFolder.Folders property.

EDIT Ultimately, try stepping through in the debugger once you've gotten the default folder and see what you're left with. My guess is this will have the information you need.
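The recursion the answer hints at can be illustrated with a minimal stand-in class (Python here, just to show the traversal pattern; `Folder`, its attribute names, and the sample tree are invented, mirroring `MAPIFolder`'s `Folders` and `Items` collections):

```python
class Folder:
    """Minimal stand-in for Outlook's MAPIFolder (invented for illustration)."""
    def __init__(self, name, folders=(), items=()):
        self.Name = name
        self.Folders = list(folders)   # subfolders, like MAPIFolder.Folders
        self.Items = list(items)       # events, like MAPIFolder.Items

def walk_calendars(folder):
    """Yield (folder, item) for every item in this folder and all subfolders."""
    for item in folder.Items:
        yield folder, item
    for sub in folder.Folders:
        yield from walk_calendars(sub)

root = Folder("Calendar",
              folders=[Folder("Work", items=["standup"]),
                       Folder("Home", folders=[Folder("Kids", items=["recital"])])],
              items=["dentist"])

for f, event in walk_calendars(root):
    print(f.Name, event)
```

The same shape works against the real object model: start from the folder returned by `GetDefaultFolder` and recurse over its `Folders` collection, collecting `Items` at each level.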
[Ultrasound imaging of the abdominal aorta]. Abnormalities of the abdominal aorta and the visceral vessels can represent a diagnostic challenge in patients with both acute and chronic clinical symptoms. In addition to the primary conventional examination using color-coded duplex ultrasound, contrast-enhanced ultrasound (CEUS) with low mechanical index (low MI) may contribute to achieving a precise diagnosis. CEUS is a new and promising method in the diagnosis and follow-up of aortic and visceral artery lesions. Color-coded duplex ultrasound and CEUS with SonoVue(R) allow a rapid and non-invasive diagnosis especially in critically ill patients as these methods can readily be applied at the bedside. In this article the contribution of color-coded duplex ultrasound and CEUS as compared to multi-slice computed tomography angiography (MS-CTA) in various pathologies of the abdominal aorta and the visceral arteries will be addressed.
{ "pile_set_name": "PubMed Abstracts" }
You don't have to agree with this. But it's time that the oil companies stopped using our troops and other countries for their own purposes. They have the money to hire outside CONTRACTORS. Why then are our troops doing so much for so little pay? If the oil companies want them to fight their battles, then start their base pay at $100,000.00 a year. I back our troops but not the policy of literally renting them out. It's time to stop the bloodshed for the sake of profit. It's time to stop and hear the babies cry, the innocent people of the world cry out. The blood of past wars cries out. And what does it cry out? "ENOUGH." Think about it when you fill up. IT'S BLOOD. One drop on the ground. Whose is it? Enough said! I am ready for the flak now. Barry
{ "pile_set_name": "Pile-CC" }
Q: Sample data - "Issue while executing stored procedure which consists both update and insert statements"

Below are the sample table and file details for the question which I have asked on "Issue while executing stored procedure which consists both update and insert statements". Below are the steps I am following before executing the procedure.

I will get a file from the vendor which contains the data in the below format:

6437,,01/01/2017,3483.92,,
14081,,01/01/2017,8444.23,,

I am loading these data to the table NMAC_PTMS_NOTEBK_SG. In the above file the 1st column will be the asset. I am updating the table with an extra column named lse_id with respect to that asset. Now the NMAC_PTMS_NOTEBK_SG table will have the data in the below format:

LSE_ID   AST_ID  PRPRTY_TAX_DDCTN_CD  LIEN_DT     ASES_PRT_1_AM  ASES_PRT_2_AM
5868087  5049    Null                 01-01-2017  3693.3         NULL

Now my procedure will start. In my procedure the logic should be in a way I need to take the lse_id from NMAC_PTMS_NOTEBK_SG and compare the same in the MJL table (here lse_id = app_lse_s). Below is the structure for the MJL table.
CREATE TABLE LPR_LP_TEST.MJL
(
    APP_LSE_S     CHAR(10 BYTE)       NOT NULL,
    DT_ENT_S      TIMESTAMP(3)        NOT NULL,
    DT_FOL_S      TIMESTAMP(3),
    NOTE_TYPE_S   CHAR(4 BYTE)        NOT NULL,
    PRCS_C        CHAR(1 BYTE)        NOT NULL,
    PRIO_C        CHAR(1 BYTE)        NOT NULL,
    FROM_S        CHAR(3 BYTE)        NOT NULL,
    TO_S          CHAR(3 BYTE)        NOT NULL,
    NOTE_TITLE_S  VARCHAR2(41 BYTE)   NOT NULL,
    INFO_S        VARCHAR2(4000 BYTE),
    STAMP_L       NUMBER(10)          NOT NULL,
    PRIVATE_C     CHAR(1 BYTE),
    LSE_ACC_C     CHAR(1 BYTE),
    COL_STAT_S    CHAR(4 BYTE),
    INFO1_S       VARCHAR2(250 BYTE),
    INFO2_S       VARCHAR2(250 BYTE),
    INFO3_S       VARCHAR2(250 BYTE),
    INFO4_S       VARCHAR2(250 BYTE),
    NTBK_RSN_S    CHAR(4 BYTE)
)
TABLESPACE LPR_LP_TEST
PCTUSED 0
PCTFREE 25
INITRANS 1
MAXTRANS 255
STORAGE (
    INITIAL 64K
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;

CREATE UNIQUE INDEX LPR_LP_TEST.MJL_IDX0 ON LPR_LP_TEST.MJL (APP_LSE_S, DT_ENT_S)
LOGGING
TABLESPACE LPR_LP_TEST
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
    INITIAL 64K
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
)
NOPARALLEL;

CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUD"
AFTER INSERT OR UPDATE OR DELETE ON mjl
BEGIN
    mpkg_trig_mjl.mp_mjl_aiud;
END mt_mjl_aiud;
/

CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUDR"
AFTER INSERT OR UPDATE OR DELETE ON mjl
FOR EACH ROW
BEGIN
    mpkg_trig_mjl.mp_mjl_aiudr (INSERTING, UPDATING, DELETING,
                                :NEW.app_lse_s, :NEW.prcs_c, :NEW.note_type_s,
                                :OLD.app_lse_s, :OLD.prcs_c, :OLD.note_type_s);
END mt_mjl_aiudr;
/

CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_BIUD"
BEFORE INSERT OR UPDATE OR DELETE ON mjl
BEGIN
    mpkg_trig_mjl.mp_mjl_biud;
END mt_mjl_biud;
/

CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_OBIUR"
BEFORE INSERT OR UPDATE ON mjl
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        :NEW.stamp_l := mpkg_util.mp_time_ticker;
    ELSE
        IF :OLD.stamp_l > 999999990 THEN
            :NEW.stamp_l := 1;
        ELSE
            :NEW.stamp_l := :OLD.stamp_l + 1;
        END IF;
    END IF;
END mt_mjl_obiur;
/

Below is the procedure I am using which you have provided in the previous post, and it is almost working for me.

CREATE OR REPLACE PROCEDURE LPR_LP_TEST.SP_PTMS_NOTES
(
    p_app_lse_s IN mjl.app_lse_s%TYPE,
    --p_dt_ent_s IN mjl.dt_ent_s%TYPE,
    --p_note_type_s IN mjl.note_type_s%TYPE,
    --p_prcs_c IN mjl.prcs_c%TYPE,
    --p_prio_c IN mjl.prio_c%TYPE,
    --p_note_title_s IN mjl.note_title_s%TYPE,
    --p_info1_s IN mjl.info1_s%TYPE,
    --p_info2_s IN mjl.info2_s%TYPE
)
AS
    --v_rowcount_i number;
    --v_lien_date mjl.info1_s%TYPE;
    --v_lien_date NMAC_PTMS_NOTEBK_SG.LIEN_DT%TYPE;
    --v_asst_amount mjl.info2_s%TYPE;
    v_app_lse_s mjl.app_lse_s%TYPE;
BEGIN
    v_app_lse_s := trim(p_app_lse_s);

    -- I hope this dbms_output line is for temporary debug purposes only
    -- and will be removed in the production version!
    dbms_output.put_line(app_lse_s);

    merge into mjl tgt
    using (select lse_s app_lse_s,
                  sysdate dt_ent_s,
                  'SPPT' note_type_s,
                  'Y' prcs_c,
                  '1' prio_c,
                  'Property Tax Assessment' note_title_s,
                  lien_dt info1_s,
                  ases_prt_1_am info2_s
           from nmac_ptms_notebk_sg
           where lse_id = v_app_lse_s) src
    on (trim(tgt.app_lse_s) = trim(src.app_lse_s))
       -- and tgt.dt_ent_s = src.dt_ent_s)
    when matched then
        update set
            --tgt.dt_ent_s = src.dt_ent_s,
            tgt.note_title_s = src.note_title_s,
            tgt.info1_s = src.info1_s,
            tgt.info2_s = src.info2_s
        where
            --tgt.dt_ent_s != src.dt_ent_s
            tgt.note_title_s != src.note_title_s
            or tgt.info1_s != src.info1_s
            or tgt.info2_s != src.info2_s
    when not matched then
        insert (tgt.app_lse_s, tgt.dt_ent_s, tgt.note_type_s, tgt.prcs_c,
                tgt.prio_c, tgt.from_s, tgt.to_s, tgt.note_title_s,
                tgt.info1_s, tgt.info2_s)
        values (src.app_lse_s, src.dt_ent_s, src.note_type_s, src.prcs_c,
                src.prio_c, src.from_s, src.to_s, src.note_title_s,
                src.info1_s, src.info2_s);

    commit;
end;

Now the logic should be I need to pass lse_id from the file which I have already saved to the procedure. If the lse_id which I am passing is matching with the app_lse_s in the mjl table then I need to update that row and some of the hardcoded fields, which I am doing correctly.
If the lse_id is not matching then I have to insert a new row for that lease and the hardcoded fields. The issue which I am facing is the dt_ent_s in the mjl table is a unique constraint. Please let me know if the above is making any sense to you...

A: "The issue which I am facing is the dt_ent_s in the mjl table is a unique constraint."

Actually it's not, it's part of a compound unique key. So really your ON clause should match on

on (tgt.app_lse_s = src.app_lse_s
    and tgt.dt_ent_s = src.dt_ent_s)

Incidentally, the use of trim() in the ON clause is worrying, especially trim(tgt.app_lse_s). If you're inserting values with trailing or leading spaces your "unique key" will produce multiple hits when you trim them. You should trim the spaces when you load the data from the file and insert trimmed values in your table.

"ORA-00001: unique constraint (LPR_LP_TEST.MJL_IDX0) violated"

MJL_IDX0 must be a unique index. That means you need to include its columns in any consideration of unique records. Clearly there is a difference between your straight INSERT logic and your MERGE INSERT logic. You need to compare the two statements and figure out what the difference is.
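For readers unfamiliar with MERGE, the match-then-update-else-insert semantics at issue here can be sketched with a small self-contained example. This uses Python's built-in sqlite3 purely to illustrate the semantics (sqlite is not Oracle, and the table is a simplified stand-in for MJL with its compound unique key):

```python
import sqlite3

# Simplified stand-in for the MJL merge target: a compound unique key on
# (app_lse_s, dt_ent_s), mirroring the MJL_IDX0 index discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mjl (
        app_lse_s TEXT,
        dt_ent_s  TEXT,
        info1_s   TEXT,
        UNIQUE (app_lse_s, dt_ent_s)
    )
""")

def merge_row(app_lse_s, dt_ent_s, info1_s):
    """Update the row when the compound key matches, else insert it --
    the same matched/not-matched split a MERGE statement performs."""
    cur = conn.execute(
        "UPDATE mjl SET info1_s = ? WHERE app_lse_s = ? AND dt_ent_s = ?",
        (info1_s, app_lse_s, dt_ent_s))
    if cur.rowcount == 0:  # no match on the compound key -> insert a new row
        conn.execute("INSERT INTO mjl VALUES (?, ?, ?)",
                     (app_lse_s, dt_ent_s, info1_s))

merge_row("5868087", "2017-01-01", "3693.3")   # not matched -> insert
merge_row("5868087", "2017-01-01", "8444.23")  # matched -> update in place

rows = conn.execute("SELECT * FROM mjl").fetchall()
print(rows)  # [('5868087', '2017-01-01', '8444.23')]
```

Because both columns of the key participate in the match, no second row is ever inserted for the same (app_lse_s, dt_ent_s) pair, which is exactly what matching on only part of the key fails to guarantee.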
{ "pile_set_name": "StackExchange" }
Suicide Note Causes Stir Over Fictional Treasure

2013-08-17T12:38:16Z

OVERLAND PARK, Kansas - A suicide note left by a local sports reporter in Overland Park, Kansas, led police on a wild goose chase for buried treasure Friday. Police say Martin Manley had been planning his death for months and had been outlining the details on his website. In one post Manley said he'd buried $200,000 worth of gold and silver coins in Overland Park. He even provided the exact GPS location. After searching the area with metal detectors, police say there is no buried treasure. Police turned away at least 12 people with shovels who came looking for the loot.

The Dallas Mavericks have hired outside counsel to investigate allegations of inappropriate conduct by former team president Terdema Ussery in a Sports Illustrated report that described a hostile workplace environment for women.

Students who survived the Florida school shooting are preparing to flood the Capitol pushing to ban the assault-style rifle used to kill 17 people, vowing to make changes in the November election if they can't persuade lawmakers to change law now.
{ "pile_set_name": "Pile-CC" }
Metal cutting or "machining" is recognized to be one of the most important and most widely used processes in manufacturing. Among the common machining operations are shaping, planing, milling, facing, broaching, grinding, sawing, turning, boring, drilling and reaming. Some of these processes, such as sawing, operate on both the external and internal surfaces of the workpiece, while others operate only on the internal (e.g. reaming) or external (e.g. milling) surfaces of the workpiece. These various processes are described in detail in DeGarmo, Materials and Processes in Manufacturing (3rd edn., 1969), especially in chapter 16, "Metal Cutting". The measure of productivity of a given machining operation is determined by the total amount of metal removed from the workpiece in a given period of time. To this end a wide variety of materials have been used or suggested as cutting tools. These are commonly classified as tool steels, high speed steels, cast non-ferrous alloys, sintered carbides and ceramics. (There are also some limited applications for diamonds). The commonly measured parameters of cutting tool performance are cutting speed, depth of cut, feed rate and tool life. Each of these prior art cutting tool materials is deficient in one or more of these parameters. Tool steel, high speed steel and cast non-ferrous alloys all have critical temperature limitations which restrict their cutting speed to relatively low rates, as measured in feet per minute (fpm) or meters per minute (m/min). Typically high speed steels are restricted to 100-225 fpm (30-70 m/min) for cutting steel and 250-300 fpm (75-90 m/min) for cutting non-ferrous materials. The cast non-ferrous alloys will operate at up to about twice those rates. The carbide materials, such as tungsten carbide, improve on the cutting speed rates of the steels by a factor of 2-5, particularly when the carbides are coated. However, the carbides are not as tough as the steels and are susceptible to impact breakage. 
This severely limits their usage to applications where impact is a factor, such as in making interrupted cuts or in machining hard workpieces. Ceramic materials, such as alumina, have been found to produce cutting tools which can operate at much higher speeds than the conventional steel and carbide cutting tools. For instance, cutting speeds of 500-1400 fpm (150-430 m/min) for steel cutting have been reported. Tool life with the ceramics, however, has been a serious problem, because the ceramics are even more brittle and less tough than the carbides. Of particular concern has been the tendency of the ceramic materials to fracture catastrophically and unexpectedly when subjected to impact. Thus, while cutting speeds have been high for the ceramic materials, it has been possible to operate them only at quite low feed rates, much lower than those used for the steels and carbide cutting tools. It has thus been found that productivity, which is a function of both cutting speed and feed rate, is relatively low for all prior art types of cutting tools. The steel and carbide tools, while having high feed rates, have relatively low cutting speeds. Conversely, the ceramics, while having high cutting speeds, operate only at low feed rates. Productivity, determined as total amount of metal removal for a given time period, therefore remains relatively low regardless of the type of cutting tool used. References relating to the use of various ceramics as cutting tools include U.S. Pat. No. 4,543,343 to Iyori et al., which describes use of a ceramic comprised of alumina, zirconia and titanium carbide together with titanium boride. Another reference is U.S. Pat. No. 4,366,254, which describes a cutting tool ceramic comprised of alumina and zirconia together with carbides, nitrides or carbo-nitrides of Group IVB and VB metals. There have been some teachings that silicon carbide fiber reinforced ceramics can be used in various machine parts. 
Examples shown have included heat exchangers, molds, nozzles, turbines, valves and gears; see Japanese patents nos. 59-54680 and 59-102681. Such disclosures, however, are not particularly pertinent to the cutting tool invention described herein, since such parts are not subject to impact stresses as part of their normal service environment. No mention is made of improved toughness or impact resistance nor are such properties of interest in the articles described. It has also been disclosed that fracture toughness in ceramics can be improved by incorporation of silicon carbide whiskers into the ceramics. Two papers by Becher and Wei have described mechanisms for increase in toughness as related to whisker content and orientation; see "Toughening Behavior is SiC Whisker Reinforced Alumina", Comm. Am. Cer. Soc. (Sept., 1984) and "Transformation Toughened and Whisker Reinforced Ceramics", Soc. Auto. Engrs., Proc. 21st Auto. Tech. Dev. Mtg., 201-205 (Mar., 1984). See also Wei U.S. Pat. No. 4,543,345. These papers, however, deal only with thermal and flexural applications and do not address anything with respect to machining processes. It would therefore be highly advantageous to have a tool which operates at the high cutting speeds of the ceramics while also allowing the high feed rates found with the steels and carbides. Such cutting tools would yield productivities significantly higher than any of the prior art tools.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Using LINQ to find the class that is in the bottom of the inheritance chain

Given a sequence of assemblies with classes, e.g.

AssemblyA
    Customer
AssemblyB
    Customer : AssemblyA.Customer
AssemblyC
    Customer : AssemblyB.Customer

Given the name Customer (not taking the namespace into account), can I use LINQ to query against the sequence of assemblies to find the Customer at the bottom of the inheritance chain (AssemblyC.Customer in this case)?

A:

IEnumerable<Assembly> assemblies = ...
Assembly assemblyA = ...

// Since you say the only fact you wish to use about the class is that it
// is named 'Customer' and it exists in Assembly A, this is just about the
// only way to construct the Type object. Pretty awful though...
Type customerType = assemblyA.GetTypes()
                             .Single(t => t.Name == "Customer");

// From all the types in the chosen assemblies, find the ones that subclass
// Customer, picking the one with the deepest inheritance hierarchy.
Type bottomCustomerType = assemblies.SelectMany(a => a.GetTypes())
                                    .Where(t => t.IsSubclassOf(customerType))
                                    .OrderByDescending(t => t.GetInheritanceDepth())
                                    .First();

...

public static int GetInheritanceDepth(this Type type)
{
    if (type == null) throw new ArgumentNullException("type");

    int depth = 0;

    // Keep walking up the inheritance tree until there are no more base classes.
    while (type != null)
    {
        type = type.BaseType;
        depth++;
    }

    return depth;
}
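The selection logic in the answer (compute each candidate's inheritance depth, then take the deepest subclass) can be illustrated with a short sketch. This is Python rather than C#, purely for illustration, with hypothetical CustomerA/B/C classes standing in for the per-assembly Customer types:

```python
# Hypothetical stand-ins for the Customer classes spread across assemblies.
class CustomerA:             # AssemblyA.Customer
    pass

class CustomerB(CustomerA):  # AssemblyB.Customer
    pass

class CustomerC(CustomerB):  # AssemblyC.Customer
    pass

def inheritance_depth(cls):
    """Number of classes from cls up to the root, analogous to the
    GetInheritanceDepth extension method in the answer above."""
    return len(cls.__mro__)

def bottom_subclass(base, candidates):
    """Pick the candidate subclassing `base` with the deepest hierarchy."""
    subclasses = [c for c in candidates
                  if issubclass(c, base) and c is not base]
    return max(subclasses, key=inheritance_depth)

result = bottom_subclass(CustomerA, [CustomerA, CustomerB, CustomerC])
print(result.__name__)  # CustomerC
```

As in the C# version, ties are possible when two unrelated subclasses sit at the same depth; both approaches simply pick one of the deepest candidates.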
{ "pile_set_name": "StackExchange" }
250 U.S. 208 (1919) CAPITAL TRUST COMPANY, ADMINISTRATOR OF ARNOLD, v. CALHOUN. No. 368. Supreme Court of United States. Argued May 2, 1919. Decided June 2, 1919. ERROR TO THE COURT OF APPEALS OF THE STATE OF KENTUCKY. *209 Mr. T.L. Edelen for plaintiff in error. Mr. Joseph W. Bailey and Mr. Charles F. Consaul for defendant in error. *212 MR. JUSTICE McKENNA delivered the opinion of the court. Proceeding in equity under the law of Kentucky for an accounting from the Capital Trust Company as administrator de bonis non of the estate of Thomas N. Arnold, deceased, and that the estate be settled and distributed. Defendant in error Calhoun and Calhoun & Sizer, a firm composed of C.C. Calhoun and Adrian Sizer, attorneys at law, appeared in the proceeding and by cross-petition prayed judgment against the trust company as such administrator for the sum of $1504.50, with interest from July 10, 1915. *213 An outline of the facts is as follows: Thomas N. Arnold, prior to his death, believing that he had a just claim against the United States, entered into a contract with the firm of Calhoun & Sizer and employed it to undertake the prosecution of the claim, and on August 1, 1905, entered into a written contract with it by which, in consideration of the services rendered and to be rendered by it in the prosecution of the claim, he agreed to pay it a fee equal in amount to 50% of whatever sum of money should be awarded or collected on the claim, the payment of which was made a lien upon the claim or upon any draft or evidence of payment that might be issued in liquidation thereof. The firm undertook the prosecution of the claim and bills were introduced in Congress for its payment, and on May 22, 1908, it was referred to the Court of Claims by a resolution of the United States Senate for findings of fact under § 14 of the Act of March 3, 1887, c. 359, 24 Stat. 505, now § 151 of the Judicial Code. 
About that time the firm of Calhoun & Sizer was dissolved and subsequently Arnold died and the beneficiaries of the estate entered into a written contract with defendant in error, C.C. Calhoun, to continue the prosecution of the claim and agreed to pay him 50% of the amount which might be collected, the fee to be a lien "on any warrant" which might "be issued in payment of said claim." January 15, 1912, the Court of Claims made findings of fact in the matter of the claim and stated the amount thereof as $5015.00. The court's findings were certified to Congress and that body, by an Act approved March 4, 1915, c. 140, 38 Stat. 962, made an appropriation for the payment of the claim and the Secretary of the Treasury was directed to pay it. The act, however, contained the following provisions: "That no part of the amount of any item appropriated in this bill in excess of twenty per centum thereof shall be *214 paid or delivered to or received by any agent or agents, attorney or attorneys on account of services rendered or advances made in connection with said claim. "It shall be unlawful for any agent or agents, attorney or attorneys to exact, collect, withhold or receive any sum which in the aggregate exceeds twenty per centum of the amount of any item appropriated in this bill on account of services rendered or advances made in connection with said claim, any contract to the contrary notwithstanding. Any person violating the provisions of this Act shall be deemed guilty of a misdemeanor, and upon conviction thereof shall be fined in any sum not exceeding $1000." 
June 7, 1915, Calhoun requested the Secretary of the Treasury to issue a warrant to him for the sum of $1003, which he recited was to be payable to him on account of services as attorney in the claim of the Capital Trust Company against the United States, as appropriated for by the act of Congress, the receipt of said warrant to be taken and accepted as a full and final release and discharge of any claim he had against the United States on account of services in said claim. Afterward, on July 1, 1915, notice was given to Calhoun, as attorney for the claimant, that in settlement of the claim a check was mailed him for $1003, being 20% of the claim, and to the trust company as administrator de bonis non of Arnold, check for $4012. A part of this money is still in the hands of such administrator and there is no other property belonging to the estate. The cross petition additionally asserts the following: No part of the fee except the sum of $1003 has been paid and there is a balance due of $1504.50, with interest from July, 1915, the date the money was received by the trust company. July 10, 1915, Calhoun presented his claim to the administrator duly proved and demanded payment, but payment was refused. The whole of the $1504.50, therefore, *215 remains unpaid, and Calhoun has a lien upon the fund for the payment, he having accepted the check for $1003 under protest and only on account. The contract preceded the act of Congress and when the act was passed such contracts were lawful and Congress was without authority to take from him his property without due process of law or just compensation therefor or to deprive him of his liberty of contract. This is repeated and emphasized in various ways and the Fifth Amendment is especially invoked as sustaining it, and for which reasons it is alleged that the "attempted limitation of attorney's fees by said act" was "null and void." A demurrer to the cross petition was overruled and the trust company answered. 
A detail of its averments is not necessary. It practically admits those of the cross-petition and pleads in defense the provisions of the act of Congress, and also § 3477, Rev. Stats. A demurrer was sustained to the answer and judgment rendered for Calhoun for the sum of $1504.50, with interest from July 1, 1915. The judgment was affirmed by the Court of Appeals. The court said: "This case runs on all fours with Black v. O'Hara's Admr., 175 Ky. 623, where it was held that the Act of Congress approved March 4, 1915, appropriating money for the payment of similar claims and containing a similar provision limiting an attorney's fee to twenty per cent. of the amount recovered, in so far as it attempted to limit the amount of a fee theretofore earned, was unconstitutional and invalid. "We have been urged to recede from the rule announced in Black v. O'Hara's Admr., supra, as being unsound in principle; but after a careful reconsideration of the reasoning by which the decision in that case is supported, we are satisfied of its soundness, and reaffirm it." We encounter at the outset a question upon the form *216 of the judgment. The cross-petition was presented in a proceeding to require an accounting of the administrator of Arnold and the petition asserted a claim and lien upon the money in the administrator's hands received from the United States Government. The judgment, however, does not refer to that money or the lien upon it; it provides only that Calhoun recover of the administrator "the sum of fifteen hundred and four 50/100 dollars with interest from July 1st, 1915, and his costs herein and may have execution," etc. If the judgment only establishes a claim against the administrator to be satisfied, not out of the moneys received from the United States but from other assets of the estate, a situation is presented which it was said in Nutt v. Knut, 200 U.S. 12, 21, would not encounter legal objection. 
In other words, the limitation in the act appropriating the money to 20% as the amount to be paid to an agent or attorney would have no application or be involved. But the judgment is construed by the parties as having more specific operation, construed as subjecting the money received from the Government to the payment of the balance of Calhoun's fee, doubtless because the estate has no other property. On that account it is attacked by the trust company and defended by Calhoun. The controversy thus presented is discussed by counsel in two propositions: (1) The validity of the contract independently of the limitation imposed by Congress upon the appropriated money; (2) the power of Congress to impose the limitation as to that money. The latter we regard as the main and determining proposition; the other may be conceded, certainly so far as fixing the amount of compensation for Calhoun's services (we say Calhoun's services as the appearance of the firm of Calhoun & Sizer was withdrawn), and even so far as the contract provided for a lien, if the distinction made by counsel be tenable — that *217 is, a distinction between a lien on the claim and a lien "upon any draft, money, or other evidence of payment," to quote from the first agreement, or "on any warrant which may be issued in payment," to quote from the second agreement. So far as the contract fixed the amount of fee it is within the rule of Nutt v. Knut, supra, and, for the sake of the argument, the lien may be conceded to be valid against § 3477, Rev. Stats., to the contrary if it be regarded as having been given not upon the claim but upon its evidence, as counsel contend. It may, therefore, not only escape the defect that was held fatal to the lien asserted in Nutt v. Knut, but may claim the support of McGowan v. Parish, 237 U.S. 285. 
We, however, need not dwell upon the distinctions (their soundness may be disputed) nor upon the contentions based upon them, because, as we have said, we consider the other proposition, that is, the power of Congress over the appropriated money and the limitation of payment out of it to an agent or attorney to 20% of the claim, to be the decisive one. In its discussion counsel for Calhoun have gone far afield and have invoked many propositions of broad generality — have even adduced as impliedly against the power, if we understand counsel, the constitution of the Court of Claims and its jurisdiction as weight in the same direction. We can only instance some of the points of the argument. The Act of February 26, 1853, c. 80, 10 Stat. 161, now § 823, Rev. Stats., is cited as recognizing the right of attorneys to compensation for their services in claims against the United States, and it is said that contracts for such compensation have been universally sanctioned as legal. And, further, official statements are adduced to the effect that the Court of Claims is so constituted "that the successful prosecution of a claim" in it "is something more than a merely perfunctory performance on part of *218 counsel"; it is "a matter of great business hazard and risk to counsel when done upon purely contingent fees." And in many cases, it is further urged, no other than contingent fees are possible and to deny them is practically to deny the right to counsel. Mr. Justice Miller is quoted from, in Taylor v. Bemiss, 110 U.S. 42, in illustration of such result and its injury. The right to counsel being thus recognized, and recognized antecedently to the contract now involved, it became, counsel contend, a "pre-existing valid right," and to take it away is to divest the right — to take it away is to deprive of property of value assured of protection by the Constitution of the United States. To sustain the contentions a number of state cases are cited. Among them is Black v. 
O'Hara, Admr., 175 Kentucky, 623, the case which the Court of Appeals regarded as authority for its ruling in the present case. In a general sense there is force and much appeal in the contentions, but we think they carry us into considerations beyond our cognizance. Liberty in any of its exertions and its protection by the Constitution are of concern. The right to bind by contract and require performance of the contract are examples of that liberty and that protection and they might have resistless force against any interfering or impairing legislation if the contest in the case was simply one between Calhoun and the Arnold estate. But there are other elements to be considered — there is the element of the condition Congress imposed on the subject-matter of the controversy regarded as a condition of its grant. Relief could only be had through legislation. This was petitioned and the Senate of the United States was prompted to refer the claim to the Court of Claims. A defect of remedy remained even after the court had been thus invoked and had reported the amount and facts of the claim. Further legislation was necessary, but it could not have been compelled; it *219 was optional, not compulsory; and it would seem to require no argument to convince that the terms of its enactment must be taken as expressed and the relief it granted accepted with the condition imposed upon it. Indeed, the proposition is confused by its discussion. And it is certainly difficult to deal with the distinction that counsel make between preexisting and prospective transactions. The right is absolute and universal and necessarily must be to have any strength at all. It is only arbitrary in the sense that many of the faculties of government are. 
And we have seen there was exertion of one of its powers in the present case — not, however, to interfere with or lessen the asserted obligation of the contract between Arnold and Calhoun, but to limit only the application of the money gratuitously appropriated in the payment of attorneys' fees. The contention is that this cannot be done, or, to put it another way, that the appropriation, though it could not be compelled, was yet subservient to the contract of Calhoun (and, we may interject, if for 50%, for any per cent. or terms) and that he was entitled to all the contract provided, denuded of the condition imposed upon the appropriation. The contention has no legal basis, and it may be said it has no equitable one. Neither the justice nor the policy of what sovereignty may do or omit to do can be judged from partial views or particular instances. It is easy to conceive what difficulties beset and what circumstances had to be considered in legislating upon such claims. Definite dispositions were matters of reflection and, it may be, experience — imposition was to be protected against as well as just claims provided for, and, considering claimants and their attorneys in the circumstances, it may have seemed to Congress that the limitation imposed was fully justified, that 20% of the amounts appropriated would be a proper adjustment between them. We are not concerned, however, to accuse or defend. Whatever might have been *220 the moving considerations, the power exercised must be sustained. Frisbie v. United States, 157 U.S. 160; Ball v. Halsell, 161 U.S. 72. The first case dealt with conditions upon pension legislation; the second concerned a claim against the United States on account of Indian depredations. It is, therefore, contended that they are unlike Calhoun's contract with Arnold and that their principle is not applicable. We think otherwise. The legislation passed on was sustained as within the power of government. 
We conclude, therefore, that Calhoun's claim for a balance due as fees cannot be paid out of the moneys appropriated by Congress and now in the hands of the administrator de bonis non, or recognized as having any validity as against that fund. Beyond this we need not go. Judgment reversed and cause remanded for further proceedings not inconsistent with this opinion. MR. JUSTICE HOLMES concurs in the result. MR. JUSTICE McREYNOLDS took no part in the decision.
{ "pile_set_name": "FreeLaw" }
Time-dependent effects on small intestinal transport by absorption-modifying excipients. The relevance of the rat single-pass intestinal perfusion model for investigating in vivo time-dependent effects of absorption-modifying excipients (AMEs) is not fully established. Therefore, the dynamic effect and recovery of the intestinal mucosa was evaluated based on the lumen-to-blood flux (Jabs) of six model compounds, and the blood-to-lumen clearance of 51Cr-EDTA (CLCr), during and after 15- and 60-min mucosal exposure of the AMEs, sodium dodecyl sulfate (SDS) and chitosan, in separate experiments. The contribution of enteric neurons on the effect of SDS and chitosan was also evaluated by luminal coadministration of the nicotinic receptor antagonist, mecamylamine. The increases in Jabs and CLCr (maximum and total) during the perfusion experiments were dependent on exposure time (15 and 60 min), and the concentration of SDS, but not chitosan. The increases in Jabs and CLCr following the 15-min intestinal exposure of both SDS and chitosan were greater than those reported from an in vivo rat intraintestinal bolus model. However, the effect in the bolus model could be predicted from the increase of Jabs at the end of the 15-min exposure period, where a six-fold increase in Jabs was required for a corresponding effect in the in vivo bolus model. This illustrates that a rapid and robust effect of the AME is crucial to increase the in vivo intestinal absorption rate before the yet unabsorbed drug in lumen has been transported distally in the intestine. Further, the recovery of the intestinal mucosa was complete following 15-min exposures of SDS and chitosan, but it only recovered 50% after the 60-min intestinal exposures. 
Our study also showed that the luminal exposure of AMEs affected the absorptive model drug transport more than the excretion of 51Cr-EDTA, as Jabs for the drugs was more sensitive than CLCr at detecting dynamic mucosal AME effects, such as response rate and recovery. Finally, there appears to be no nicotinergic neural contribution to the absorption-enhancing effect of SDS and chitosan, as luminal administration of 0.1 mM mecamylamine had no effect.
WEDNESDAY, Feb. 6 (HealthDay News) -- Treating major depression safely and affordably is a challenge. Now, Brazilian researchers have found that two techniques often used individually produce better results when used together. The researchers paired the antidepressant Zoloft (sertraline) and a type of noninvasive brain stimulation called transcranial direct current stimulation (tDCS) to treat people with moderate to severe symptoms of major depression. Transcranial direct current stimulation appears to be just as effective a treatment as Zoloft, but the two together are even more effective, said lead researcher Dr. Andre Russowsky Brunoni, from the Clinical Research Center at University Hospital of the University of Sao Paulo. This painless treatment uses a low-intensity electrical current to stimulate specific parts of the brain. Previously, it has been tested for various conditions, such as stroke, anxiety, pain and Parkinson's disease, the researchers said. Dr. Sarah Hollingsworth Lisanby, chair of the department of psychiatry and behavioral sciences at Duke University School of Medicine, is enthusiastic about the findings. Lisanby said the advent of technologies such as noninvasive brain stimulation is "one of the exciting new developments" in treating depression. Transcranial direct current stimulation is one of a family of approaches that uses electrical or magnetic fields to stimulate the brain to alter brain function, she said. "These techniques offer great promise for people with depression, because we know, unfortunately, medications aren't always effective, and psychotherapy isn't always effective, so having effective alternatives is important," Lisanby said. She noted the current study's two-pronged approach addresses both aspects of brain action. The drug affects the chemical aspects of brain function, while the electrical stimulation targets the brain's electrical activity. 
"Because the brain is an electro/chemical organ, using both electrical and chemical approaches to treat it makes intuitive sense," she said. For the report, published online Feb. 6 in JAMA Psychiatry, Brunoni's team divided 120 patients with major depression who had never taken antidepressants into groups taking Zoloft or an inactive placebo every day, with or without electrical brain stimulation, or with sham stimulation. After six weeks of treatment, Brunoni's group found depression significantly improved among patients receiving Zoloft or electrical brain stimulation. However, the biggest gain was seen in those who received both therapies. To gauge improvement, the researchers used the Montgomery-Asberg depression rating scale. Overall, the patients received 12 half-hour brain stimulation sessions over six weeks. Side effects from brain stimulation usually are mild and include itching, scratching and redness on the stimulated area, Brunoni said. However, the combination treatment was associated with more cases of mania after treatment, he said. "Although we could not rule out whether this association was spurious, other studies should investigate this issue," Brunoni said. Brain stimulation alone could be useful for patients who can't take psychiatric drugs, he said. And the devices that deliver the treatment are relatively affordable, the authors noted. Brunoni hopes this study will stimulate additional trials using this approach. "If other studies are also positive, tDCS might be a clinical therapy in the future," he said. People suffering from major depression usually need lifetime treatment, he added. If these study results pan out, this could mean taking antidepressants daily and undergoing weekly sessions of brain stimulation for optimal relief, he said. Another expert said brain stimulation looks "very promising" as an emerging new treatment for depression. "The safety profile is excellent," said Dr.
Colleen Loo, a professor in the School of Psychiatry at the University of New South Wales in Australia. "It is a very mild form of brain stimulation, no risk of seizures, does not impair thinking and may in fact improve thinking," she said. Currently, tDCS is not approved by the U.S. Food and Drug Administration to treat any condition, Lisanby said. However, another noninvasive, brain-stimulating technique is FDA-approved and clinically available, she said. That technique -- called transcranial magnetic stimulation (TMS) -- uses a magnetic field to induce electrical changes within the brain.
*2 - 10*f + 9. Let u(a) = 7*a**3 - 2*a**2 + 2*a. Let g be u(1). Calculate t(g). -12 Let g = 6 - 3. Suppose 5*c - 115 = -4*t - 30, -t + 30 = g*c. Let x(p) = -10 + 0*p**2 + p**3 - 6*p**2 + t. Calculate x(6). 5 Let z(c) be the third derivative of -c**4/24 - c**3/6 + 3*c**2. Let n(k) be the second derivative of k**3/6 + 2*k**2 - k. Let p be n(-5). Determine z(p). 0 Let u(k) = -k + 5. Let g(h) = -3*h + 14. Let q be 3*4/(12/11). Let c(y) = q*u(y) - 4*g(y). What is c(-5)? -6 Let t(b) = 13*b + 17*b**3 - 13*b + 1. Let o be (6/(-18))/((-4)/12). Calculate t(o). 18 Let t(c) = -c + 1. Let a be t(-3). Let b(l) be the third derivative of 0*l + 1/120*l**6 + 1/60*l**5 - 1/8*l**a + 2*l**2 + 1/2*l**3 + 0. Determine b(-3). -6 Let w(j) = j**2 - 5*j + 2. Let o be w(4). Let h(t) = -2*t**3 - 3*t**2 - t - 2. Let a be h(o). Let z(p) = 5*p - 4 - 1 - p**2 + 1. What is z(a)? 0 Let q(u) = -2*u - 1. Let s = 4 - -8. Suppose 0 = i - 0*i - s. Suppose 0 = 3*d + d - i. Determine q(d). -7 Let k(s) = -s - 3. Let f = 9 + -14. Give k(f). 2 Let t(v) = -v**3 + 3*v**2 + v**2 + 5 - 2*v + 5*v**2 - 4*v**2. Let n be t(4). Suppose -f + n = -4*u + 2*u, 5*u = f - 28. Let h(j) = j. Determine h(f). 3 Let m(i) = -3*i + 3 + 0 + 4. What is m(5)? -8 Suppose 0 = 14*u - 32 - 122. Let f(b) = -b**2 + 10*b + 14. Calculate f(u). 3 Suppose 4*n + 4 = -0*i - 4*i, 3*i - 4*n + 24 = 0. Let y(j) = -j**2 - 5*j - 1. Let h be y(i). Let w be (h*3/9)/1. Let b(p) = 8*p**3 - p + 1. What is b(w)? 8 Let c(v) be the third derivative of -v**5/40 - v**4/24 - 2*v**3/3 - 4*v**2. Let u(o) be the first derivative of c(o). Calculate u(4). -13 Let d(s) be the third derivative of s**5/60 + 5*s**4/24 + 2*s**3/3 + 37*s**2. Calculate d(-6). 10 Let z(q) = -q**2 + 13*q - 3. Let t = -10 + 17. Let b(j) = j**2 - 12*j + 3. Let a(i) = t*b(i) + 6*z(i). Suppose 4*m - 2*m - 8 = 0. Give a(m). -5 Let v(o) = o - 10. Let x be v(12). Let y(d) be the second derivative of -3/2*d**x + 1/2*d**4 + 2*d + 2/3*d**3 + 1/20*d**5 + 0. What is y(-5)? 
2 Let k(a) = a**3 - 5*a**2 - 2*a + 7. Let v be 10 - 0/(3 + 1). Let h = 15 - v. Give k(h). -3 Let v(t) = -t**2 - 3*t - 3. Let s(h) = -h**2 - 2*h - 3. Suppose r = -r - 4. Let c be s(r). Give v(c). -3 Let v(j) = 3*j**3 - 9*j**2 - 7*j - 13. Let u(t) = 2*t**3 - 9*t**2 - 7*t - 12. Let r(x) = 4*u(x) - 3*v(x). Determine r(-8). -17 Suppose 0*q = -4*q + 8. Let u(b) = 6 + q*b**2 + 3*b**3 - 2*b + 0*b - 5. Suppose 0 = 5*w - w - 4. Calculate u(w). 4 Let p(r) = r - 5*r + 5*r + 0*r - 4. Calculate p(0). -4 Let m(n) = -2*n - 5 - 4*n + 3*n**2 + 1 + 3*n. Determine m(3). 14 Let z(v) = v**3 - 4*v**2 + 2*v - 5. Let k be z(4). Let y be 30/(-8)*8/k. Let d be (-54)/y - (-4)/(-10). Let n(f) = -f**3 + 4*f**2 + 5*f - 4. Give n(d). -4 Let q(k) = k**2 + 3*k - 7. Let w(s) = -s. Let t(o) = -q(o) + 3*w(o). What is t(-5)? 12 Let b(w) = -w**3 + w**2 - w + 1. Let n be ((-4)/16*0)/1. What is b(n)? 1 Let o(x) be the third derivative of x**5/4 + x**4/24 + 11*x**2. Calculate o(-1). 14 Suppose -2 = -o - 3. Let t(i) = 5*i**3 - i**2 + 1. What is t(o)? -5 Suppose 0*o = -2*o + 58. Suppose 3*m = 3*a + 9, o = -a + 3*a + 5*m. Let s(d) = -2 - d + 2 - a. Calculate s(-4). 2 Let k(h) = h + 1. Let j be 0/((-7)/((-14)/(-6))). Calculate k(j). 1 Let i(c) = c**2 - 5*c + 1. Let d be (-1)/(-3)*0/4. Suppose -b - 2*b - 30 = d. Let a = 14 + b. Determine i(a). -3 Let k(q) be the third derivative of q**6/120 - 7*q**5/60 + 7*q**4/24 - 2*q**3/3 - 19*q**2. Let f(j) = j**2 + 4*j - 6. Let b be f(-6). Give k(b). 2 Let w(z) be the first derivative of z**2 - 5*z + 17. Calculate w(5). 5 Suppose 2*x - 20 = -16. Let m(r) be the third derivative of 0 + 0*r - x*r**2 + r**3 - 1/24*r**4. What is m(0)? 6 Let z(d) = -5*d**3 - d**2 - d. Let t be (2*9/3)/2. Suppose t*s - 4 = 7*s. Give z(s). 5 Let z(x) = x + 0*x + 1 - 13*x. Suppose -20*i = -19*i + 1. Calculate z(i). 13 Let r(w) = -14*w - 11 - 13*w + 26*w. What is r(-9)? -2 Let f(d) = -2*d**2 - 19*d - 12. Let k be f(-9). Let p(m) = m**3 + 2*m**2 - 5*m - 4. Determine p(k). 2 Let x(o) = 3*o + 4. 
Let u = -34 + 29. Give x(u). -11 Suppose 0 = -2*k - 7*j + 4*j + 2, 0 = k - 4*j - 23. Let v(a) = -a**2 + 9*a - 7. Determine v(k). 7 Let j(m) = 6*m - 4. Let x(a) = -a**3 + 9*a**2 + 11*a - 13. Let y be x(10). Give j(y). -22 Let w(f) be the first derivative of -1/3*f**3 + 0*f - 9 + 1/2*f**2. Give w(3). -6 Let q(s) = 4*s**2 + s - 1. Let y be q(1). Let f(d) = 165*d - 333*d + 167*d + 5. What is f(y)? 1 Let x(u) be the first derivative of u**4/4 + 4*u**3/3 - 7*u**2/2 - 4*u + 17. Let v be (-66)/14 - 4/14. What is x(v)? 6 Let w(o) be the second derivative of 2/3*o**3 - 1/3*o**4 + 0 - o**2 - o + 1/10*o**5. Calculate w(2). 6 Let a(r) = r - 14. Suppose j + 0*j = 6*j. Determine a(j). -14 Let c(x) = x**2 + 3*x. Let a be c(-3). Suppose a = -0*g + g - 4. Let i(w) = -2*w - 4. Let v(m) = -2*m - 5. Let b(n) = 6*i(n) - 5*v(n). Calculate b(g). -7 Let x(b) = 6 + 0 - 30*b + 2*b**2 + 34*b - 3*b**2. Determine x(5). 1 Let d(k) be the third derivative of k**6/20 - k**5/30 + k**4/12 - k**3/6 - k**2. Let i = 1 + 0. Give d(i). 5 Let q(x) = x**3 + 12*x**2 + 10*x - 11. Suppose 0 = -2*k - 5*u - 17, -2*u - 2*u = k + 7. Let l be q(k). Let w = l + 4. Let i(p) = -p + 2. What is i(w)? -2 Let f = 7 - 4. Let h(y) be the first derivative of y**3/3 - y**2/2 - 1. Let b(z) = 7*z**2 - 8*z - 2. Let w(u) = -b(u) + 6*h(u). Calculate w(f). -1 Let w(v) be the second derivative of v**4/12 + 5*v**3/6 + 3*v**2/2 - 17*v. Let g(a) = -6*a**2 + a. Let d be g(1). Determine w(d). 3 Let l(d) be the third derivative of -d**7/5040 - d**6/180 + d**5/12 - 9*d**2. Let m(i) be the third derivative of l(i). Calculate m(-6). 2 Let y(v) = v**2. Let u(q) = q**3 + 5*q - 1. Let b(x) = -u(x) + 5*y(x). Let m(z) = -z**2 - z + 5. Let o be m(0). Let r = -1 + o. Give b(r). -3 Suppose 15*s - 3 = 12*s. Let w(z) be the third derivative of 3*z**6/40 + z**5/60 + z**4/24 - z**3/6 - 2*z**2. Determine w(s). 10 Suppose -4*m + 55 = -9*m. Let a = m + 17. Let i(t) = 2*t - 5. Give i(a). 7 Let j = -64 - -72. Let f(r) = r**2 - 6*r - 2. Give f(j). 
14 Let c = 6 + -4. Let t(k) = -13*k**2 + 11*k**c + 1 + k**3 - 2*k**3 + k. Give t(-3). 7 Let y(w) = -w + 5. Let q be -1 + (-9)/(27/12). Give y(q). 10 Suppose -5*o - 4 = 1. Let h(v) be the first derivative of 11*v**4/4 - v**2/2 - 5*v + 2. Let y(q) = q**3 - 1. Let t(l) = h(l) - 5*y(l). Give t(o). -5 Let i(h) = -2*h**2 + 7*h - 6. Suppose 0 = -5*s + 8 + 12. What is i(s)? -10 Suppose -11 = -3*m + 1. Let y(l) = 11*l**3 - 5*l**2 + 19. Let g(t) = 5*t**3 - 2*t**2 + 9. Let d(s) = -13*g(s) + 6*y(s). What is d(m)? -3 Let x(k) = -k**2 + 3*k + 3. Suppose -2*r + 1 = -5. Let z(n) = n. Let o(a) = r*z(a) - x(a). What is o(3)? 6 Let o(g) = -g**2 + g - 2. Let j be o(0). Let s be (-5 - j) + 2 + 4. Let x(n) = 7*n + 1. Let u(m) = -20*m - 2. Let p(f) = 4*u(f) + 11*x(f). Give p(s). -6 Let j = 1 - -1. Let c(a) be the first derivative of 3/2*a**2 + 1/3*a**3 - 1 - 2*a. Determine c(j). 8 Let r(x) = 15*x**3 - x**2 + 1. Let w be (-8)/(-16)*1*2. Give r(w). 15 Let c = -23 - -29. Suppose -h + 16 = -c*h + k, 2*h - 8 = 4*k. Let u(t) = -2*t + 4. What is u(h)? 12 Let z(l) = -2*l + 2. Suppose 0 = -3*v - 4*s + 5*s + 15, -s + 20 = 4*v. Suppose v - 4 = -h. Let w(m) = -3*m**3 + m + 1. Let x be w(h). Calculate z(x). -4 Let c(u) = u**2 + 2*u + 1. Suppose -4*b + 5*y = 20, 0*b - 4*b = 3*y - 12. Let h be (b + 6)/((-36)/24). Determine c(h). 9 Let x = -11 + 8. Let s = x - -2. Let h(o) = -2*o**2 + o. Determine h(s). -3 Let n(y) be the third derivative of y**5/60 - y**4/24 - 2*y**2. Let d be n(-1). Let t be d/6*0/(-1). Let h(u) = -u**2 + u - 1. What is h(t)? -1 Let v(i) = 6*i + 6. Let k(f) = 3*f + 5 + 7*f - 5*f. Let y(r) = -5*k(r) + 4*v(r). Give y(4). -5 Let h(t) = t**3 - t**2 - t + 6. Suppose 10 = i - 5*q, -i + 3*i = -3*q - 6. Give h(i). 6 Let g(s) = 2*s**2 - 4*s - 1. Let h be 8 + -4 + -1*1. Let t be g(h). Suppose 3 = t*w - 4*w. Let l(z) = -2*z - 2. What is l(w)? -8 Let h(l) = 4*l**2 + 9*l + 4. Let p(a) = -5*a**2 - 10*a - 4. Let s(q) = -6*h(q) - 5*p(q). Calculate s(5). 1 Let w = 8 + -8. Suppose w = 3*a + 6 - 0. 
Let o(y) = 3*y - 1. What is o(a)? -7 Let l(j) = -j + 2. Suppose -6*i = -i - u + 13, -2*i = 2*u + 10. Give l(i). 5 Let c(h) = 3 + 13*h**2 + 1 - h**2 - 3. Calculate c(-1). 13 Let s be (-6)/(-21) - 80/(-14). Let f = -2 + s. Let k(o) = -f*o + o**3 - 6
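The derivative-evaluation problems above can be checked mechanically. A minimal sketch in plain Python, representing each polynomial as a power-to-coefficient dict with exact arithmetic via `fractions` (the helper names are illustrative), verifying two of the answers:

```python
from fractions import Fraction

def nth_derivative(coeffs, n):
    """n-th derivative of a polynomial given as {power: coefficient}."""
    out = {}
    for p, c in coeffs.items():
        if p >= n:
            for i in range(n):
                c *= (p - i)           # repeatedly apply the power rule
            out[p - n] = c
    return out

def evaluate(coeffs, x):
    return sum(c * x**p for p, c in coeffs.items())

# "Let o(x) be the third derivative of x**5/4 + x**4/24 + 11*x**2. Calculate o(-1)."
o = nth_derivative({5: Fraction(1, 4), 4: Fraction(1, 24), 2: 11}, 3)
print(evaluate(o, -1))   # 14

# "Let w(f) be the first derivative of -1/3*f**3 + 0*f - 9 + 1/2*f**2. Give w(3)."
w = nth_derivative({3: Fraction(-1, 3), 2: Fraction(1, 2), 0: -9}, 1)
print(evaluate(w, 3))    # -6
```

Both values match the answers recorded in the problem set.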
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <xliff xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:oasis:names:tc:xliff:document:1.2" version="1.2" xsi:schemaLocation="urn:oasis:names:tc:xliff:document:1.2 http://docs.oasis-open.org/xliff/v1.2/os/xliff-core-1.2-strict.xsd"> <file original="AdblockPlusSafari/Base.lproj/Main.storyboard" source-language="en_US" datatype="plaintext" target-language="zh_Hans"> <header> <tool tool-id="com.apple.dt.xcode" tool-name="Xcode" tool-version="7.1" build-num="7B91b"/> </header> <body> <trans-unit id="0Xh-gE-5Fu.text"> <source>Whitelisted Websites</source> <note>Class = "UILabel"; text = "Whitelisted Websites"; ObjectID = "0Xh-gE-5Fu";</note> <target xml:lang="zh_Hans" state="translated">白名单中的网站</target> </trans-unit> <trans-unit id="3Bc-kL-vsl.text"> <source>About</source> <note>Class = "UILabel"; text = "About"; ObjectID = "3Bc-kL-vsl";</note> <target xml:lang="zh_Hans" state="translated">关于</target> </trans-unit> <trans-unit id="3FL-nd-DlQ.text"> <source>Allow some nonintrusive ads</source> <note>Class = "UILabel"; text = "Allow some nonintrusive ads"; ObjectID = "3FL-nd-DlQ";</note> <target xml:lang="zh_Hans" state="translated">允许一些非侵入性的广告</target> </trans-unit> <trans-unit id="4Ma-gI-dEb.headerTitle"> <source>VERSION</source> <note>Class = "UITableViewSection"; headerTitle = "VERSION"; ObjectID = "4Ma-gI-dEb";</note> <target xml:lang="zh_Hans" state="translated">版本</target> </trans-unit> <trans-unit id="9TK-Zf-1G9.text"> <source>1. Go to iOS Settings → Safari → Content Blockers</source> <note>Class = "UILabel"; text = "1. Go to iOS Settings → Safari → Content Blockers"; ObjectID = "9TK-Zf-1G9";</note> <target xml:lang="zh_Hans" state="translated">1. 转到 iOS 设置 → Safari → 内容拦截器</target> </trans-unit> <trans-unit id="94r-nm-jV7.text"> <source>We'd like to encourage websites to use straightforward, nonintrusive advertising. 
That's why we've established strict guidelines to identify Acceptable Ads, which are shown under default settings. If you wish to browse ad-free, you can disable this setting at any time.</source> <note>Class = "UILabel"; text = "We'd like to encourage websites to use straightforward, nonintrusive advertising. That's why we've established strict guidelines to identify Acceptable Ads, which are shown under default settings. If you wish to browse ad-free, you can disable this setting at any time."; ObjectID = "94r-nm-jV7";</note> <target xml:lang="zh_Hans" state="translated">我们想鼓励网站使用简单、非侵入式的广告。这就是为什么我们已经建立了严格的准则来判断在默认设置下显示的可接受广告。如果你想要浏览时完全无广告,您可随时禁用此设置。</target> </trans-unit> <trans-unit id="ABG-dZ-EA2.text"> <source>Acceptable Ads</source> <note>Class = "UILabel"; text = "Acceptable Ads"; ObjectID = "ABG-dZ-EA2";</note> <target xml:lang="zh_Hans" state="translated">可接受广告</target> </trans-unit> <trans-unit id="ASd-WJ-vg5.title"> <source>Adblock Plus</source> <note>Class = "UINavigationItem"; title = "Adblock Plus"; ObjectID = "ASd-WJ-vg5";</note> <target xml:lang="zh_Hans" state="translated">Adblock Plus</target> </trans-unit> <trans-unit id="CNX-VR-6h9.text"> <source>Annoying ads are always blocked, while nonintrusive ads are displayed by default. You can change this setting at any time.</source> <note>Class = "UILabel"; text = "Annoying ads are always blocked, while nonintrusive ads are displayed by default. 
You can change this setting at any time."; ObjectID = "CNX-VR-6h9";</note> <target xml:lang="zh_Hans" state="translated">自动过滤恼人的侵入性广告,但对于非侵入性广告默认保留。您可以随时更改此设置。</target> </trans-unit> <trans-unit id="EiF-J0-RYy.title"> <source>Whitelisted Websites</source> <note>Class = "UINavigationItem"; title = "Whitelisted Websites"; ObjectID = "EiF-J0-RYy";</note> <target xml:lang="zh_Hans" state="translated">白名单中的网站</target> </trans-unit> <trans-unit id="Hto-9e-Mjf.title"> <source>About</source> <note>Class = "UINavigationItem"; title = "About"; ObjectID = "Hto-9e-Mjf";</note> <target xml:lang="zh_Hans" state="translated">关于</target> </trans-unit> <trans-unit id="JGs-ZZ-lI9.headerTitle"> <source>MORE</source> <note>Class = "UITableViewSection"; headerTitle = "MORE"; ObjectID = "JGs-ZZ-lI9";</note> <target xml:lang="zh_Hans" state="translated">更多</target> </trans-unit> <trans-unit id="KLy-wT-2fz.text"> <source>Safari Configuration</source> <note>Class = "UILabel"; text = "Safari Configuration"; ObjectID = "KLy-wT-2fz";</note> <target xml:lang="zh_Hans" state="translated">Safari 配置</target> </trans-unit> <trans-unit id="PnU-9J-ZXq.text"> <source>Update Filter lists</source> <note>Class = "UILabel"; text = "Update Filter lists"; ObjectID = "PnU-9J-ZXq";</note> <target xml:lang="zh_Hans" state="translated">更新过滤规则列表</target> </trans-unit> <trans-unit id="RRQ-SL-6ZV.text"> <source>You're in control</source> <note>Class = "UILabel"; text = "You're in control"; ObjectID = "RRQ-SL-6ZV";</note> <target xml:lang="zh_Hans" state="translated">一切在你的控制之下</target> </trans-unit> <trans-unit id="TUG-U9-6x7.normalTitle"> <source>Download on the App Store</source> <note>Class = "UIButton"; normalTitle = "Download on the App Store"; ObjectID = "TUG-U9-6x7";</note> <target xml:lang="zh_Hans" state="translated">在 App Store 下载</target> </trans-unit> <trans-unit id="Thw-XS-bUp.text"> <source>2. Enable Adblock Plus and wait for 10 seconds</source> <note>Class = "UILabel"; text = "2. 
Enable Adblock Plus and wait for 10 seconds"; ObjectID = "Thw-XS-bUp";</note> <target xml:lang="zh_Hans" state="translated">2. 启用 Adblock Plus 并等待10秒</target> </trans-unit> <trans-unit id="VES-kP-JDg.text"> <source>No websites whitelisted</source> <note>Class = "UILabel"; text = "No websites whitelisted"; ObjectID = "VES-kP-JDg";</note> <target xml:lang="zh_Hans" state="translated">没有网站白名单</target> </trans-unit> <trans-unit id="VJZ-V3-ohd.text"> <source>To get a better ad blocking experience, use Adblock *Browser* for iOS</source> <note>Class = "UILabel"; text = "To get a better ad blocking experience, use Adblock (unbreakable space character)*Browser* for iOS"; ObjectID = "VJZ-V3-ohd";</note> <target xml:lang="zh_Hans" state="translated">获得更好的广告屏蔽体验,使用 Adblock *浏览器* iOS</target> <alt-trans> <target xml:lang="zh_Hans">获得更好的广告屏蔽体验,使用 Adblock *浏览器* for iOS</target> </alt-trans> </trans-unit> <trans-unit id="Vy3-8g-AOR.normalTitle"> <source>Got it</source> <note>Class = "UIButton"; normalTitle = "Got it"; ObjectID = "Vy3-8g-AOR";</note> <target xml:lang="zh_Hans" state="translated">知道了</target> </trans-unit> <trans-unit id="euz-Uy-cr5.text"> <source>Tap the Settings icon → Acceptable Ads</source> <note>Class = "UILabel"; text = "Tap the Settings icon → Acceptable Ads"; ObjectID = "euz-Uy-cr5";</note> <target xml:lang="zh_Hans" state="translated">轻按“设置”图标 → 可接受广告</target> </trans-unit> <trans-unit id="hBD-xm-gdv.text"> <source>Acceptable Ads</source> <note>Class = "UILabel"; text = "Acceptable Ads"; ObjectID = "hBD-xm-gdv";</note> <target xml:lang="zh_Hans" state="translated">可接受广告</target> </trans-unit> <trans-unit id="iRE-JE-Bfh.title"> <source>Acceptable Ads</source> <note>Class = "UINavigationItem"; title = "Acceptable Ads"; ObjectID = "iRE-JE-Bfh";</note> <target xml:lang="zh_Hans" state="translated">可接受广告</target> </trans-unit> <trans-unit id="jgu-sr-NP6.text"> <source>Adblock Plus won't work if you don't configure your Safari settings. 
Don't worry, we'll show you how.</source> <note>Class = "UILabel"; text = "Adblock Plus won't work if you don't configure your Safari settings. Don't worry, we'll show you how."; ObjectID = "jgu-sr-NP6";</note> <target xml:lang="zh_Hans" state="translated">如果您不配置您的 Safari 设置,Adblock Plus 不会有效。别担心,我们会向您展示如何进行。</target> </trans-unit> <trans-unit id="kFT-cY-IYR.text"> <source>To enable ad blocking in Safari</source> <note>Class = "UILabel"; text = "To enable ad blocking in Safari"; ObjectID = "kFT-cY-IYR";</note> <target xml:lang="zh_Hans" state="translated">要启用 Safari 中的广告拦截</target> </trans-unit> <trans-unit id="md9-Cg-TU4.text"> <source>Adblock Plus</source> <note>Class = "UILabel"; text = "Adblock Plus"; ObjectID = "md9-Cg-TU4";</note> <target xml:lang="zh_Hans" state="translated">Adblock Plus</target> </trans-unit> <trans-unit id="oLj-ed-HW7.normalTitle"> <source>Configure Safari</source> <note>Class = "UIButton"; normalTitle = "Configure Safari"; ObjectID = "oLj-ed-HW7";</note> <target xml:lang="zh_Hans" state="translated">配置 Safari</target> </trans-unit> <trans-unit id="qtJ-F2-gSw.title"> <source>Adblock Plus</source> <note>Class = "UINavigationItem"; title = "Adblock Plus"; ObjectID = "qtJ-F2-gSw";</note> <target xml:lang="zh_Hans" state="translated">Adblock Plus</target> </trans-unit> <trans-unit id="w2u-Tw-F3D.headerTitle"> <source>EXCEPTIONS</source> <note>Class = "UITableViewSection"; headerTitle = "EXCEPTIONS"; ObjectID = "w2u-Tw-F3D";</note> <target xml:lang="zh_Hans" state="translated">例外</target> </trans-unit> <trans-unit id="z6o-iS-GOv.footerTitle"> <source>Last filter list update: %@</source> <note>Class = "UITableViewSection"; footerTitle = "Last filter list update: %@"; ObjectID = "z6o-iS-GOv";</note> <target xml:lang="zh_Hans" state="translated">上次更新过滤规则列表:%@</target> </trans-unit> </body> </file> <file original="AdblockPlusSafari/Info.plist" source-language="en_US" datatype="plaintext" target-language="zh_Hans"> <header> <tool 
tool-id="com.apple.dt.xcode" tool-name="Xcode" tool-version="7.1" build-num="7B91b"/> </header> <body> <trans-unit id="CFBundleDisplayName"> <source>Adblock Plus</source> <target xml:lang="zh_Hans" state="translated">Adblock Plus</target> </trans-unit> </body> </file> <file original="AdblockPlusSafari/Localizable.strings" source-language="en_US" datatype="plaintext" target-language="zh_Hans"> <header> <tool tool-id="com.apple.dt.xcode" tool-name="Xcode" tool-version="7.1" build-num="7B91b"/> </header> <body> <trans-unit id="ADD WEBSITE TO WHITELIST"> <source>ADD WEBSITE TO WHITELIST</source> <note>Whitelisted Websites Controller</note> <target xml:lang="zh_Hans" state="translated">添加网站到白名单</target> </trans-unit> <trans-unit id="Failed to update filter lists. Please try again later."> <source>Failed to update filter lists. Please try again later.</source> <note>Message of filter update failure dialog</note> <target xml:lang="zh_Hans" state="translated">更新过滤规则列表失败。请稍后再试。</target> </trans-unit> <trans-unit id="YOUR WHITELIST"> <source>YOUR WHITELIST</source> <note>Whitelisted Websites Controller</note> <target xml:lang="zh_Hans" state="translated">您的白名单</target> </trans-unit> <trans-unit id="Filter list update failed"> <source>Filter list update failed</source> <note>Title of filter update failure dialog</note> <target xml:lang="zh_Hans" state="translated">过滤规则列表更新失败</target> </trans-unit> </body> </file> <file original="AdblockPlusSafariExtension/Info.plist" source-language="en_US" datatype="plaintext" target-language="zh_Hans"> <header> <tool tool-id="com.apple.dt.xcode" tool-name="Xcode" tool-version="7.1" build-num="7B91b"/> </header> <body> <trans-unit id="CFBundleDisplayName"> <source>Adblock Plus</source> <target xml:lang="zh_Hans" state="translated">Adblock Plus</target> </trans-unit> </body> </file> </xliff>
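An XLIFF 1.2 file like the one above can be processed programmatically. A minimal sketch using Python's standard `xml.etree` (the `extract_units` helper and the short sample document are illustrative assumptions, not part of the Adblock Plus project) that pulls the `id`/`source`/`target` triples out of each `trans-unit`:

```python
import xml.etree.ElementTree as ET

# XLIFF 1.2 elements live in this namespace, so every find needs a prefix map.
NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def extract_units(xliff_text):
    """Return (id, source, target) triples from an XLIFF 1.2 document."""
    root = ET.fromstring(xliff_text)
    units = []
    for tu in root.iterfind(".//x:trans-unit", NS):
        src = tu.find("x:source", NS)
        tgt = tu.find("x:target", NS)
        units.append((tu.get("id"),
                      src.text if src is not None else None,
                      tgt.text if tgt is not None else None))
    return units

sample = """<xliff xmlns="urn:oasis:names:tc:xliff:document:1.2" version="1.2">
  <file original="Localizable.strings" source-language="en_US" target-language="zh_Hans" datatype="plaintext">
    <body>
      <trans-unit id="About">
        <source>About</source>
        <target state="translated">Info</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

print(extract_units(sample))  # [('About', 'About', 'Info')]
```

The same pattern scales to the full storyboard file: each `trans-unit` keeps its Xcode object ID, so extracted pairs can be matched back to the UI element they translate.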
A volume holographic recording system is known as a digital information recording system employing the principle of hologram. The feature of the system is that an information signal is recorded in a recording medium as a change of a refractive index. The recording medium is made from a photo-refractive material such as single-crystal lithium niobate. As one of the conventional holographic recording and reproducing methods, a method is known which performs recording and reproduction by using Fourier transform. As shown in FIG. 1, in a conventional 4f-system holographic recording and reproducing apparatus, laser light 12 emitted from a laser light source 11 is split into a signal light 12A and a recording reference light 12B by a beam splitter 13. The signal light 12A passes through a beam expander 14 where the beam diameter thereof is expanded, and is then incident as collimated light on a spatial light modulator (SLM) 15 such as a transmission-type TFT liquid crystal device (LCD) panel. The spatial light modulator (SLM) 15 receives recording data that has been converted into an electric signal by an encoder 25, and forms a dot pattern of bright and dark dots on a plane. When being transmitted through the spatial light modulator (SLM) 15, the signal light 12A is optically modulated so as to include a data signal component. The signal light 12A including the dot pattern signal component passes through a Fourier transform lens 16 arranged away from the spatial light modulator (SLM) 15 by a focal distance f of the lens 16, thereby the dot pattern signal component is transformed by Fourier transform. Thus, the signal light 12A is converged on a position in a recording medium 5. 
On the other hand, the recording reference light 12B obtained by splitting by the beam splitter 13 is directed to the inside of the recording medium 5 by a mirror 18 and a rotatable mirror 19, and intersects with the optical path of the signal light 12A in the recording medium 5 so as to form an optical interference pattern. The entire optical interference pattern is recorded in the recording medium 5 as a change of a refractive index. In the above-mentioned manner, diffracted light from image data that has been illuminated with coherent collimated light is converged by the Fourier transform lens, thereby the image data is transformed into a distribution on a focal plane of the Fourier transform lens, i.e., a Fourier plane. The distribution obtained as a result of Fourier transform is made to interfere with a coherent reference light, so that interference fringes thus generated are recorded in the recording medium placed in the vicinity of the focal point of the Fourier transform lens. When recording of data of one page (hereinafter, simply referred to as a “page”) is finished, the rotatable mirror 19 is rotated by a predetermined amount and is translated by a predetermined distance, thereby changing an angle of incidence of the recording reference light 12B with respect to the recording medium 5. Then, data of the next page is recorded in a similar manner. By performing sequential recording in the above-described manner, angular multiplex recording is performed. In reproduction, on the other hand, a dot pattern image is reproduced by performing inverse Fourier transform. In data reproduction, as shown in FIG. 1, the optical path of the signal light 12A is blocked by the spatial light modulator (SLM) 15, for example, thereby allowing only the reference light 12B to be incident on the recording medium 5. 
In reproduction, the position and angle of the rotatable mirror 19 are changed and controlled by a combination of rotation and translation so as to make the angle of incidence of the reference light the same as that of the recording reference light when a page to be reproduced was recorded. On the opposite side of the recording medium 5 on which the reference light 12B is incident, reproduction light emerges that reproduces the optical interference pattern that has been recorded. The reproduction light is directed to an inverse Fourier transform lens 16A arranged away from the recording medium 5 by a focal distance f of the lens 16A, where the reproduction light is subjected to inverse Fourier transform. Thus, a dot pattern signal can be reconstructed. Moreover, the dot pattern signal is received by a photodetector 20 such as a charge-coupled device (CCD), arranged at a position away from the lens 16A by the focal distance of the lens 16A, and is then converted into an electric digital data signal again. Then, the electric digital data signal is sent to a decoder 26, thereby original data is reproduced. As described above, in order to record information in a certain volume in a recording medium with high density, the recording was conventionally performed for that volume of several cubic millimeters in a multiplexing manner using angular multiplexing or wavelength multiplexing. In such a recording or reproducing operation, signal light and/or reference light had to be fixed at a predetermined recording or reproducing position in the recording medium for a predetermined time period in accordance with the sensitivity of the recording medium and the photodetector. Thus, in recording of data, a position of interference between the signal light and the reference light was adjusted to the predetermined recording position in the recording medium and the recording of the data was then performed, while the recording medium was fixed.
Subsequently, the position of the interference was moved and then next data was recorded. In reproduction, a position illuminated with the reference light was adjusted to the recording position at which the data was recorded and reproduction was then performed, while the recording medium was fixed. After the reproduction from that recording position was finished, the illuminated position was moved and then next data was reproduced. Thus, the conventional technique had a problem that it was difficult to perform high-density recording and reproduction at a high speed. Moreover, there was another problem that in order to control a light beam in recording and reproduction, a high-precision paging control mechanism was required. This was disadvantageous to the size reduction of the system. The present invention was made in view of the above, and the problems mentioned above are exemplary problems to be solved by the present invention. In other words, it is an object of the present invention to provide a recording and/or reproducing apparatus and a recording and/or reproducing method that can avoid a limitation on a recording or reproducing speed so as to enable high-speed and high-density recording and reproduction. It is another object of the present invention to provide a recording medium that can avoid the aforementioned limitation on the recording or reproducing speed so as to enable high-speed and high-density holographic recording and reproduction.
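The 4f geometry described above has a simple numerical analogue: the first lens performs a Fourier transform of the SLM dot pattern, and the inverse-transform lens undoes it at the detector plane. A toy sketch with NumPy (the 8x8 random pattern and the FFT pair are illustrative assumptions, not a model of the actual apparatus or of the interference recording step):

```python
import numpy as np

rng = np.random.default_rng(0)
# Bright/dark dot pattern formed on the spatial light modulator (SLM).
dot_pattern = (rng.random((8, 8)) > 0.5).astype(float)

# First lens: Fourier transform of the pattern onto the recording-medium plane.
fourier_plane = np.fft.fft2(dot_pattern)

# Inverse Fourier transform lens: reconstruction at the photodetector plane.
reconstructed = np.fft.ifft2(fourier_plane).real

# The detector recovers the original dot pattern (up to numerical precision).
print(np.allclose(reconstructed, dot_pattern))  # True
```

In the real system the Fourier-plane field is not stored directly; its interference pattern with the reference beam is recorded as a refractive-index change, and illumination with the reference beam regenerates the field that the inverse-transform lens then converts back into the dot pattern.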
High Frequency Response Sensor HPT901
*Range: -100 kPa~0~1 kPa...20 kPa...100 MPa
*Intrinsic frequency up to 2 MHz
*Accuracy: <0.1% FS
*CE certified
High-frequency Dynamic Pressure Transducer / Dynamic Pressure Sensor / Dynamic Pressure Transmitter - It is ideal for dynamic pressure measurement. It features good elastic mechanical characteristics, and its overall performance is better than that of piezoelectric dynamic pressure sensors.
- Profiles: -
In scientific testing and modern instrumentation, such as military engineering, chemical explosion testing, petroleum exploration and mining, petroleum well testing, materials science, mechanics, civil engineering, geomechanics, trauma medicine, and hydraulic machinery testing, fast, steeply rising pressure waves must be measured without distortion in waveform, amplitude, and effective value. The pressure sensors used must therefore have a high intrinsic frequency, a very short rise time, and a wide, flat frequency response band to ensure sufficient dynamic pressure measuring accuracy. The HPT901 high-frequency dynamic pressure transducer is designed specifically for this purpose. It adopts a German-made high-performance chip and uses micromachining technology to give the integrated silicon chip small effective dimensions and a high intrinsic frequency. It features good elastic mechanical characteristics, and its overall performance is better than that of piezoresistive dynamic pressure sensors. It is ideal for dynamic pressure measurement.
- Features of Product -
*Enclosure fully made of stainless steel, with excellent corrosion resistance
*Wide pressure measuring range
*Intrinsic frequency up to 2MHz
*Working stability
*Strong disturbance immunity
*Genuine imported sensing element with stable performance
*Small dimensions, light weight, full range of variants and high ratio of performance to cost
*Wide range of measurable media

- Applications of Product -
*Military engineering
*Chemical explosion tests
*Petroleum investigation and mining as well as petroleum well tests
*Mechanics
*Civil engineering
*Geo-mechanics
*Trauma medicine
*Hydraulic power machinery

- 2. Specifications -
Model: HPT901
Pressure Range: -1 Bar ~ 0 ~ 0.1 Bar ... 1000 Bar, optional
Frequency Range: 0~1KHz ~ 3KHz ... 2MHz, optional
Pressure Type: Gauge pressure; absolute pressure optional
Overload: 200% F.S.
Burst Pressure: 300% F.S.

The following parameters are given for the three frequency variants (H1 / H2 / H3):
Accuracy (Linearity, Hysteresis, Repeatability): ≤±0.4%F.S. / ≤±0.25%F.S. / ≤±0.1%F.S.
Intrinsic frequency: 1MHz~2MHz / 500KHz~1MHz / 150KHz~700KHz
Bandwidth: 0~200KHz / 0~20KHz / 0~1KHz~3KHz
Rising time: 0~1µS / 0~12µS / 0~0.2mS (~75µS)
Electronic Wiring: 2-wire / 3-wire
Output: 4~20mA / 0~5V / 1~5V
Power Supply: 12~36V DC

Long-term Stability (standard): 0.1%F.S. ±0.05%/Year
Working Temp: -40°C~85°C (special: -10°C~250°C)
Storage Temp: -40°C~120°C
Temp Compensation: -20°C~75°C
Compatible Media: media compatible with 316L stainless steel
Load Resistance: ≤(U-12)/0.02Ω
Insulation Resistance: >100MΩ @100V
Zero Temp. Drift: 0.03%FS/°C (≤100kPa), 0.02%FS/°C (>100kPa)
F.S. Temp. Drift: 0.03%FS/°C (≤100kPa), 0.02%FS/°C (>100kPa)
Vibration Effect: ≤±0.01%FS (X, Y, Z axes, 200Hz/g)
Material of Port: 1Cr18Ni9Ti stainless steel
Sensor Membrane: 316L stainless steel or mono-crystalline silicon
Explosion Proof: Exia II CT5
Resolution: Infinitely small (theoretical), 1/100000 (general)
Electrical Connection: DIN Hirschmann terminal box (IP65), or fixed waterproof cable (IP67, 3 meters)
Pressure Port: 1/4"NPT male; 1/2"NPT female; G1/2" or G1/4" male optional (by order)
Water Proof: IP67
Response Time: ≤1ms

- Tips for Selection -
1. The frequency output of the product is available in three variants (H1, H2 and H3). The user may determine the most suitable range of frequency output according to the working requirements, to make the selection more economical.
2. Please contact us in case of other special requirements, and clearly indicate them when placing an order.
{ "pile_set_name": "Pile-CC" }
[Medial patellofemoral ligament reconstruction with gracilis autograft for patellar instability]. We describe a technique for patellar stabilization by reconstruction of the medial patellofemoral ligament with the gracilis tendon. The tendon is anchored posteriorly on the soft tissue of the medial femoral epicondyle and anteriorly on the medial border of the patella. The plasty is completed by suture of the medial patellar wing. Inferior or medial transposition of the tibial tubercle may be associated. We have used this technique since 1995 for 145 knees with patellar instability. The small incisions have the advantages of minimally invasive surgery, particularly for the postoperative period and the cosmetic effect.
{ "pile_set_name": "PubMed Abstracts" }
Q: VLANs Across 2 Switches with a Single Trunk I'm trying to create two separate networks spanning two switches using a single connection between the switches (Netgear GS108Ts). It seems like the way to do this is to set up 2 identical VLANs on each switch and use the default switching to ferry communications for both of them between the switches. This has not been quite as simple as I envisioned... I started by connecting both switches (using port 7 on both), just to verify that the switches were functional (they appear to be). I then configured both switches as follows:

Assigned a static IP address in one network's subnet (I think this is necessary as I can't find a way to tell the switches which network to get their addresses from)
Added 2 new VLANs: Internal (VLAN ID 4) and External (VLAN ID 5)
Set port 1 to "Tagging" for the External VLAN
Set ports 2-6 to "Tagging" for the Internal VLAN
Changed ports 7-8 from "Untagging" to "Autodetect" for the Default VLAN
Set PVID to 5 and "Ingress Filtering" to "Enable" for port 1
Set PVID to 4 and "Ingress Filtering" to "Enable" for ports 2-6

At this point, I have not tested the External VLAN, but devices on the Internal VLAN have lost access to each other (even to other devices on the same switch). I suspect that I've misunderstood either "Tagging/Untagging" or "Ingress Filtering", but I'm not sure which, or what to do about it. We can assume that none of the devices attached to the switches support VLANs, so the switches need to manage all of the tagging. Also, each of the VLANs will have its own router, which will provide DHCP. Any suggestions on how to configure this properly would be appreciated.

A: I am not familiar with the particular Netgear switch, but I know exactly how VLAN tags work. You need the ports between the switches to be tagged for both VLANs, and all other ports untagged. If I understood your setup correctly, the following is what you need:

Port 7 should belong to, and be tagged for, both VLANs 4 and 5.
Port 1 should simply belong to VLAN 5.
Ports 2~6 should simply belong to VLAN 4.

You shouldn't have to change any other default settings. Also, a few things aren't clear: Do you wish to aggregate ports 7 and 8? If so, the trunked interface should be added to, and tagged for, both VLANs 4 and 5. I would be very surprised if you weren't able to choose which interface to perform DHCP requests on. When you enable DHCP on the switch, do you not have to choose a VLAN to enable it on?
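The membership/tagging/PVID interplay described above can be sketched in a few lines of Python. This is not Netgear GS108T configuration syntax; the `Switch` class and its method names are hypothetical, written only to illustrate why the inter-switch port must be a tagged member of both VLANs while edge ports stay untagged with a matching PVID. Port numbers mirror the question: port 1 = External (VLAN 5), ports 2-6 = Internal (VLAN 4), port 7 = inter-switch link.

```python
# A minimal model of 802.1Q port-based VLAN forwarding (hypothetical API,
# for illustration only -- not vendor CLI).

class Switch:
    def __init__(self):
        self.members = {}  # port -> VLAN IDs the port belongs to
        self.tagged = {}   # port -> VLAN IDs transmitted tagged on that port
        self.pvid = {}     # port -> VLAN assigned to untagged ingress frames

    def add_port(self, port, vlans, tagged=(), pvid=1):
        self.members[port] = set(vlans)
        self.tagged[port] = set(tagged)
        self.pvid[port] = pvid

    def classify(self, port, tag=None):
        """VLAN assigned to an incoming frame; None means it is dropped
        (ingress filtering: the port is not a member of that VLAN)."""
        vlan = tag if tag is not None else self.pvid[port]
        return vlan if vlan in self.members[port] else None

    def egress(self, in_port, vlan):
        """Ports an unknown-destination frame floods to, mapped to the
        tag it carries on each (None = sent untagged)."""
        return {p: (vlan if vlan in self.tagged[p] else None)
                for p, vlans in self.members.items()
                if p != in_port and vlan in vlans}

sw = Switch()
sw.add_port(1, vlans={5}, pvid=5)            # External edge port, untagged
for p in range(2, 7):
    sw.add_port(p, vlans={4}, pvid=4)        # Internal edge ports, untagged
sw.add_port(7, vlans={4, 5}, tagged={4, 5})  # trunk: tagged member of both

vlan = sw.classify(port=2)        # untagged host frame lands in VLAN 4
print(vlan)                       # 4
print(sw.egress(2, vlan))         # ports 3-6 untagged, port 7 tagged as 4
print(sw.egress(1, sw.classify(port=1)))  # VLAN 5 only reaches the trunk
```

On the trunk, VLAN 4 and VLAN 5 frames stay separate because each carries its tag, and the peer switch classifies them by tag rather than by PVID. The reported symptom (Internal hosts losing each other) is consistent with ports 2-6 being set to "Tagging": hosts that don't speak 802.1Q typically drop tagged frames, so edge ports should be untagged members of their VLAN.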
{ "pile_set_name": "StackExchange" }
Montreal Canadiens Pennant Flag Show off your Canadiens pride with a Montreal Canadiens Pennant Flag! Mount this officially licensed Montreal Canadiens pennant in your home or office to show support for your team. A throwback to classic wool pennants, this premium-quality felt pennant proudly displays the team logo and colors. The durable construction allows you to roll it up and store it during the off-season.
{ "pile_set_name": "Pile-CC" }
Psoriasis is an inherited, chronic, proliferative disease characterized by epidermal hyperplasia and inflammation. The disease is characterized by the presence of psoriatic lesions that appear as erythematous, circumscribed plaques covered by loosely adherent silvery scales. The lesions may become extensive and may involve any area of the body, although lesions most often appear on the elbows, knees and scalp. Although the psoriatic plaques usually remain localized, in some cases the disease is sufficiently widespread as to be incapacitating and may be prolonged and unpredictable. Although many treatments for psoriasis provide limited relief, a common treatment for psoriasis involves topical application of corticosteroids, sometimes with occlusive dressings. However, the benefits of such treatments must be balanced against their adverse effects which can include skin atrophy with telangiectasia and striae formation as well as adrenal suppression. An extremely effective treatment for moderate to severe psoriasis involves topical application of a compound called anthralin (1, 8-dihydroxyanthrone). Beneficially, anthralin has no substantiated systemic toxic or carcinogenic effects and provides a more prolonged remission than is usually evidenced by topical corticosteroids. Anthralin is usually incorporated into cream or ointment dosage forms for skin application at concentrations ranging from 0.1 percent to about 8 percent. The therapeutic effects of anthralin appear to require only a short contact with the psoriatic lesions and may be applied daily for periods of 15 to 60 minutes and then washed off with soap and water. Unfortunately, anthralin is not widely used because of its ability to irritate the perilesional skin and its tendency to stain the hair, skin and nails of the patient, as well as any fabrics or bathroom fixtures that come in contact with the anthralin composition. 
The resultant staining leaves a deep violaceous color and these side effects have reduced the acceptance of this treatment. Accordingly, it is an object of the present invention to provide a lenitive composition for application to the skin of an anthralin-treated patient for reducing anthralin-induced inflammation and the staining associated with anthralin use. Included in this object is the provision for a composition for reducing the staining of skin, hair and nails and the skin irritation associated with anthralin treatment of psoriasis. A further advantage of the composition and method of the present invention is the provision for reducing or preventing the staining of fabrics, particularly pajamas and bedclothes, as well as bathroom fixtures as a result of anthralin treatment of psoriasis. A further object of the present invention is to provide a lenitive composition of the type described that can be applied as a single phase aqueous treating solution that can be easily applied and removed from the affected areas of the skin. It has been reported by Ramsay et al. that anthralin-induced inflammation may be reduced by the application of certain organic amines. J. Am. Acad. Dermatol. 1990, Vol. 22, pages 765-772 and Vol. 23, pages 73-76. In those publications, it is noted that while anthralin therapy is effective even with short contact, inflammation of the perilesional skin is a problem. Accordingly, dilute potassium hydroxide has been used to deactivate the anthralin and has led to a reduction in anthralin-induced inflammation, but not in its therapeutic effect. Because potassium hydroxide is also an irritant, the publications evaluated the use of certain organic amines, namely alkylamines and alkanolamines, for a similar inhibitory effect on anthralin-induced inflammation. The amines were used as solutions dissolved in dichloromethane. However, dichloromethane is used in cleaning fluids and is known to be toxic to humans, acting as a narcotic in high concentrations. 
Additionally, triethanolamine was used in an emulsion at an amine concentration of 10 percent by weight. However, the residual effect of the emulsion on the skin of the patients hampers the therapeutic action of subsequent anthralin treatments. Accordingly, it is another object of the present invention to provide a new and improved lenitive composition containing the effective organic amines and a film former in an aqueous non-toxic and dermatologically acceptable carrier. Other objects, features and advantages will be in part obvious and in part pointed out more in detail hereinafter. These and related objects are achieved in accordance with the present invention by providing a lenitive composition for application to the skin of an anthralin-treated patient for reducing anthralin-induced inflammation and the staining associated with anthralin use comprising a single phase aqueous treating solution comprising an organic amine and a film-forming agent dissolved within a nontoxic dermatologically acceptable carrier. The organic amine is selected from the group consisting of lower alkyl and lower alkanol primary, secondary and tertiary amines and comprises about 1-25 percent by weight of the single phase aqueous treating solution. The solution is topically applied to the anthralin-treated areas so as to completely cover the treated areas. The solution is applied after anthralin wash off and is permitted to dry, but can also be applied before wash off as well, preferably in the form of a fine spray or mist. 
A better understanding of the objects and advantages of the invention will be obtained from the following detailed description which sets forth an illustrative embodiment and is indicative of the way in which the principles of the invention are employed, including the several steps of the method and the relation of one or more of such steps with respect to each of the others and the composition possessing the features, characteristics, properties and relation of elements described and exemplified herein.
{ "pile_set_name": "USPTO Backgrounds" }
Melissa About me Im a married latin, very petite thin female from Aventura. I love my husband but i need a female to be my serious girlfriend...maybe more? I need a girl that can wow me in person with her personality and in the bedroom. I am very involved with BDSM/fetish life...you must be also or else we cannot click. Vanilla regular friendship in public, but in the bedroom i want to be your slut. You must get along with my husband...but i want to be discreet for the very beginning at least. I love leather skirts, pants, latex clothing, dressing up for fun when i am home. At least that's what's on Wikipedia when you google his name. On color plate 17 in the book A Dictionary of Color (see reference below), the color harlequin is shown as being a highly saturated rich color at a position halfway between chartreuse and green. He found that a mixture of Prussian Blue and Gamboge which was a pigment made from the bark of a tree from Cambodia, make the exact sort of dark rich green that he needed. After the development of chemical dyes permitted the adoption of the stable shade of rifle green now worn. SGBUS green is the color voted by the public and used by Singapore to colour all its government-owned public buses. It comes from the inorganic compound copper II acetoarsenite and was once a popular pigment in artists' paints. Dartmouth green is the official color of Dartmouth College. Green Pantone is the color that is called green in Pantone. Castleton green is one of the two official colors of Castleton University in Vermont. Deep Brunswick green is commonly recognized as part of the British racing green spectrum, the national auto racing color of the United Kingdom. Kelly green is an American term. 
The first recorded use of fern green as a color name in English was in The use of the term as a color name occurred at least as far back as It is an official Crayola color since Persian green is a color used in Persian pottery and Persian carpets in Iran. Reseda green is a shade of greyish green in the classic range of colours of the German RAL colour standard.
{ "pile_set_name": "Pile-CC" }
A different kind of garage sale Just can't find the time to dedicate a Saturday to holding a traditional garage sale, so doing it as an ongoing thing. Come and browse, buy what you want, or buy the whole lot.

Some of the things we have to sell:
Christmas decorations, including huge collection of C9 strings and bulbs (Red, Clear, and Blue)
Miscellaneous office furniture and accessories
Cherry dresser
La...
{ "pile_set_name": "Pile-CC" }
/* * Project: Breadbox Home Automation * File: dialog.ui * * Author: David Hunter * * This file contains the user Interface description for the * dialog used by the driver. The resource must be a template * resource to work with UserCreateDialog. * */ /* ---------------------------------------------------------------------------- Include files -----------------------------------------------------------------------------*/ /* first, read pre-defined class definitions. See the file * /staff/pcgeos/Include/generic.uih. */ #include "generic.uih" start DialogResource; /* ---------------------------------------------------------------------------- Changing Port Dialog -----------------------------------------------------------------------------*/ ChangePortDialog = GenInteraction { moniker = "Changing Port..."; children = TestingStatusGlyph, TestingInterfaceGlyph; visibility = dialog; attributes = default +modal, +notUserInitiatable; genStates = default -usable; hints = { HINT_ORIENT_CHILDREN_HORIZONTALLY, HINT_CENTER_CHILDREN_HORIZONTALLY } } TestingStatusGlyph = GenGlyph { moniker = TestingText; } TestingInterfaceGlyph = GenGlyph { hints = { HINT_FIXED_SIZE { SpecWidth <SST_AVG_CHAR_WIDTHS, 30> SpecHeight <> } } } visMoniker DirectText = "TW523 direct interface"; visMoniker SerialHD11Text = "IBM HD-11A serial interface"; visMoniker TestingText = "Testing for:"; visMoniker FoundText = "Found:"; end DialogResource;
{ "pile_set_name": "Github" }
Roylea cinerea (D.Don) Baillon: Ethnomedicinal uses, phytochemistry and pharmacology: A review. ROYLEA CINEREA (D.DON) BAILLON: Roylea cinerea (D.Don) Baillon, family Lamiaceae, is a shrub of a monotypic genus. Aerial parts of the plant are used traditionally in the Indian sub-Himalayas and Nepal for the treatment of jaundice, skin diseases, malaria, diabetes and contusions, and as a febrifuge. This article reviews the botanical description, phytochemistry, ethnomedicinal uses and pharmacological activities of R. cinerea to evaluate if the scientifically evaluated pharmacological profile of the plant can corroborate ethnomedicinal uses. A survey was conducted to document ethnomedicinal and folklore uses of the plant in five districts of Himachal Pradesh, India. Phytochemical studies of R. cinerea reveal the presence of glycosides, diterpenes, flavonoids, tannins, steroids, saponins and phenols. Extracts of R. cinerea and compounds isolated from them showed anticancer, antifungal, hepatoprotective, antiperiodic, antiprotozoal, antidiabetic and antioxidant activities on scientific evaluation. A diterpenoid from the plant, precalyone, exhibited antiproliferative activity against the P-388 lymphocytic leukemia cell line. Cinereanoid D, a labdane diterpenoid that inhibits ATP binding of heat shock protein Hsp90, is a potential anticancer lead. Two compounds from aerial parts of the plant, 4-methoxybenzo[b]azet-2(1H)-one and 3β-hydroxy-35-(cyclohexyl-5'-propan-7'-one)-33-ethyl-34-methylbacteriohop-16-ene, showed antidiabetic activity. Thus, the scientific reports confirm the ethnomedicinal use of this plant in diabetes, malaria and liver diseases. Roylea cinerea is a traditionally used medicinal plant from the Western Himalayas. The pharmacological evaluation confirmed the ethnomedically claimed antidiabetic activity using scientifically accepted protocols and controls, although some of the studies require reconfirmation. 
The bioactivity-guided fractionation attributes the activity to 4-methoxybenzo[b]azet-2(1H)-one and 3β-hydroxy-35-(cyclohexyl-5'-propan-7'-one)-33-ethyl-34-methylbacteriohop-16-ene. Further, cinereanoid D is a potential lead for targeting Hsp90 and its medicinal chemistry studies can lead to a potent anticancer compound. The plant extract also showed antimalarial and hepatoprotective activities. Some of the studies discussed in this review require reconfirmation, as the protocols lacked proper positive and negative controls. Thus, the review of the scientific reports on Roylea cinerea supports ethnomedicinal use as antidiabetic, antimalarial and hepatoprotective. Further studies to prove scientific basis for use in leucorrhea, skin diseases, inflammation and strengthening of claims for liver tonic are required.
{ "pile_set_name": "PubMed Abstracts" }
Lucius Valerius Messalla Volesus Lucius Valerius Messalla Volesus was a Roman senator, who flourished under the reign of Emperor Augustus. He was consul in AD 5 with Gnaeus Cornelius Cinna Magnus as his colleague. His father, Potitus Valerius Messala, was suffect consul in 28 BC and prefect of the city of Rome. Lucius was one of the tresviri monetales, the most prestigious of the four boards that form the vigintiviri; Aulus Licinius Nerva Silianus, consul in AD 7, was one of the other two members of this board at the same time as Messalla. Because assignment to this board was usually allocated to patricians, Ronald Syme sees this as evidence that Lucius was a member of that class. Other offices Volesus held included proconsul of the Roman province of Asia. During the latter part of his career, Lucius was charged with crimes against humanity and found guilty. Although it has yet to be discovered, Augustus wrote of the fall of Lucius Valerius in his book, de Voleso Messala. References Category:Senators of the Roman Empire Category:Imperial Roman consuls Category:Roman governors of Asia Category:1st-century Romans Messalla Volesus, Lucius Category:Year of birth unknown Category:Year of death unknown
{ "pile_set_name": "Wikipedia (en)" }
Recent Entries The prior weekend, Ethan had been invited to help staff an initial mini-seminar sponsored by the same cult he was confident had helped him so much in the past. It was held several hundred miles from where Ethan lived, so he stayed in the local sponsor’s home. Sometime mid-seminar Ethan phoned Danni, upset that he’d violated his obligation to participants by sharing too much about his own personal struggles. In subsequent calls to her over the next several days, he told her that he The MLM culture I experienced included a symbiosis with self-help and prosperity cults, which seem to orbit its periphery in order to exploit a population made vulnerable by its already-suspended ability to think critically. The MLM events I attended often featured motivational speakers who encouraged listeners’ involvement in these cults. The pursuit of material wealth was central to MLM’s message; and high-level distributors pushed neophytes to read books and listen to audio programs authored The changes in Danni over the years were so insidious that I hadn’t really recognized them; but I finally realized that I no longer knew her. The sweet guileless woman who only wanted to help others was now totally consumed by her pursuit of material wealth—a pursuit that recognized no bounds. Danni’s rationalization for her ambition was that the more material wealth she possessed, the more resources she’d have to help those in need. However, her expressed motive wasn’t the issue for me. That Danni’s Danni’s flawed thinking was by no means the result of stupidity. By then, she was living almost exclusively inside the MLM culture where critical thought was disparaged as negative and counterproductive. “Believe in the products, believe in the company and believe in your upline” was an oft-repeated mantra that actively contributed to critical thought suppression. 
This was simply one more indicator that Danni’s thinking had been crippled by her exclusion of influences outside that bizarre culture. I’ll never forget one of Danni’s prospects who had contacted her seeking answers for his multiple sclerosis. Mark headed an exceptionally nice family that included an adult daughter who suffered the ravages of lupus. I went with Danni to their home with a full complement of her MLM company’s products to demonstrate their efficacy with “kinesiology”. This is a pseudo-scientific and highly subjective method of testing the effect of any substance in physical contact with the subject’s tissues—often
{ "pile_set_name": "Pile-CC" }
Content-Transfer-Encoding: quoted-printable Date: Tue, 01 May 2001 13:25:56 -0500 From: "Tracey Bradley" <tbradley@bracepatt.com> To: "Justin Long" <jlong@bracepatt.com> Cc: "Aryeh Fishman" <afishman@bracepatt.com>, "Andrea Settanni" <asettanni@bracepatt.com>, "Charles Ingebretson" <cingebretson@bracepatt.com>, "Charles Shoneman" <cshoneman@bracepatt.com>, "Deanna King" <dking@bracepatt.com>, "Dan Watkiss" <dwatkiss@bracepatt.com>, "Gene Godley" <ggodley@bracepatt.com>, "Kimberly Curry" <kcurry@bracepatt.com>, "Michael Pate" <mpate@bracepatt.com>, "Paul Fox" <pfox@bracepatt.com>, "Ronald Carroll" <rcarroll@bracepatt.com> Subject: Western lawmakers want Bush help with power Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Disposition: inline UPDATE 1-Western lawmakers want Bush help with power ------------------------------------------------------------------------------ -- WASHINGTON, April 30 (Reuters) - Thirty-three Democratic U.S. lawmakers from California, Washington state and Oregon wrote to Energy Secretary Spencer Abraham on Monday demanding stronger federal action to help western states suffering from an ongoing electricity shortage crisis. The lawmakers want a regionwide price cap for wholesale power prices to prevent hugely inflated prices and rolling blackouts this summer when demand intensifies due to air conditioning season. They specifically criticized the Republican-led Federal Energy Regulatory Commission (FERC) for failing to maintain fair and reasonable rates, allowing generators to profit from power prices some 10 times historical levels. "If political leadership were exercised today by the administration and FERC, California and the Pacific Northwest would already be on their way to ending the price-gouging," said Rep. Anna Eshoo, a Democrat from California. FERC last week approved a number of price mitigation measures to tame the power shortages in California, but remains against a regional price cap. 
The stance has drawn the ire of many lawmakers and dissent from FERC Commissioner William Massey, who says without such caps, the situation will not get better. Massey is outflanked on the commission by FERC Chairman Curtis Hebert, a Republican, and Linda Breathitt, a Democrat. Experts in California estimate power supplies could fall some 5,000 megawatts short this summer on a daily basis, threatening the return of rolling blackouts to cut demand. Eshoo is also the author of legislation to impose cost-of-service based rates in the Western energy market. ------------------------------------------------------------------------------ -- Copyright , 2001 Reuters Limited.
{ "pile_set_name": "Enron Emails" }
Flora's, California Flora's is a former settlement in El Dorado County, California. It was located west of Volcanoville. References Category:Former settlements in El Dorado County, California Category:Former populated places in California
{ "pile_set_name": "Wikipedia (en)" }
three letters picked without replacement from {h: 4, t: 3, v: 6, w: 2, r: 1, i: 3}. 1/969 Two letters picked without replacement from {p: 3, q: 1}. Give prob of sequence pp. 1/2 What is prob of sequence ur when two letters picked without replacement from jzdrdzddzdrzuzdu? 1/60 Four letters picked without replacement from ecececc. Give prob of sequence ceee. 1/35 What is prob of sequence jjjw when four letters picked without replacement from {j: 9, w: 11}? 77/1615 Calculate prob of sequence dq when two letters picked without replacement from qidi. 1/12 Two letters picked without replacement from cmmbmmbc. What is prob of sequence cb? 1/14 Four letters picked without replacement from {g: 3, z: 1, c: 1}. What is prob of sequence gzgg? 1/20 Three letters picked without replacement from wghgwjwhpwgw. What is prob of sequence hpg? 1/220 What is prob of sequence ir when two letters picked without replacement from {i: 1, c: 5, r: 4}? 2/45 Calculate prob of sequence ds when two letters picked without replacement from ssssdsssssdssd. 33/182 Four letters picked without replacement from {c: 2, q: 3, b: 5}. What is prob of sequence qccb? 1/168 What is prob of sequence iyi when three letters picked without replacement from ikkuibyibi? 1/60 What is prob of sequence cwc when three letters picked without replacement from lcmcvvvdvwvlcvd? 1/455 Four letters picked without replacement from {l: 7, q: 1, o: 10}. What is prob of sequence oqol? 7/816 Two letters picked without replacement from roorooororoor. What is prob of sequence ro? 10/39 Two letters picked without replacement from gpppgpgpgpgpgpggggpp. Give prob of sequence pp. 9/38 Calculate prob of sequence cc when two letters picked without replacement from tlwwlcwctclwwwwcwcw. 10/171 Calculate prob of sequence zzh when three letters picked without replacement from zzzzhzhhhz. 1/6 What is prob of sequence ej when two letters picked without replacement from ajjaeajje? 
1/9 What is prob of sequence iwu when three letters picked without replacement from {s: 1, u: 10, w: 1, i: 5}? 5/408 Three letters picked without replacement from {q: 3, u: 3, o: 2, j: 3, i: 2, z: 1}. What is prob of sequence quu? 3/364 What is prob of sequence igi when three letters picked without replacement from {y: 2, o: 3, g: 5, i: 10}? 5/76 Two letters picked without replacement from jjsjxxjxxxuxsx. Give prob of sequence xj. 2/13 What is prob of sequence bx when two letters picked without replacement from xxxxbxxxxbbxxxxxxxxb? 16/95 Calculate prob of sequence eqee when four letters picked without replacement from qeqqeeqqqe. 1/35 Four letters picked without replacement from {b: 3, d: 3, u: 1, q: 2, c: 1}. What is prob of sequence qbdc? 1/280 Calculate prob of sequence ssbi when four letters picked without replacement from sbibbsiiib. 2/315 Four letters picked without replacement from aaaaaa. Give prob of sequence aaaa. 1 What is prob of sequence an when two letters picked without replacement from ayqnrnhr? 1/28 Two letters picked without replacement from hiyssinini. What is prob of sequence iy? 2/45 What is prob of sequence ljl when three letters picked without replacement from eeeeleejeeelj? 1/429 What is prob of sequence crc when three letters picked without replacement from rworowrccrw? 4/495 What is prob of sequence rbfs when four letters picked without replacement from {f: 4, b: 5, w: 1, y: 1, r: 1, s: 3}? 1/546 Three letters picked without replacement from cgggmgypgkpygygmg. Give prob of sequence pky. 1/680 Three letters picked without replacement from {j: 7, q: 2, l: 5, h: 4}. What is prob of sequence jqq? 7/2448 Three letters picked without replacement from {h: 8, j: 1, p: 8, b: 2, s: 1}. Give prob of sequence shj. 1/855 What is prob of sequence zjj when three letters picked without replacement from jjjqzqzjjzjzzzjzjqj? 28/323 What is prob of sequence at when two letters picked without replacement from tasast? 
2/15 Two letters picked without replacement from {e: 2, l: 4, y: 3, u: 8}. What is prob of sequence ul? 2/17 Three letters picked without replacement from {p: 1, s: 2, q: 2, d: 4}. Give prob of sequence dqs. 2/63 Two letters picked without replacement from {a: 1, e: 2, s: 2}. Give prob of sequence es. 1/5 Calculate prob of sequence mj when two letters picked without replacement from {j: 4, m: 4, q: 1, s: 3, c: 1}. 4/39 Two letters picked without replacement from {b: 1, u: 1, t: 1, e: 1, c: 1, s: 2}. Give prob of sequence cu. 1/42 Calculate prob of sequence uzf when three letters picked without replacement from {z: 2, j: 1, g: 6, k: 1, u: 2, f: 7}. 14/2907 Two letters picked without replacement from zzqzzqzzqq. What is prob of sequence zq? 4/15 Calculate prob of sequence wwww when four letters picked without replacement from {w: 7}. 1 What is prob of sequence xc when two letters picked without replacement from {x: 3, w: 6, y: 3, m: 1, j: 1, c: 1}? 1/70 Calculate prob of sequence toh when three letters picked without replacement from thopptpoipvop. 1/286 What is prob of sequence qjdd when four letters picked without replacement from {v: 1, q: 4, u: 1, l: 2, d: 6, j: 1}? 1/273 What is prob of sequence kknk when four letters picked without replacement from kkkkkkkkkkknkkkkkkk? 1/19 What is prob of sequence lb when two letters picked without replacement from {l: 3, b: 5, m: 4}? 5/44 Calculate prob of sequence gl when two letters picked without replacement from uzzcclgucfu. 1/110 Calculate prob of sequence wbee when four letters picked without replacement from {b: 1, j: 3, v: 4, e: 2, w: 2, x: 2}. 1/6006 Calculate prob of sequence hdnn when four letters picked without replacement from dnhnnndnhd. 1/42 Three letters picked without replacement from xzxzyzkmxmmxzmm. Give prob of sequence xzk. 8/1365 Calculate prob of sequence db when two letters picked without replacement from ppbppdpbpbppbppbbbb. 4/171 Four letters picked without replacement from ntnnwjnjnnwtnwjnnjw. 
What is prob of sequence nnnt? 7/646 Three letters picked without replacement from {w: 3, g: 2, c: 3, p: 8}. What is prob of sequence gww? 1/280 Four letters picked without replacement from ssuusususuusuususs. Give prob of sequence usuu. 21/340 Two letters picked without replacement from {y: 4, c: 16}. Give prob of sequence yy. 3/95 Three letters picked without replacement from pqqqpppqqpqppqqppqq. What is prob of sequence pqp? 40/323 What is prob of sequence gge when three letters picked without replacement from egeeeeegeeeeeeeeeeeg? 17/1140 Calculate prob of sequence qbb when three letters picked without replacement from qmmmqvqmbimb. 1/220 Four letters picked without replacement from vvlvvllovolovollvl. Give prob of sequence ovll. 49/3060 What is prob of sequence zeh when three letters picked without replacement from {z: 1, h: 2, g: 1, c: 5, p: 2, e: 5}? 1/336 Calculate prob of sequence dka when three letters picked without replacement from {a: 3, k: 4, j: 1, g: 1, d: 5}. 5/182 Two letters picked without replacement from ooowwwo. Give prob of sequence wo. 2/7 Three letters picked without replacement from {f: 6, r: 1, z: 8}. Give prob of sequence zrz. 4/195 Three letters picked without replacement from {f: 1, k: 2}. Give prob of sequence kfk. 1/3 Two letters picked without replacement from rgrgggrrrgghrgrggn. Give prob of sequence nr. 7/306 Two letters picked without replacement from {p: 1, w: 5, x: 1, r: 2, t: 1, m: 5}. What is prob of sequence mx? 1/42 Three letters picked without replacement from eeeeedeedeuueeduueee. Give prob of sequence edd. 13/1140 Calculate prob of sequence vbd when three letters picked without replacement from {e: 4, i: 1, d: 1, b: 2, v: 2}. 1/180 Four letters picked without replacement from {o: 3, r: 4}. Give prob of sequence orro. 3/35 Two letters picked without replacement from {o: 18, p: 1}. Give prob of sequence oo. 17/19 Two letters picked without replacement from {i: 4, d: 1, j: 2, m: 7}. Give prob of sequence mi. 
2/13 Two letters picked without replacement from xfbefsexfzfe. Give prob of sequence sf. 1/33 Two letters picked without replacement from ujjuubbbjujjujuuuub. Give prob of sequence jj. 5/57 What is prob of sequence vvvg when four letters picked without replacement from vggvvvvvvvg? 7/55 Two letters picked without replace
{ "pile_set_name": "DM Mathematics" }