text
stringlengths
0
6.48M
meta
dict
If you have been caught in the legal trap of a domestic violence case then it can snatch your employment, your home and any chance of living a normal life in the future. So contact an efficient domestic violence lawyer Colorado immediately and get the best solution for your problems. For further information please visit out website
{ "pile_set_name": "Pile-CC" }
Neon Therapeutics Files for $100 Million IPO for Immuno-Oncology Pipeline Neon Therapeutics (Proposed Nasdaq: NTGN) has filed for a $100 million initial public offering to fund clinical development of its neoantigen program. Neoantigens result from mutations occurring during tumor growth, are recognized as foreign and differ from native antigens to which the immune system is tolerant. The presence of neoantigens in cancer cells and their absence in normal cells makes them compelling, untapped targets for cancer therapy. The company launched in 2015 with a $55 million Series A investment led by Third Rock Ventures with participation from Clal Biotechnology Industries and Access Industries. The $106 million Series B round in 2017 was led by Partner Fund Management and joined by Third Rock and Access, along with new investors Fidelity, Wellington, Inbio Ventures, Nextech Invest, Pharmstandard International, Arrowmark Partners, Hillhouse Capital and Casdin Capital. The IPO will be led by Morgan Stanley, BofA Merrill Lynch and Mizuho Securities. Neon is developing a pipeline of personalized neoantigen vaccines and autologous T cell therapies, with an initial focus on melanoma, non-small cell lung cancer (NSCLC) and bladder cancer. The company has collaboration and licensing agreements in place with Vedantra Pharmaceuticals, Apexigen, CRISPR Therapeutics, Bristol-Myers Squibb, Merck, the Netherlands Cancer Institute, the Broad Institute, Dana-Farber Cancer Institute and Massachusetts General Hospital. Source: Neon Therapeutics In May 2018, Neon announced that it had treated its first patient in a Phase I clinical trial evaluating its proprietary personal neoantigen vaccine, NEO-PV-01, in combination with Merck’s Keytruda (anti PD-1 therapy) and a chemotherapy regimen of pemetrexed and carboplatin in patients with untreated or advanced metastatic nonsquamous NSCLC. The trial is being conducted in collaboration with Merck. In addition to evaluating the safety, tolerability and preliminary efficacy of the combination therapy, the trial will assess neoantigen-specific immune responses in peripheral blood and tumor tissue, and other markers of immune response. Treating our first patient in this clinical study marks an important milestone for Neon. We see a strong mechanistic rationale to explore the combination and sequence of a personal neoantigen cancer vaccine, anti-PD-1 therapy and chemotherapy. These data will help us understand the potential of NEO-PV-01 to improve durability and response rates of patients treated in combination with existing immuno-oncology drugs. – Richard Gaynor, M.D., president of research and development at Neon Therapeutics.
{ "pile_set_name": "Pile-CC" }
We can't wait for the Academy Awards this Sunday! While we definitely have our own predictions on who the winners will be, regardless of who is victorious, it's always an incredible night for Hollywood A-listers and movie buffs watching at home. From kissing their awards to smooching their costars, there's no better time to remember the love in the Oscar press room from years past.
{ "pile_set_name": "Pile-CC" }
Regular Exercise and Depressive Symptoms in Community-Dwelling Elders in Northern Taiwan. According to World Health Organization, depressive disorder will be a Top 2 disease in the world by 2020. In light of Taiwan's rapidly increasing elderly population, elderly psychological health is expected to become an increasingly important issue in healthcare. This study examines the association between regular exercise and depressive symptoms in community-dwelling older adults by gender in northern Taiwan. The participants were selected using a probability-proportional-to-size procedure from community-dwelling adults who were aged 65 years or older and living in northern Taiwan. A cross-sectional study and interviews were used to collect information about their exercise behaviors, depressive symptoms, and the factors influencing the depressive symptoms. Percentage, chi-square, t test, and logistic regression were used to analyze the data. One thousand twenty elderly individuals completed the questionnaires. Among the participants with the average age of 73.5 years, 44.5% were men, and 55.5% were women. Two hundred seventeen of the participants (21.3%) had depressive symptoms. Five hundred eighty-five of the participants (57.4%) exercised regularly. The result of logistic regression showed that regular exercise was a significant predictor of depressive symptoms in elderly individuals (odds ratio = 3.54, 95% confidence interval [1.76, 7.12]). Other factors such as gender, chronicle diseases, and health status were not related to depressive symptoms. Moreover, both for male and female individuals, regular exercise was a significant predictor of depressive symptoms (odds ratio = 4.76, 95% confidence interval [1.65, 13.72] and odds ratio = 3.03, 95% confidence interval [1.18, 7.69], respectively). Other factors were not related to depressive symptoms. This study shows regular exercise to be a significant predictor of depressive symptoms in both men and women. Therefore, senior citizens should be encouragedto exercise regularly as a way to promote good mental health.
{ "pile_set_name": "PubMed Abstracts" }
Cookies on the PFS website By using and browsing the PFS website, you consent to cookies being used in accordance with our policy. If you do not consent, you are always free to disable cookies if your browser permits, although doing so may interfere with your use of some of our sites or services. Find out more » Beneficiaries' rights to trust information Barbara Gardener Bio: Recently we looked at a novel way some trust beneficiaries had sought to obtain information from the trustees, using the Data Protection Act. We also mentioned a recent case which was decided since this topic was last considered in these articles in 2015. This month we consider this case in more detail and we will also set out the key rules on what the rights of beneficiaries and corresponding obligations of trustees are in this respect. The right to see trust accounts The case in question was RNLI and others v Headley and McCole [2016] EWHC 1948 (Ch). The facts of the case Mrs Farmer died in 1996 and her Will created two life interests for adult beneficiaries – her son and daughter-in-law, with up to ten named charities benefitting in remainder. The Will nominated as executors John Headley and Kevin McCole, both solicitors at the law firm Headleys. The estate was worth about £145,000. One of the life tenants is still alive, so the charities' interests have not yet fallen into possession. Over the years the charities made numerous requests for information about the trust and a set of provisional accounts was sent to some of them in 2007, but that was the last they ever heard from the firm for many years. In 2014, five of the charities instructed a firm of solicitors to act for them and they in turn managed to contact Headleys by phone, but did not receive the information they sought. After several more requests for information the charities issued proceedings against the trustees. As it turned out, one of the trustees had previously died so the case was against the surviving co-trustee, McCole, who in the event failed to appear or present a defence, so the judge had to consider the case on the claimants’ evidence. The argument and the decision The charities argued, apparently quoting the 1998 case of Armitage v Nurse, that 'Every beneficiary is entitled to see the trust accounts, whether his interest is in possession or not.' They also quoted Schmidt v Rosewood Trust to assert that trust accounts and other documents must be disclosed to all beneficiaries on demand, save in exceptional circumstances. Needless to say nothing is quite so straightforward. We previously considered the 2003 decision by the Judicial Committee of the Privy Council in Schmidt -v- Rosewood Trust Ltd (2003) WLR565 back in 2015. Briefly, the judges in that case decided that it was within the Court’s discretion to decide whether a beneficiary with only a remote or defeasible interest had any rights to see documents at all and, if so, then what classes of document should be disclosed, either completely or in a redacted form; and what safeguards should be imposed to limit the use which might be made of the documents or information disclosed under the order of the Court. In the present case the judge did not accept the charities’ claims in their entirety, preferring the view that disclosure must depend on 'what is needed in the circumstances for the beneficiaries to appreciate, verify and if need be vindicate their own rights against the trustees in respect of the administration of the trust' which will vary according to the facts of the case. The judge noted that the charities ' remainder interest had not yet crystallised and therefore they had no basis to demand information about trust income that was payable to the life tenant. However, he agreed that the charities did have a right to see accounts of capital, lists of investments and a breakdown of trustees' fees and expenditure in as far as the trustees have deducted these sums from trust capital. Finding the trustees to be in breach of duty, the judge made an order requiring these accounts to be disclosed to the charities. He also ordered the surviving trustee to pay the claimant charities' costs, approximately £8,000, in full, without the right to claim them back from the trust estate. The decision is therefore also a warning to trustees that they may end up being personally liable for failing to provide the requested information. So where does that leave us? The implications for trustees and beneficiaries. Here are some key points to bear in mind whether you are a trustee or a beneficiary. Beneficiaries with fixed rights under a trust have more rights to information than those under discretionary trusts. Certain beneficiaries must be provided with information as of right – e.g. a life tenant about the trust income they are entitled to. A life tenant under a qualifying interest in possession trust must be told about the value of the trust estate that falls into their estate for IHT purposes. Beneficiaries with fixed or contingent but defined rights (e.g. entitlement to capital at certain age or following the death of a life tenant) have a right to know that a trust exists and what their interests are. Generally speaking the trust document and other documents appointing/retiring trustees or changing/adding assets are disclosable to a beneficiary. However, if the right of a particular individual depends solely on the trustees´ discretion, there may be no right to receive any information until the person in question becomes a “real potential” beneficiary. Generally the trust accounts are also disclosable to a beneficiary but access may be restricted depending on the nature of the beneficiary’s interest. As shown in the above case a trustee is only obliged to provide the information needed by the particular beneficiary to appreciate their own rights against the trustee in respect of the administration of the trust; it is not necessary to go beyond this. Other documents, such as the settlor’s letter of wishes in relation to the trust, documents about the exercise of powers and discretions, and legal advice obtained by the trustees, generally need not be disclosed. If trustees receive a request for information from a beneficiary, they need to consider it carefully and only refuse a request if they are certain that this is in their power, bearing in mind that a beneficiary may apply to the Court and the trustees may find themselves personally liable for any costs. If in doubt the trustees may also apply to the Court for direction. COMMENT Some further clarification of what information and which beneficiaries are entitled to it is welcome as well as the warning to trustees to take beneficiaries’ requests seriously. It has been noted in recent years that there has been an increase in litigation brought by charities; indeed some have been criticised for being too aggressive. A potential action by a remainder beneficiary, be it a charity or not, is something to consider by any person making a Will or indeed a lifetime trust. A letter of wishes from the testator/settlor will always be an extremely useful tool for the trustees to use. Another issue is the form of Will drafting where charities are to benefit, with some, supposedly simple, provisions causing all kinds of problems. This is something we will look at next month. Tagged as: This document is believed to be accurate but is not intended as a basis of knowledge upon which advice can be given. Neither the author (personal or corporate) nor the CII Group, nor any CII Local Institute, faculty or society nor any of the officers or employees of those organisations accept any responsibility for any loss occasioned to any person acting or refraining from action as a result of the data or opinions included in this material. Any opinions expressed are those of the author or authors and not necessarily those of the CII Group, its Local Institutes, faculties or societies.
{ "pile_set_name": "Pile-CC" }
Fools' Gold Found to Regulate Oxygen Jul 23, 2012 As sulfur cycles through Earth's atmosphere, oceans and land, it undergoes chemical changes that are often coupled to changes in other such elements as carbon and oxygen. Although this affects the concentration of free oxygen, sulfur has traditionally been portrayed as a secondary factor in regulating atmospheric oxygen, with most of the heavy lifting done by carbon. However, new findings that appeared this week in Science suggest that sulfur's role may have been underestimated. Drs. Itay Halevy of the Weizmann Institute's Environmental Science and Energy Research Department (Faculty of Chemistry), Shanan Peters of the University of Wisconsin and Woodward Fischer of the California Institute of Technology, were interested in better understanding the global sulfur cycle over the last 550 million years – roughly the period in which oxygen has been at its present atmospheric level of around 20%. They used a database developed and maintained by Peters at the University of Wisconsin, called Macrostrat, which contains detailed information on thousands of rock units in North America and beyond. The researchers used the database to trace one of the ways in which sulfur exits ocean water into the underlying sediments – the formation of so-called sulfate evaporite minerals. These sulfur-bearing minerals, such as gypsum, settle to the bottom of shallow seas as seawater evaporates. The team found that the formation and burial of sulfate evaporites were highly variable over the last 550 million years, due to changes in shallow sea area, the latitude of ancient continents and sea level. More surprising to Halevy and colleagues was the discovery that only a relatively small fraction of the sulfur cycling through the oceans has exited seawater in this way. Their research showed that the formation and burial of a second sulfur-bearing mineral – pyrite – has apparently been much more important. Pyrite is an iron-sulfur mineral (also known as fools' gold), which forms when microbes in seafloor sediments use the sulfur dissolved in seawater to digest organic matter. The microbes take up sulfur in the form of sulfate (bound to four oxygen atoms) and release it as sulfide (with no oxygen). Oxygen is released during this process, thus making it a source of oxygen in the air. But because this part of the sulfur cycle was thought be minor in comparison to sulfate evaporite burial, (which does not release oxygen) its effect on oxygen levels was also thought to be unimportant. In testing various theoretical models of the sulfur cycle against the Macrostrat data, the team realized that the production and burial of pyrite has been much more significant than previously thought, accounting for more than 80% of all sulfur removed from the ocean (rather than the 30-40% in prior estimates). As opposed to the variability they saw for sulfate evaporite burial, pyrite burial has been relatively stable throughout the period. The analysis also revealed that most of the sulfur entering the ocean washed in from the weathering of pyrite exposed on land. In other words, there is a balance between pyrite formation and burial, which releases oxygen, and the weathering of pyrite on land, which consumes it. The implication of these findings is that the sulfur cycle regulates the atmospheric concentration of oxygen more strongly than previously appreciated.
{ "pile_set_name": "Pile-CC" }
Wednesday, October 6, 2010 In Which Adam Spends Too Much Time and Energy Pondering Over What Having Stretchy Powers Entails Wednesday! Along with the yellow oval underneath Batman's chest emblem, we started getting Elongated Man as a back-up story. While I have nothing against J'Onn J'Onzz or Roy Raymond, TV Detective, I found Ralph and Sue Dibny's stories a lot more entertaining. And hey! Why not start the day with some Fun with Out of Context Dialogue?(tm!): Over the years, Ralph's powers were a little undefined, depending on who wrote the story. But basically, Ralph was only supposed to be able stretch. Thusly: Personally, I can wiggle my ears, but it kind of freaks me out that Ralph was able to use his nose as an appendage. I mean, no matter how large my nose became, I don't think I would have the muscular control to wrap it around a pistol like that. I think the fact that he's referring to his nose as an "elephant's trunk" is our clue that the writer over-stepped Ralph's abilities to stretch. And there's also this: This kind of makes more sense to me, because the muscles in your arms make smacking someone with your elbow an easy thing to do. So while I question that stretching gives you the ability to use your nose as an eleventh finger, things like smacking someone with an outstretched arm, leg, elbow, knee, etc. made perfect sense. This, however, confuses me: Stretching does not equal giganticism. Ralph is not Plastic Man, who could actually change his shape. Ralph stretches, the end. Is anyone of the opinion that you could actually stretch your hand and make it bigger, thereby making it a plausible use of his powers? In this particular run, Ralph's stretching powers are all over the map, and I'm thinking we may have crossed the line here. If this was a "legal" use of the power, why didn't Ralph stretch out his chest to make himself appear more buff? He was vain enough to do it back in the day. I'm thinking "stretching" means "extending," and they went a little too far. Your thoughts? It also bothered me when Mr. Fantastic would become a bouncing ball. I don't think stretching means you can do that, either. And hey, did you know there was a Plastic Man tv pilot pitched to the Cartoon Network? Ah, what could have been. 5 comments: Of course, Sue Dibny and Sue Storm Richards were grateful for their hubbys' ...er...talents! Reminds me of Fred Hembeck's "Comic Book Newlywed Game" strip: Bob Ewebanks: "What's the most unusual place you and Reed've made whoopee?"Sue Richards: "The kitchen and the bedroom."Bob: "Well, we can only take one answer, Sue, so…"Sue: "No, you don’t understand, Bob.I was in the kitchen, Reed was in the bedroom....." Luffy, the rubbery hero of One Piece, can make his body parts gigantic, but only by inflating his bones (!) first. Plus, it has the nasty side effect of leaving him shrunken and vulnerable for a few minutes afterwards. About Me Be My Facebook Pal! Subscribe to CMNS! Review and Revenue Policies This site supports itself. Time and materials needed to bring you the laughs is a labor of love from me to you. However, there are a countless number of innocent souls out there who do need your support. Find your nearest Animal Rescue group with this link and dontate your time, money and love to them. Thanks! This site will consider reviewing any comic book-related media, including (but not limited to) graphic novels, television programs, movies, music, PC software, banana bread and video games. However, no compensation of any kind will be expected or accepted. Please contact comicsmakenosense (at) gmail (dot) com for submission guidelines. I reserve the right to say anything I dang well please (or nothing at all) about anything. If you don't like it, start your own blog. I'm certainly not stopping you.
{ "pile_set_name": "Pile-CC" }
This invention relates to a grease for constant velocity joints, in particular, a grease for constant velocity joints which has a good extreme pressure property, good durability and vibration inhibiting effect by adding organic molybdenum compound, antimonydialkyl dithiocarbamate (hereinunder referred as Sb-DTC), a zinc dithio phosphate and organic sulfur compound. The conventionally used greases include greases containing sulfur-phosphorus extreme pressure agent and an extreme pressure grease containing molybdenum disulfide and these greases are in general used in lubricating parts where wears and fretting corrosions are easily caused by extreme pressure, such as constant velocity joints used in motorcars (C.V.J), universal joint, steer linkage, spline shaft gear, coupling in industrial machine, gear motor and transmission gear. Greases for wear-inhibiting and extreme pressure composed of sulfur-phosphorus compound were disclosed in U.S. Pat. Nos. 4,466,895 and 3,322,802 and Japanese Patent Publication Soh 66-47099. In these greases, by using sulfur-phosphorus compound independently or in complex, the friction coefficient and extreme pressure were improved. But in order to increase the extreme pressure and decrease the friction coefficient high temperature, a comparatively large amount of additives are required to be used. Some problems remained unsolved such as thermal decomposition of grease by active sulfide derived from the decomposition of sulfur-phosphorus compound in causing high temperature, corrosion and aging by acidic compound. Greases using organic molybdenum, were disclosed in U.S. Pat. Nos. 3,840,463, 4,466,901, 4,428,861, 3,400,140 and 4,208,292 which describes greases using organic molybdenum compound (Mo-DTP) independently of other extreme pressure additives. Further U.S. Pat. No. 3,509,051 disclosed a grease which is characterized in using polyurea thickener, organic molybdenum compound, especially molybdenum dialkyl dithiocarbamate (Mo-DTC) and organic zinc compound in mixed condition to the basic oil. However, with respect to the use of organic molybdenum independently, wear-resistance is increased owing to a decrease in the friction coefficient, and there is no synergistic effect between the organic molybdenum and other extreme pressure additives. And as there are limits in extreme pressure of molybdenum disulfide (MoS.sub.2) compound produced by the decomposition of organic molybdenum, in friction condition where extreme pressure property is greatly required, great heat radiation due to lubrication in friction area and great deal of wears like scoring caused. And in case that a mixture of an organic molybdenum compound and an organic zinc compound (Zn-DTP) is used as with a lithium grease there is an increase in both, friction coefficient and wear-resistance. Though the critical temperature of lithium grease is 120.degree. C., particularly in flanging type constant velocity joints wherein the rolling friction and sliding friction simultaneously occur, the temperature the of surrounding area increases to over the maximum 120.degree. C. because the of impulse load and frictional heat caused by sliding friction. Furthermore, the thermal decomposition temperature of Mo-DTP and Zn-DTP is low therefore are readily decomposed at 120.degree. C. into molybdenum disulfide compound and some cause some detrimental side-effects such as corrosion, sludge and slight-corrosions remain unsolved. Further Japanese Patent Publication Pyung 5-62639 disclosed a grease composition comprised of molybdenum a compound and sulfur compound, which improved oxidation stability, wear resistance and corrosion-inhibiting effects but failed to reduce the beating noise and vibrations. Conventionally used greases do not infiltrate into the lubricating area well in bad lubrication conditions which can result in wear and wear vibrations. And in the parts where slight vibrations do occur, the oxide produced by initial corrosion accelerates the wear, and abnormal beating noise, and vibrations occur. Therefore, the inventors have made efforts to solve the aforementioned problems and at last have succeeded invent a grease which is characterized in that the extreme pressure and the wear-resistance properties are greatly improved, using organic molybdenum, antimony dialkyl dithiocarbamate, zinc dithiophosphate and organic sulfide compound in mixed condition; sludge occurrence possibility is reduced by improving thermal stability of additives; infiltration into the lubricating area is made easy by low viscosity; and good durability is aquired when it applied to constant velocity joints.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Scene Graph as Object Container? Scene graph contains game nodes representing game objects. At a first glance, it might seem practical to use Scene Graph as physical container for in game objects, instead of std::vector<> for example. My question is, is it practical to use Scene Graph to contain the game objects, or should it be used only to define scene objects/nodes linkages, while keepig the objects stored in separate container, such as std::vector<>? A: Deciding on what type of scene management to use depends very heavily on what type of logic you are trying to run. Consider the different consumers of your scene: Rendering Consumer The renderer probably just wants to know what is currently visible to the user at any given point. It wants a bounding volume hierarchy for fast culling (BVH wiki article) so that it can figure out that a chair inside a boat doesn't need to be drawn because the boat's bounds are outside the view frustum. This might be embedded into an octree. It also might want to have an idea that the chair is on its back inside the boat, and that the boat is rolling up and down on some waves when it finally comes into view. That way to find the final world coordinates of the chair's vertices it can concatenate the chair and boat transforms and be done with it (this also makes your job as a programmer easier). Yet another way of looking at this problem is that the renderer is probably running a good card, and ultimately just wants a pile of triangles sorted so as to minimize texture, shader, material, lighting, and transform state changes. This last will probably help you more than a BVH, preformance-wise. Game Logic Consumer The game logic probably just wants a flat list of things that can talk to each other by a messaging system, so a std::vector is probably fine. The game might also want a way of keeping track of who is closest to what, so some sort of [nearest-neighbor][3] information might be helpful in that case. This can be provided by a BVH also, but having to up and down the graph might be annoying. The game might even just want to know that when it moves A, A's item B should move too... in which case we are back to a sort of transform hierarchy. Physics Consumer Your game physics might want to have a [special representation][4] of indoor spaces for very fast collision detection. Alternately it might use some sort of octree or [spatial hashing][5] to efficiently find things that might collide. None of the above physics data structure really looks like a "scene graph" in the Java3D sense. Audio Consumer An audio engine just wants geometry, perhaps a potentially visible (audible) set, or some sort of bounding volume hierarchy to calculate sound attenuation and propogation. Again, not really a normal sort of scene graph, though it may well be a graph of geometry in your scene. Ultimately... ...it really just depends on the exact needs of your application. I'd suggest using a flat list to start with, and seeing where your issues arise. You might even try a flat list with a transform hierarchy, because that is perhaps the one sort of scenegraph useful for reasons other than efficiency. Good luck! A: There's one good reason not to use the scene graph as the container for game objects, and that's instancing. If you want to reuse some resources, it makes much more sense to just refer to the resource from your scene graph several times than to have several actual copies.
{ "pile_set_name": "StackExchange" }
Hanson Shingles Hanson Shingles Company Overview Hanson Roof Tiles is “the leading manufacturer of concrete roof tile in the US.” (official site) Part of the larger Heidelberg Cement Group, with locations in 6 countries, the firm manufactures its roofing products at 9 locations in four US states. Hanson serves the South, Southwest, Florida, Texas, and California. The company offers an incredible array of shingle profiles in a wide variety of stock and custom colors. Hanson Shingles Hanson’s products emulate the look of traditional slate, clay tile, and cedar shake roofs to blend with any architectural style. Tile surfaces can be smooth or textured with uniform or ragged edges. Hanson’s regional plants use “more oxides to generate better, longer-lasting colors.” (official site) Tiles can be single-color, or use blended shades and patterns. Installation Concrete tile is very heavy and is best installed by trained and licensed professionals. Special attention must be paid to the underlying roof structure. However, once installed, tile roofs are essentially maintenance-free. Rating: 4 out of 10 Durability Hanson’s concrete shingles will outlast most traditional roofing materials. They will stand up to the worst tropical weather and harsh sunlight. Concrete roof systems won’t rot and are highly pest-resistant. Warranty Hanson Roofing Tile provides a first-class limited Lifetime warranty, transferable to subsequent owners. Designed to withstand the rigors of stormy tropical regions, concrete tile has an expected defect-free life in excess of 50 years.
{ "pile_set_name": "Pile-CC" }
OUTLOOK EMAIL NOTIFICATION Your Date of Migration is: May 23rd YOU WILL BE UNABLE TO SEND E-MAIL unless you take the following action: Please go through your Notes email and clean out as many old/un-needed email items as possible BEFORE your date of migration.? After you are migrated to Outlook you will only be allocated 100MB of total Mailbox space.?? If more than this amount of data is migrated to Outlook YOU WILL NOT BE ABLE TO SEND E-MAIL until it is below the 100MB limit.? Cleaning up your Notes email now will prevent this from happening to YOU. Enron's messaging platform is migrating from Lotus Notes to Microsoft Outlook 2000 worldwide. You will be accessing Outlook for all of your email functions. WHY IS ENRON MIGRATING TO OUTLOOK 2000? Many factors contributed to the decision to migrate from Lotus Notes to Microsoft Exchange/Outlook. The most prominent factors were: ? Significant advantages to moving to a product that is more integrated with current Enron apps (Windows 2000, Office and Internet Explorer) ? More efficient Shared PC and Roaming User features ? Improved support and integration for Palm/CE devices ? Instant Messaging capabilities WHAT IS BEING MIGRATED TO OUTLOOK 2000? ? Email Messages. From the date of your scheduled migration, the last (30) thirty days of your Email will be converted for use in Outlook. ? All your folders in Notes you use to store email messages in. ? To Do Items ? Journal Items ? Calendar Entries dating from (1) one year in the past to (10) ten years in the future will be converted. ? Address Books, but NOT your Distribution Lists that you created. You will need to re-create these in Outlook. Thank you, Outlook 2000 Migration Team
{ "pile_set_name": "Enron Emails" }
832 F.Supp. 209 (1993) Suella DEBOLT, et al., Plaintiffs, v. Mike ESPY, Secretary, U.S. Department of Agriculture, et al., Defendants. No. C2-91-157. United States District Court, S.D. Ohio, E.D. July 18, 1993. *210 *211 Sandra A. Scott, Southeastern Ohio Legal Service, Zanesville, OH, Gary Michael Smith, Southeastern Ohio Legal Service, New Philadelphia, OH, for Suella Debolt. Sylvia T. Kaser, U.S. Dept. of Justice, Chief, Special Litigation Section, Washington, DC, O. Charles Hosterman, U.S. Atty., Columbus, OH, for all other defendants. James D. Thomas, Robert L. Hust, Squire, Sanders and Dempsey, Columbus, OH, for Woodrose Ltd. MEMORANDUM AND ORDER BECKWITH, District Judge. Background This case is currently before the Court to consider several motions filed by the parties in this action. This matter arose when Suella Debolt filed a complaint against two private Defendants, the owner and management company of the housing project in which she resided, and against several federal Defendants, the Secretary of Agriculture, and the Administrator, State Director, and a District Director of the Farmers Home Administration (hereinafter the "FmHA"). Following their settlement with the Plaintiff, the private Defendants were dismissed from this case in February of 1992. In her complaint, the Plaintiff contends that the FmHA's occupancy limits combined with the agency's administration of the Rural Rental Housing program produce a discriminatory impact on families with children. Beginning in 1986, Ms. Debolt resided in the Village Green Apartments, a "Section 515" project. The FmHA administers a program called the Rural Rental Housing program or Section 515 program. Under Section 515, the FmHA administers the Section 515 program through loan programs and through project operations. The loan programs aid in the construction of rental housing for very low, low, or moderate income persons or families residing in rural areas experiencing a shortage of adequate housing. 42 U.S.C. § 1485. Ms. Debolt's lease contained a provision that limited the number of occupants in her apartment to four persons. In 1991, when Ms. Debolt gave birth to a fourth child, she was in violation of the lease's four person occupancy limit. Accordingly, the management of the Village Green Apartments notified Ms. Debolt that she was required to move at the end of her lease term. However, as part of the settlement of the eviction action pending against her, Ms. Debolt stayed in her apartment for an additional year. Later, in December of 1991, Ms. Debolt had a fifth child and she was unable to find a larger unit in FmHA's Rural Rental Housing Program, so she moved in with relatives. On September 30, 1992, this Court granted the Plaintiffs' motion to certify this matter as a class action pursuant to Rule 23 of the Federal Rules of Civil Procedure. Accordingly, the Plaintiff class has been certified as: *212 all persons who either are or would be eligible to reside, or to continue to reside within a project financed under FmHA's Section 515 Rural Rental Housing Program, but for the fact that their family size exceeds that permitted to reside in a two bedroom apartment under FmHA's occupancy standards. The Plaintiffs' First Amended Complaint pleads a class action challenging the promulgation and enforcement of an FmHA regulation, 7 C.F.R. § 1944.553, as conflicting with 42 U.S.C. §§ 1471, 1480, and 1485. The Plaintiffs argue that § 1944.553 was promulgated in violation of the Administrative Procedure Act (hereinafter the "APA"). The Plaintiffs also argue that the Defendants improperly administer the Section 515 programs in the State of Ohio. The Plaintiffs assert that the Defendants have a duty to review and disapprove non-complying termination notices to tenants, but that they have failed to do so. The Plaintiffs also assert that the Defendants have approved a model rental agreement which does not provide for a yearly rental term. The Plaintiffs also allege that these federal officials failed to administer the Section 515 program to meet the needs of eligible families. The Plaintiffs specifically assert that this improper administration arbitrarily and unlawfully denies or terminates eligibility for financially eligible tenants and applicant families needing more than two bedrooms under FmHA's restrictive occupancy limits. The Plaintiffs further allege that these occupancy limits, along with the Defendants' improper administration, produce a discriminatory and unlawful disparate impact upon families with children, in violation of the Fair Housing Act. The Federal Defendants' Motion for Judgment on the Pleadings The federal Defendants have filed a motion for judgment on the pleadings pursuant to Rule 12(c) of the Federal Rules of Civil Procedure. In their motion, the federal Defendants assert that this Court is without jurisdiction to adjudicate the Plaintiffs' claims, except for those claims contained in Count 5 of the Plaintiffs' complaint. The federal Defendants first contend that Counts 3, 4, 6, 7, 8, 9, and 10 are barred by the doctrine of sovereign immunity. The federal Defendants also contend that the Plaintiffs have no private right of action under either the United States Housing Act of 1949 (hereinafter "USHA") or the Fair Housing Act, if sovereign immunity has been waived. The federal Defendants finally argue that the Plaintiffs lack standing to assert their claims that FmHA must finance rental housing units of a particular size. However, the Plaintiffs argue that their claims are not barred by the doctrine of sovereign immunity as the law is allegedly well settled that statutory and constitutional claims for equitable relief are not barred by sovereign immunity. Also, the Plaintiffs argue that their claims for individual damages and attorney fees under Title VIII are not barred by sovereign immunity, since such immunity was waived by Congress. Under the Administrative Procedure Act, Title 5 Section 702 provides, in part: ... An action in a court of the United States seeking relief other than money damages and stating a claim that an agency or an officer or employee thereof acted or failed to act in an official capacity or under color of legal authority shall not be dismissed nor relief therein be denied on the ground that it is against the United States or that the United States is an indispensable party. Thus, Section 702[1] of the APA acts to waive sovereign immunity for the Plaintiffs' USHA and constitutional claims. However, in their complaint, the Plaintiffs have only asserted one of their eight remaining claims under the APA. After a careful review of the authorities and arguments advanced by the parties in their memoranda, *213 this Court finds that it agrees with the federal Defendants that all of the Plaintiffs' claims should be asserted under the APA. Accordingly, the next question is whether the Plaintiffs should be given leave to amend their complaint to assert their claims under the APA. The federal Defendants argue that the Plaintiffs should not be given leave to amend their complaint in this case to invoke the Administrative Procedure Act, since the litigation has been pending for more two years. However, the Court notes that the federal Defendants did not raise this issue until they filed this motion for judgment on the pleadings. The first mention of this issue was contained in the federal Defendants' motion for judgment on the pleadings which was filed almost two years after the institution of the case. The Court first notes the rationale expressed by Judge Whipple of the Western District of Missouri in the case of Tinsley v. Kemp, 750 F.Supp. 1001 (W.D.Mo.1990). In Tinsley, Judge Whipple stated, in part: The intent of the complaint is obvious, so the amendment would be almost a formality. Nevertheless, plaintiffs' basis for bringing civil rights claims against a federal agency should be established explicitly in their complaint. Accordingly, leave will be granted to amend the complaint. Id. at 1010. In another case, Judge Haight of the Southern District of New York allowed plaintiffs to amend their complaint to invoke the Administrative Procedure Act. Almonte v. Pierce, 666 F.Supp. 517, 524-5 (S.D.N.Y. 1987). In Almonte, Judge Haight noted that the case was at the early stage of litigation and that the federal Defendants had not demonstrated that any prejudice would result from allowing the plaintiffs to amend their complaint. Id. at 525. In this case, the federal Defendants have not established that any specific prejudice would result from allowing the Plaintiffs to amend their complaint at this late date. The federal Defendants do allege that "voluminous" discovery has occurred in this case, although they do not allege how a technical amendment to the Plaintiffs' complaint would affect whatever discovery has already occurred in this case. The Court simply can not infer that prejudice would result from an amendment which is "almost a formality." See, 750 F.Supp. at 1010. Moreover, Rule 15(a) of the Federal Rules of Civil Procedure provides that "leave [to amend] shall be freely given when justice so requires." As in Tinsley, the Plaintiffs' intent as expressed by their complaint is evident, and the amendment in this case is thus a mere formality. Under the circumstances presented by this case, the Court finds that justice mandates that the Plaintiffs be given leave to amend their complaint. The Court hereby DEEMS the Plaintiffs' complaint to be amended so that their claims are now asserted under the Administrative Procedure Act. The federal Defendants' motion for judgment on the pleadings is hereby DENIED.[2] The Motions for Summary Judgment Standard of Review Rule 56(c) of the Federal Rules of Civil Procedure provides: [Summary judgment] ... shall be rendered forthwith if the pleadings, depositions, answers to interrogatories, and admissions on file, together with the affidavits, if any, show that there is no genuine issue as to any material fact and that the moving party is entitled to judgment as a matter of law. The purpose of a summary judgment motion is not to resolve factual issues, but to determine if there are genuine issues of fact to be tried. Lashlee v. Sumner, 570 F.2d 107, 111 (6th Cir.1978). In 1986, the United States Supreme Court issued three decisions which gave new life to Rule 56 as a mechanism for weeding out certain claims at the summary judgment stage. Anderson v. Liberty Lobby, Inc., 477 U.S. 242, 106 S.Ct. 2505, 91 L.Ed.2d 202 (1986); Celotex Corp. v. Catrett, 477 U.S. 317, 106 S.Ct. 2548, 91 L.Ed.2d 265 (1986); and Matsushita Electric Industrial Co. v. Zenith Radio Corp., 475 U.S. 574, 106 S.Ct. 1348, 89 L.Ed.2d 538 (1986). It is well recognized *214 that these cases brought about a "new era" in summary judgment practice. Street v. J.C. Bradford & Co., 886 F.2d 1472, 1476 (6th Cir.1989). The three opinions by the Supreme Court reflect a return to the original purpose of the summary judgment motion. Id. Accordingly, the summary judgment "standard provides that the mere existence of some alleged factual dispute between the parties will not defeat an otherwise properly supported motion for summary judgment; the requirement is that there be no genuine issue of material fact." Anderson, 477 U.S. at 247-8, 106 S.Ct. at 2510 (emphasis in original). Moreover, when a party cannot establish the existence of an element essential to that party's case on which the party will have the burden of proof at trial, the Court must enter summary judgment against that party, pursuant to Rule 56. Celotex, 477 U.S. at 322, 106 S.Ct. at 2552. Thus, in order to survive a motion for summary judgment, [w]hen the moving party has carried its burden under Rule 56(c), its opponent must do more than simply show that there is some metaphysical doubt as to the material facts.... In the language of the Rule, the nonmoving party must come forward with "specific facts showing that there is a genuine issue for trial." Matsushita, 475 U.S. at 586-87, 106 S.Ct. at 1356 (emphasis in the original) (footnote and citations omitted). Rule 56(e) of the Federal Rules of Civil Procedure provides: When a motion for summary judgment is made and supported as provided in this rule, an adverse party may not rest upon the mere allegations or denials of the adverse party's pleading, but the adverse party's response by affidavits or as otherwise provided in this rule, must set forth specific facts showing that there is a genuine issue for trial. If the adverse party does not so respond, summary judgment if appropriate, shall be entered against the adverse party. Accordingly, mere allegations are not sufficient to defeat summary judgment. The Court can now apply this standard to the Plaintiffs' and the federal Defendants' motions for summary judgment. The Federal Defendants' Motion for Summary Judgment The federal Defendants first contend that neither the United States Housing Act of 1949 ("USHA") nor the Fair Housing Act commands the FmHA to finance construction of additional three and four bedroom units or to dictate to private developers that they must build such units. Since no three bedroom apartments were available for Ms. Debolt when her family size increased to five, the Plaintiffs assert that the FmHA through its improper administration has violated Ms. Debolt's rights under the USHA, Section 515 of USHA, and the Fair Housing Act. However, the Plaintiffs have not indicated exactly what statutory directive supports these alleged rights. The Court has already determined that the Plaintiffs' claims are reviewable solely under the Administrative Procedure Act, 5 U.S.C. § 551 et seq. Pursuant to the APA, the FmHA's administration of the Rural Rental Housing Program may only be set aside if it is "arbitrary, capricious, an abuse of discretion or otherwise not in accordance with law; ... [or] in excess of statutory right...." 5 U.S.C. § 706; Jaimes v. Toledo Metropolitan Housing Authority, 715 F.Supp. 835, 839 (N.D.Ohio 1989). The Court first notes that pursuant to Section 515 of the Rural Rental Housing program, the FmHA is authorized to utilize its discretion in its financing of elderly, family or handicapped housing. 42 U.S.C. § 1485. The Court agrees with the federal Defendants that the FmHA is not required to finance any particular proportion of each type of housing. Moreover, the Court agrees with the federal Defendants that the provisions cited by the federal Defendants in their memoranda support their argument that it was Congress' intention that the private sector take an active role in meeting the nation's housing needs. Additionally, the federal Defendants argue that the familial status provision of the Fair Housing Act does not require the FmHA to *215 finance, or private developers to construct, housing for large families. On the other hand, the Plaintiffs argue that the Fair Housing Act obligates the FmHA to evaluate and consider the impact of proposed housing on existing discriminatory housing patterns, to refrain from approving housing that reinforces those patterns, and to affirmatively promote non-discriminatory housing, relying on Jaimes v. Lucas Metropolitan Housing Authority, 833 F.2d 1203 (6th Cir.1987); and Garrett v. Hamtramck, 503 F.2d 1236 (6th Cir.1974). In 1988, Congress passed the Fair Housing Amendments Act of 1988 which amended the Fair Housing Act of 1968, which is located in 42 U.S.C. §§ 3601-3631. The Fair Housing Act is also referred to as Title VIII of the Civil Rights Act of 1968. Pursuant to Title VIII, discriminatory housing practices based on race, color, national origin, religion, and sex are prohibited. The 1988 Amendments added "familial status" as a protected category with respect to discriminatory housing practices. The Court notes that as opposed to families in general, "large families" are not a specifically protected class under Title VIII. Indeed, the Court notes that in this case Ms. Debolt did not violate her lease and receive an eviction notice until the birth of her fourth child. Accordingly, Ms. Debolt resided in the apartment with children for some length of time before she violated the four person occupancy limit. The Court finds that the Plaintiffs' position here is not supported by the Fair Housing Act, the legislative history of the Act, the administrative interpretation of the Act by the Secretary of HUD, and the relevant case law. Indeed, the Plaintiffs simply have no right to public housing, and the FmHA is not obligated to finance or to compel private developers to build large apartment units. See, Citizens Comm. for Faraday Wood v. Lindsay, 507 F.2d 1065, 1070-71 (2d Cir. 1974), cert. denied, 421 U.S. 948, 95 S.Ct. 1679, 44 L.Ed.2d 102 (1975); Mahaley v. Cuyahoga Metro. Housing Authority, 500 F.2d 1087, 1093 (6th Cir.1974), cert. denied, 419 U.S. 1108, 95 S.Ct. 781, 42 L.Ed.2d 805 (1975). The federal Defendants also contend that the Secretary's regulations effectuate the policies and purposes of the Rural Rental Housing Program and the Fair Housing Act. The Plaintiffs contend that the FmHA's duties under the Act are found in the case law interpreting Section 3608. The federal Defendants concede that Section 3608(d) is applicable to the various FmHA programs. The Plaintiffs' complaint here focuses on alleged "improper administration" of the Rural Rental Housing Program by the FmHA. The Plaintiffs' claims here originate because Ms. Debolt was not able to obtain a larger, FmHA subsidized apartment and because as many as 15% of eligible families are not able to obtain such housing. In this area, the Plaintiffs' claims are governed by the narrow scrutiny of Section 3608 and the Administrative Procedure Act. See, N.A.A.C.P. v. Secretary of Housing & Urban Dev., 817 F.2d 149, 157-58 (1st Cir.1987). Section 3608(d) of Title VIII provides that the FmHA "shall administer [its] programs and activities relating to housing ... in a manner affirmatively to further the purposes of [the Fair Housing Act]." 42 U.S.C. § 3608(d). The Court finds that the Plaintiff has not presented any evidence that demonstrates that the FmHA acted arbitrarily in deciding its obligations under Section 3608. Additionally, the federal Defendants contend that no equal protection violations occurred in this case, since the FmHA's actions were rationally related to the USHA's legitimate purpose. The federal Defendants have demonstrated that the existing stock of housing is the product of a rational, market-based process. The Plaintiffs have not presented evidence to the contrary. Moreover, the federal Defendants assert that the Plaintiffs' allegations do not rise to the level of a substantive due process violation. In support of this claim, the Court finds that the Plaintiffs have not presented any evidence that supports the extremely high standard of proof required for a substantive due process claim. Also, the federal Defendants argue that the Plaintiffs' claim against the FmHA as a third party beneficiary must be dismissed, *216 since the Plaintiffs have no actionable rights as a third-party beneficiary. The Plaintiffs claim that the "financial, contractual and fiduciary relationship" with respect to the operation of the Village Green Apartments gives rise to certain third party beneficiary rights, and that these rights were violated. The federal Defendants assert that the Plaintiffs have failed to identify any particular contractual arrangement which would give rise to her rights as an alleged third party beneficiary. The courts have generally concluded that tenants are not third-party beneficiaries to regulatory agreements under the USHA. See, e.g., Perry v. Housing Authority of City of Charleston, 664 F.2d 1210, 1218 (4th Cir. 1981); Falzarano v. United States, 607 F.2d 506, 511 (1st Cir.1979); Angleton v. Pierce, 574 F.Supp. 719, 735-6 (D.N.J.1983), aff'd, 734 F.2d 3 (3d Cir.), cert. denied, 469 U.S. 880, 105 S.Ct. 245, 83 L.Ed.2d 183 (1984); Carson v. Pierce, 546 F.Supp. 80, 86-7 (E.D.Mo.1982), aff'd, 719 F.2d 931 (8th Cir. 1983). Thus, the Court finds that the Plaintiffs have no actionable rights as third-party beneficiaries with respect to the operation of the Village Green Apartments. The federal Defendants also argue that the Plaintiffs cannot establish that FmHA failed to review her notice of eviction and that such a failure violated her rights. The Plaintiffs' claim here appears to be that the FmHA failed to meet its regulatory duty to disapprove legally deficient eviction notices utilized by borrowers and landlords. The Court notes first that the Plaintiffs have not asserted that the FmHA failed to review the particular eviction notice that was served upon Ms. Debolt. Thus, it appears to the Court that the Plaintiffs lack standing on this issue. See, Lujan v. Defenders of Wildlife, ___ U.S. ___, ___, 112 S.Ct. 2130, 2140, 119 L.Ed.2d 351 (1992). The federal Defendants also argue that the Plaintiffs have procedural safeguards with respect to eviction notices under Ohio landlord-tenant law. The Court finds that the existence of this procedural safeguard removes the potential of a due process violation arising from the FmHA's failure to review the eviction notices. Additionally, the federal Defendants assert that the Plaintiffs' claims concerning FmHA's model lease terms are moot. The Plaintiffs assert that the FmHA's model lease violated both the USHA and the APA. The Plaintiffs argument is that the month-to-month renewal term allowed by the Ohio model lease violates USHA's regulations which mandate that "[l]eases for units for which tenants are eligible must cover a period of one year...." 7 C.F.R. Ch. XVIII Pt. 1930, Subpt. C., Exh. B at par. VIII(A)(1). On August 28, 1992, the director of FmHA's Ohio office issued Administrative Notice 956, that contained a new form of model lease that provided for rental terms of one year. Accordingly, the Court finds that the Plaintiffs' claim regarding the month-to-month rental term is now moot. As it has determined that the federal Defendants' motion for summary judgment is meritorious, the Court need not discuss why it has concluded that the Plaintiffs' two motions for summary judgment lack merit. Conclusion For the reasons outlined above, the Court hereby DENIES the Defendants' motion for judgment on the pleadings. The Court hereby DEEMS the Plaintiffs' complaint to be amended so that their claims are asserted under the Administrative Procedures Act. For the reasons outlined above and for the reasons stated by the federal Defendants in their memoranda, the Court hereby GRANTS the federal Defendants' motion for summary judgment, and this case is hereby DISMISSED. IT IS SO ORDERED. NOTES [1] Under the APA, the judicial scope of review has been established by Section 706. Section 706 provides, in part, that a reviewing court shall "hold unlawful and set aside agency action, findings, and conclusions found to be (a) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; (b) contrary to constitutional right, power, privilege or immunity; (c) in excess of statutory jurisdiction, authority, or limitations or short of statutory right; (d) without observance of procedure by law...." [2] Due to its ruling on the Section 702 jurisdictional issue, the Court will not discuss the other arguments advanced by the parties on the motion for judgment on the pleadings.
{ "pile_set_name": "FreeLaw" }
Suppression of juvenile social behavior requires antagonism of central opioid systems. Pairs of male and female rats were injected with either tertiary naltrexone (NTX) which readily crosses the blood-brain barrier, or quaternary naltrexone (QNTX) which does not, to determine the importance of central opioid systems in the elaboration of juvenile social behavior. In the first experiment, only intraperitoneal injections of NTX (1.0 mg/kg) suppressed the frequency of wrestling pins. Peripheral injections of QNTX (10.0 mg/kg) were without effect. In a second experiment, QNTX (2.0, 4.0, or 8.0 micrograms/4.0 microliters) was injected directly into the lateral ventricles. Intracerebroventricular injection of the moderate dose reliably reduced frequency of pinning while the higher dose was severely incapacitating and the low dose was without effect. The results of these two experiments confirm an important role for brain opioid systems in the control of juvenile social interaction.
{ "pile_set_name": "PubMed Abstracts" }
Apoptosis induction by hypercross-linking of the surface antigen CD5 with anti-CD5 monoclonal antibodies in B cell chronic lymphocytic leukemia. We evaluated cells from 24 patients with B cell chronic lymphocytic leukemia (B-CLL) to determine apoptosis induced by CD5 hypercross-linking. Following the CD5 hypercross-linking with anti-CD5 monoclonal antibodies (MoAbs), we identified 10 patients where CD5 hypercross-linking induced apoptosis (group A) and 14 patients whose cells were resistant to the anti-CD5 MoAbs (group B). The programmed cell death pathway of the cells from patient group A was caspase-3 and poly (ADP-ribose) polymerase (PARP)-dependent, involved a reduction of the mitochondrial transmembrane potential DeltaPsi and a down-regulation of the anti-apoptotic Bcl-2, Mcl-1 and iNOS proteins. Early activation-associated molecules such as CD25 and CD69 were expressed at higher levels than in controls after 6 h of culture with anti-CD5 MoAb. The expression of CD5 and of CD72, the ligand for CD5, were significantly lower in group A compared with group B. Anti-CD20 MoAb had similar activity with anti-CD5 MoAb and the combination of the two MoAbs seemed to be additive. In this study, it is suggested that the cells from some B-CLL patients can be induced into programmed cell death by CD5 hypercross-linking with anti-CD5 MoAbs.
{ "pile_set_name": "PubMed Abstracts" }
A self-organized network (SON) may provide mechanisms for self-configuration, self-discovery, and self-organization. Self-configuration and self-discovery enable network devices (e.g., managed nodes) of the SON to be transparent to ordinary users. Self-organization ensures robustness of the SON during dynamic network topology changes and link breakages. It also ensures optimal and efficient bandwidth utilization. The SON operational and maintenance (OAM) architecture includes a domain manager and its managed nodes, an enterprise management system (EMS), etc. A managed node represents a radio base station (e.g., of a wireless network), home devices (e.g., Internet routers, television set-top boxes (STBs), etc.), etc. Current SON OAM architectures have several disadvantages. For example, the EMS needs to track the addresses of all its managed nodes. The tracking may include registering Internet protocol (IP) addresses and/or port numbers associated with the managed nodes in a directory within or without the EMS. The tracking may also include registering managed node name and IP address/port number pairs associated with the managed nodes in a database within or without the EMS. Such tracking becomes a major task when the number of managed nodes increases and when the managed nodes become mobile (e.g., acquire new addresses). Furthermore, when the EMS wishes to provide a command and/or information to all its managed nodes, the EMS sends the command and/or information, via the domain manager, individually to each managed node (e.g., one method invocation per each managed node).
{ "pile_set_name": "USPTO Backgrounds" }
Mifepristone sensitizing cisplatin for cervical adenocarcinoma HeLa cell sensitivity to chemotherapy and its mechanism. The study was designed to investigate proliferation inhibition for cervical adenocarcinoma HeLa cell treated with cisplatin combined with mifepristone and access its possible mechanism. HeLa cell was processed by different concentrations of mifepristone, cisplatin, and their combination respectively. Cell's proliferation inhibition rate and induction apoptosis ability were detected by MTT assay, FCM; the expression of P53, survivin and HPV E6 protein were measured by Western Blot. The results showed that cisplatin inhibits proliferation of HeLa cells in different concentrations (p <0.01). Mifepristone had no effect on HeLa cell proliferation inhibition rate during 24 and 48 hours (p > 0.05). Mifepristone at low concentrations (< or = 10 micromol/l) combined with cisplatin can significantly enhance the inhibitory effect of cisplatin on HeLa cell line. Flow cytometry showed that mifepristone at low concentrations (< or = 10 micromol/l) combined with cisplatin can induce apparent apoptosis of HeLa cell line in concentration dependent manner. Western blotting demonstrated that the expression of P53 protein increased and the expression of HPV E6 survivin protein decreased in HeLa cells treated with MIF at low concentrations (< or = 10 micromol/l) combined with cisplatin. Mifepristone at low concentrations (< or = 10 micromol/1) can enhance chemosensitivity and capability of inducing apoptosis of cisplatin to HeLa cells. The strengthening effect of growth inhibition and chemosensitivity to cisplatin of mifepristone are associated with down-regulating HPV E6 survivin protein and upregulating p53 protein.
{ "pile_set_name": "PubMed Abstracts" }
Whigs, Marxists, and Poachers Albion’s Fatal Tree: Crime and Society in Eighteenth-Century England by Douglas Hay, by Peter Linebaugh, by John G. Rule, by E. P. Thompson, by Cal Winslow Pantheon, 352 pp., $5.95 (both books will be available in mid-February) (paper) Whigs and Hunters: The Origins of the Black Act by E. P. Thompson Pantheon, 313 pp., $5.95 (both books will be available in mid-February) (paper) Twelve years ago, in 1963, Mr. E. P. Thompson exploded upon the historical scene with a book of erudition, imagination, and moral passion, The Making of the English Working Class. It is one of those books that inspire generations of scholars and students to either emulation or debunking, and it matters relatively little whether or not the major hypotheses stand the test of time. Maybe he was speaking only about a literate labor aristocracy and not about the working class generally; maybe he was grossly unfair to the Methodists; maybe the working class was not “made” as and when he said it was. The book will still remain a towering work of historical literature. Since then Mr. Thompson has been digging back into the eighteenth century in pursuit of that study of elite and popular mentalités that the more advanced sectors of the historical profession now recognize to be as central to the process of historical change as shifts in economic, social, or political structures. The subject matter of these two new books by Mr. Thompson and his associates is the social significance of crime and the law, and they are thus part of this new drive to investigate the historical interactions of society and culture (in the anthropological sense of the term). The keynote essay in these twin volumes is that by Douglas Hay, “Property, Authority and the Criminal Law,” which forms the introduction to Albion’s Fatal Tree. Here he sketches out a new interpretation of the social role of the law in eighteenth-century England. He tries to explain two paradoxes. Why was it that although the legislature kept adding—from about 50 to 200—to the number of offenses against property which carried the death penalty, yet the number of hangings was only about a quarter of what it had been in the seventeenth century, and if anything was tending to fall? Secondly, why did the propertied classes so obstinately refuse until the 1830s to alter this archaic system, in which practice was so wildly at variance with the statute law, despite overwhelming evidence that a milder but more regularly enforced system of punishments would protect their property more effectively and would be more in accord with natural justice and Englightenment thought? The answer to both questions lies in the true functions of law in that society. In 1688 the ruling elite had finally rejected, as an unacceptable threat to its own power, the imposition of a Continental legal apparatus, including the abolition of the jury system and the establishment of an ubiquitous police force. This being the case, social control over the remaining 97 percent of the population had to be maintained by a mixture of terror tempered by mercy, consensus in the rough justice of the system, and an awesome display of the majesty of the law. The passage of more and more penal legislation was not intended to increase the number of hangings but merely to expand the area of the arbitrary exercise of … This article is available to online subscribers only. Please choose from one of the options below to access this article: Purchase a trial Online Edition subscription and receive unlimited access for one week to all the content on nybooks.com. If you already have one of these subscriptions, please be sure you are logged in to your nybooks.com account. If you subscribe to the print edition, you may also need to link your web site account to your print subscription. Click here to link your account services.
{ "pile_set_name": "Pile-CC" }
Sign up for our newsletter Stay in touch with Dr. Levy as he travels the world sharing helpful hints for healthy relationships. Newsletters will hit your email inbox once a month. We won’t share your email with anyone for any reason. Success! Our Treatment Adult/Couples Therapy Though there are many similarities in the human experience, we all have unique qualities and problems. You are an important part of the treatment team. We develop a customized program to meet an individual’s and/or couple’s needs through an ongoing blend of assessments and interventions. Family Therapy Parents often tell us traditional psychotherapeutic approaches have not been effective with their severely attachment-disordered children, who lack the trust and ability to form a working alliance basic to success in therapy. Our family-systems approach and use of Corrective Attachment Therapy focus on assessing and changing relationship patterns. FREE RESOURCES Outpatient Therapy For more than 30 years, our practice has provided outpatient therapy to the people of Evergreen, Colo., and surrounding Colorado communities. Our team offers outpatient therapy in individual and group settings. Our services help adults, couples, children and families. We are experts in treating substance abuse, trauma, school issues, parenting, problem solving, anxiety, depression and family issues — especially in blended families. Intensive Outpatient Program World-recognized psychologists Terry Levy and Michael Orlans originally developed this treatment format to provide help for children and families needing specialized services unavailable where they live. For more than 20 years, families and couples have traveled to Evergreen from every region of the United States and from other countries to receive therapy for attachment-related trauma. Professional Training Terry Levy, Ph.D., B.C.F.E. is an internationally respected teacher, trainer and clinician with an expertise in the treatment of trauma and attachment disorders in children, teens, adults and couples. Throughout the year, he offers training for mental health and social service professionals from his home base at the Evergreen Psychotherapy Center in Evergreen, Colo.; and at seminars hosted across the United States and around the world. Raising a child with attachment issues is incredibly challenging. It can – at times – be exhausting, upsetting, frustrating, depressing, frightening or infuriating. And, it can take a toll on even the strongest, most devoted parent. Secondary traumatic stress (STS)... Over the years we’ve learned that there are certain challenges almost every parent faces somewhere along the way. Parents come to us with similar questions about how to address specific behaviors like lying, eating issues and public tantrums over and over. Here are... Countless studies, surveys, books and articles have been published on what makes a successful relationship. While there is no single formula for a perfect relationship, not surprisingly the research overwhelmingly concludes that good communication skills and habits...
{ "pile_set_name": "Pile-CC" }
1634 1574848011319 httpcache-v1 Method: POST URL: https://www.notion.so/api/v3/getRecordValues Body:+110 { "requests": [ { "id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "table": "block" } ] } Response:+1434 { "results": [ { "role": "comment_only", "value": { "alive": true, "content": [ "e204cc5a-6d54-4c9a-aec7-ba7dda2738bd", "3d6cf5ef-5339-4b1e-903d-0c9a5718d3e4", "c931de19-15e4-42af-8b25-4b91c23aa69e", "abd1ba9e-4450-481a-a596-e9a5883d0e23", "99c91818-8d5c-43f2-8c50-9120f31b066b", "7536037c-e9cb-4d60-bf6d-81db01f4ee20", "7b6188a1-2316-47fa-91cc-b5e98a64f65f", "37de4e0f-64d8-469d-ba64-1ed58bbab771" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_table": "notion_user", "created_time": 1551946003534, "format": { "page_full_width": true, "page_small_text": true }, "id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_table": "notion_user", "last_edited_time": 1551946003534, "parent_id": "dc371b5c-6e7e-4e07-8e13-217dd1e6172f", "parent_table": "block", "properties": { "title": [ [ "overflow scroll" ] ] }, "type": "page", "version": 7 } } ] } 15376 1574848011320 httpcache-v1 Method: POST URL: https://www.notion.so/api/v3/loadPageChunk Body:+152 { "chunkNumber": 0, "cursor": { "stack": [] }, "limit": 50, "pageId": "f136e223-a6e8-4428-bbf4-676ef4725a68", "verticalColumns": false } Response:+15135 { "cursor": { "stack": [] }, "recordMap": { "block": { "18bfe038-1096-49f4-8904-e71ca18d76ed": { "role": "comment_only", "value": { "alive": true, "content": [ "de69ce46-4a84-4664-a79d-a8437cc023a2", "6447d07e-7279-4ef8-ad31-221f6202a958", "b5232adc-60a1-4030-aec3-ac8cfbc40c48", "fd36e28b-d46e-4e85-8e33-850a518cc83b", "98e6c7b5-3bf2-46c6-80d3-c5cbd31a66f9", "7a254330-19a7-4078-9eb8-742c9947c27e", "46bc5a5e-461b-4eea-ae50-1350e8c216f2", "700875db-418d-425e-a5bc-4f233df26393", "8d4886c1-0f85-4e70-8001-4f310f6668e5", "a532af1c-38ff-4edb-a80f-44aaf3ccc3dc", "5fee4d4d-52a7-4aa3-a0a2-aa7058cc0bf7", "e012f912-2f98-4998-b135-e4e66d4b296a", "8af638cc-537f-4b8f-a653-1a9437d3ac91", "8ad0b607-0b54-4717-af1f-aaad85004d7a", "c50f71ff-6d61-466c-bcd2-5bf31d7f79f2", "07e243d7-106d-41a6-a70a-54841524dfcf", "ee0aa8d5-16eb-4655-a3e4-1207a664ff8d", "2f766cdc-785a-4190-bb87-fe34634ce93e", "cfe69373-b211-484f-859d-994b14c21101", "04974825-61b2-4800-ae16-c05860d63e8b", "bb456a6b-e908-423a-9f01-4100ba169355", "dbaba9a4-66d7-412d-b2c9-3da6f51a9e9a", "2abad8a1-f82a-4e33-b817-7e35df65d648", "6ef12527-82ae-43bf-9d83-10c7e4f6b679", "325b6445-db10-4fc3-a31f-291360b669b7", "b21963da-f818-4709-8a20-7a8c6cb88159", "b4240597-9643-4c68-b3a4-8effe38559fd", "c07e46a0-dbd4-4d9a-a7dd-7fa847bce4ff", "ae18c98c-9032-491b-b984-171019e74029", "0e939055-c399-4797-bc21-990b347dae23", "f6444ef0-3c83-43a3-86be-f9a075d908f4", "cf0362b8-6e4e-49e6-9095-491801d0527a", "7178e7e9-93fe-4d60-84de-29271ca4ed9a", "13d6e8c9-4b63-4f68-8735-21b9e3ecde18", "67afe3a0-9215-4b71-a6ba-01022485f703", "1fa750ee-637a-4507-a179-de35cff27ee6", "96ef4707-530e-46e7-83ea-44b8e0942142", "74c97777-0820-445d-a5e9-e83344297798", "e7833d5c-379d-420f-ba51-ef44b8c4115e", "cdcda7af-0abd-41da-9da0-df9cd78cd933", "3dad8422-9fd0-44f5-af2d-9f073bdf094d", "e9225abc-d9b1-4d31-a198-f7989d53b201", "569b653b-8c46-41da-bbb6-43badd1b8184", "846517a8-7e11-4dbb-95f6-d465b22654cc", "a0f52163-92b9-4ebe-a5a6-f92f4bbb23ec", "3a8f7033-0e15-46d4-8e08-f182fcbbd38d", "918e8e2b-d2af-4106-b9de-f23c6e6848ed", "dc371b5c-6e7e-4e07-8e13-217dd1e6172f", "17762587-a566-4b52-9f4c-d879db1cdfd7", "e3859e10-527d-4464-bb5e-abd159e7debe", "567b8685-62e7-458f-84e5-812f701291bf", "293d87a0-951a-45c1-b5e6-000a3f655a44", "bd677d4f-1848-4e73-b270-c999c66de3bf", "26e07e04-43ea-4728-be9d-d1023c1766cc", "26228db1-a709-45ac-b18b-f6da01bf2005", "cc368be6-de28-48b0-bff1-5d7584737114", "f8858db0-ce6b-4454-879a-c60a0cfbcd09", "f83a172b-4b07-4ba9-ac68-3ee297828b82" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_table": "notion_user", "created_time": 1551944923897, "format": { "page_full_width": true, "page_small_text": true }, "id": "18bfe038-1096-49f4-8904-e71ca18d76ed", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_table": "notion_user", "last_edited_time": 1570604820000, "parent_id": "ca9c0a7c-eb82-42d5-879f-ef8a96839b12", "parent_table": "block", "permissions": [ { "allow_search_engine_indexing": false, "role": "comment_only", "type": "public_permission" } ], "properties": { "title": [ [ "Essential CSS" ] ] }, "type": "page", "version": 100 } }, "37de4e0f-64d8-469d-ba64-1ed58bbab771": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003534, "id": "37de4e0f-64d8-469d-ba64-1ed58bbab771", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "title": [ [ "Most desktop browsers will display both horizontal and vertical scrollbars, whether or not any content is clipped. This can avoid problems with scrollbars appearing and disappearing in a dynamic environment. Printers may print overflowing content." ] ] }, "type": "text", "version": 1 } }, "3d6cf5ef-5339-4b1e-903d-0c9a5718d3e4": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003533, "id": "3d6cf5ef-5339-4b1e-903d-0c9a5718d3e4", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003533, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "language": [ [ "Plain Text" ] ], "title": [ [ "\u003cdiv\u003e\n This div is too small to display its contents to display the effects of the overflow property.\n\u003c/div\u003e" ] ] }, "type": "code", "version": 1 } }, "7536037c-e9cb-4d60-bf6d-81db01f4ee20": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003534, "id": "7536037c-e9cb-4d60-bf6d-81db01f4ee20", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "source": [ [ "/tmp/69224181-5b73-4bbb-b7f3-8ded3fd7567d/4e8c030606ae608fe4e13b33697951d91c330b19.png" ] ] }, "type": "image", "version": 1 } }, "7b6188a1-2316-47fa-91cc-b5e98a64f65f": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003534, "id": "7b6188a1-2316-47fa-91cc-b5e98a64f65f", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "title": [ [ "The content above is clipped in a 100px by 100px box, with scrolling available to view overflowing content." ] ] }, "type": "text", "version": 1 } }, "99c91818-8d5c-43f2-8c50-9120f31b066b": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003534, "id": "99c91818-8d5c-43f2-8c50-9120f31b066b", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "title": [ [ "Result", [ [ "b" ] ] ] ] }, "type": "text", "version": 1 } }, "abd1ba9e-4450-481a-a596-e9a5883d0e23": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003534, "id": "abd1ba9e-4450-481a-a596-e9a5883d0e23", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "language": [ [ "Plain Text" ] ], "title": [ [ "div {\n width:100px;\n height:100px;\n overflow:scroll;\n}" ] ] }, "type": "code", "version": 1 } }, "c931de19-15e4-42af-8b25-4b91c23aa69e": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_table": "notion_user", "created_time": 1551946003534, "id": "c931de19-15e4-42af-8b25-4b91c23aa69e", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_table": "notion_user", "last_edited_time": 1551946003534, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "title": [ [ "CSS", [ [ "b" ] ] ] ] }, "type": "text", "version": 5 } }, "dc371b5c-6e7e-4e07-8e13-217dd1e6172f": { "role": "comment_only", "value": { "alive": true, "content": [ "e4bc3b7d-5c46-4cb5-acf3-fbb678263825", "d221c32f-ad78-4859-a98e-a211a6770995", "f136e223-a6e8-4428-bbf4-676ef4725a68", "1bbd320c-21cf-4d39-a7bf-259f7e9ffaec", "b7d0c8c4-3db8-4394-ae8d-28d689693cc1", "f96fbd13-2083-4544-85f0-d9be6bdf20d0" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551945960000, "format": { "page_full_width": true, "page_small_text": true }, "id": "dc371b5c-6e7e-4e07-8e13-217dd1e6172f", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946200000, "parent_id": "18bfe038-1096-49f4-8904-e71ca18d76ed", "parent_table": "block", "permissions": [ { "role": "editor", "type": "user_permission", "user_id": "bb760e2d-d679-4b64-b2a9-03005b21870a" } ], "properties": { "title": [ [ "Overflow" ] ] }, "type": "page", "version": 24 } }, "e204cc5a-6d54-4c9a-aec7-ba7dda2738bd": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1551946003531, "id": "e204cc5a-6d54-4c9a-aec7-ba7dda2738bd", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1551946003531, "parent_id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "parent_table": "block", "properties": { "title": [ [ "HTML", [ [ "b" ] ] ] ] }, "type": "text", "version": 1 } }, "f136e223-a6e8-4428-bbf4-676ef4725a68": { "role": "comment_only", "value": { "alive": true, "content": [ "e204cc5a-6d54-4c9a-aec7-ba7dda2738bd", "3d6cf5ef-5339-4b1e-903d-0c9a5718d3e4", "c931de19-15e4-42af-8b25-4b91c23aa69e", "abd1ba9e-4450-481a-a596-e9a5883d0e23", "99c91818-8d5c-43f2-8c50-9120f31b066b", "7536037c-e9cb-4d60-bf6d-81db01f4ee20", "7b6188a1-2316-47fa-91cc-b5e98a64f65f", "37de4e0f-64d8-469d-ba64-1ed58bbab771" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_by_table": "notion_user", "created_time": 1551946003534, "format": { "page_full_width": true, "page_small_text": true }, "id": "f136e223-a6e8-4428-bbf4-676ef4725a68", "ignore_block_count": true, "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_by_table": "notion_user", "last_edited_time": 1551946003534, "parent_id": "dc371b5c-6e7e-4e07-8e13-217dd1e6172f", "parent_table": "block", "properties": { "title": [ [ "overflow scroll" ] ] }, "type": "page", "version": 7 } } }, "notion_user": { "bb760e2d-d679-4b64-b2a9-03005b21870a": { "role": "reader", "value": { "clipper_onboarding_completed": true, "email": "kkowalczyk@gmail.com", "family_name": "Kowalczyk", "given_name": "Krzysztof", "id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "mobile_onboarding_completed": true, "onboarding_completed": true, "profile_photo": "https://s3-us-west-2.amazonaws.com/public.notion-static.com/2dcaa66c-7674-4ff6-9924-601785b63561/head-bw-640x960.png", "version": 231 } } }, "space": {} } }
{ "pile_set_name": "Github" }
--- abstract: 'Noisy measurements of a physical unclonable function (PUF) are used to store secret keys with reliability, security, privacy, and complexity constraints. A new set of low-complexity and orthogonal transforms with no multiplication is proposed to obtain bit-error probability results significantly better than all methods previously proposed for key binding with PUFs. The uniqueness and security performance of a transform selected from the proposed set is shown to be close to optimal. An error-correction code with a low-complexity decoder and a high code rate is shown to provide a block-error probability significantly smaller than provided by previously proposed codes with the same or smaller code rates.' address: | Information Theory and Applications Chair, Technische Universität Berlin\ {[guenlue]{}, [rafael.schaefer]{}}[@tu-berlin.de]{}\ bibliography: - 'references.bib' title: | LOW-COMPLEXITY AND RELIABLE TRANSFORMS FOR\ PHYSICAL UNCLONABLE FUNCTIONS --- physical unclonable function (PUF), no multiplication transforms, secret key agreement, low complexity. Introduction ============ Biometric identifiers such as fingerprints are useful to authenticate a user. Similarly, secret keys are traditionally stored in non-volatile memories (NVMs) to authenticate a physical device that contains the key. NVMs require hardware protection even when the device is turned off since an attacker can try to obtain the key at any time. A safe and cheap alternative to storing keys in NVMs is to use physical identifiers, e.g., fine variations of ring oscillator (RO) outputs, as a randomness source. Since invasive attacks to physical identifiers permanently change the identifier output, there is no need for continuous hardware protection for physical identifiers [@pufintheory]. Physical unclonable functions (PUFs) are physical identifiers with reliable and high-entropy outputs [@GassendThesis; @PappuThesis]. PUF outputs are unique to each device, so they are used for safe and low-complexity key storage in digital devices. These keys can be used for private authentication, secure computation, and encryption. Replacing such identifiers is expensive, so key-storage methods should limit the information the public data leak about the identifier outputs. Moreover, the same device should be able to reconstruct a secret key generated from the noiseless outputs by using the noisy outputs and public information. The ultimate secret-key vs. privacy-leakage rate tradeoffs are given in [@IgnaTrans; @LaiTrans; @benimdissertation]. The secret-key and privacy-leakage rate limits for a suboptimal chosen-secret (CS) model called *fuzzy commitment scheme* (FCS) [@FuzzyCommitment] are given in [@IgnatenkoFuzzy]. We consider the FCS to compare different post-processing methods applied to PUFs. Asymptotically optimal CS model constructions are given in [@bizimWZ] and similar comparison results can be obtained by using these constructions. Physical identifier outputs are highly correlated and noisy, which are the two main problems in using PUFs. If errors in the extracted sequences are not corrected, PUF reliability would be low. If correlations are not eliminated, machine learning algorithms can model the PUF outputs [@MLPUF]. To solve the two problems, the discrete cosine transform (DCT) is used in [@bizimtemperature] to generate a uniformly-distributed bit sequence from PUFs under varying environmental conditions. Similarly, the discrete Walsh-Hadamard transform (DWHT), discrete Haar transform (DHT), and Karhunen-Loève transform (KLT) are compared in [@bizimMDPI] in terms of the maximum secret-key length, decorrelation efficiency, reliability, security, and hardware cost. The DCT, DWHT, and DHT provide good reliability and security results, and a hardware implementation of the DWHT in [@bizimMDPI] shows that the DWHT requires a substantially smaller hardware area than other transforms. There are two main reasons why the DWHT can be implemented efficiently. Firstly, the matrix that represents the DWHT has elements $1$ or $-1$, so there is no matrix multiplication. Secondly, an input-selection algorithm that is an extension of the algorithm in [@InputSelection] allows to calculate two-dimensional (2D) DWHT recursively. Based on these observations, we propose a new set of transforms that preserve these properties and that significantly improve the reliability of the sequences extracted from PUFs. The FCS requires error-correction codes (ECCs) to achieve the realistic block-error probability of $\displaystyle P_\text{B}\!=\!10^{-9}$ for RO PUFs. The ECCs proposed in [@bizimMDPI] have better secret-key and privacy-leakage rates than previously proposed codes, but in some cases it is assumed that if multiple bits are extracted from each transform coefficient, each bit is affected by independent errors. This assumption is not valid in general. Thus, we extract only one bit from each transform coefficient. The contributions of this work are as follows. - We propose a new set of 2D orthogonal transforms that have low-complexity hardware implementations and no matrix multiplications. The new set of transforms are shown to provide an average bit-error probability smaller than the most reliable transform considered in the PUF literature, i.e., DCT. - Bit sequences extracted using a transform selected from the new set of transforms are shown to give good uniqueness and security results that are comparable to state-of-the-art results. - We propose a joint transform-quantizer-code design method for the new set of transforms in combination with the FCS to achieve a block-error probability substantially smaller than the common value of $10^{-9}$ with perfect secrecy. This paper is organized as follows. In Section \[sec:fuzzycommitment\], we review the FCS. The transform-coding algorithm to extract secure sequences from RO PUFs is explained in Section \[sec:commonsteps\]. A new set of orthogonal transforms that require a small hardware area and that result in bit-error probabilities smaller than previously considered transforms is proposed in Section \[sec:neworth\]. In Section \[sec:comparisons\], we compare the new transforms with previous methods and show that the proposed ECC provides a block-error probability for the new selected transform (ST) that is smaller than for previously considered transforms. Review of the Fuzzy Commitment Scheme {#sec:fuzzycommitment} ===================================== Fig. \[fig:fuzzycommitment\] shows the FCS, where an encoder ${\mathsf{Enc}}(\cdot)$ adds a codeword $\displaystyle C^N$, uniformly distributed over a set with cardinality $|\mathcal{S}|$, modulo-2 to the binary noiseless PUF-output sequence $\displaystyle X^N$ during enrollment. We show in Section \[sec:commonsteps\] that the sequence $X^N$ and its noisy version $Y^N$ can be obtained by applying the post-processing steps in Fig. \[fig:postprocessing\] to RO outputs $\widetilde{X}^L$ and its noisy version $\widetilde{Y}^L$, respectively. The sum $\displaystyle W^N=C^N{\mathbin{\oplus}}X^N$ is publicly sent through a noiseless and authenticated channel, and it is called *helper data*. The modulo-2 sum of $W^N$ and the noisy PUF-output sequence $Y^N =X^N {\mathbin{\oplus}}E^N$, where $E^N$ is the binary error vector, gives the noisy codeword $\displaystyle C^N{\mathbin{\oplus}}E^N$. Using the noisy codeword, a channel decoder $\displaystyle {\mathsf{Dec}}(\cdot)$ estimates the secret key $S$ during reconstruction. A reliable secret-key agreement is possible by using $X^N$, $Y^N$, and $W^N$ [@AhlswedeCsiz; @Maurer]. One can achieve a (secret-key, privacy-leakage) rate pair $ (R_\text{s}\text{,}R_\ell)$ using the FCS with perfect secrecy if, given any $\epsilon\!>\!0$, there is some $N\!\geq\!1$, and an encoder and a decoder for which $\displaystyle R_\text{s}=\frac{\log|\mathcal{S}|}{N}$ and $$\begin{aligned} {2} &\Pr[S\ne\hat{S}] \leq \epsilon && (\text{reliability}) \label{eq:reliabilityconst}\\ &I\big(S;W^N\big)\!=\!0 && (\text{perfect secrecy})\label{eq:secrecyconst}\\ &\frac{1}{N}I\big(X^N;W^N\big) \leq R_\ell+\epsilon. \quad\quad\quad&&(\text{privacy}) \label{eq:privacyconst}\end{aligned}$$ Condition (\[eq:secrecyconst\]) ensures that the public side information $W^N$ does not leak any information about the secret key, so one achieves perfect secrecy. The normalized information that $W^N$ leaks about the PUF output sequence $X^N$ is considered in (\[eq:privacyconst\]). If one should asymptotically limit the unnormalized privacy leakage $I(X^N;W^N)$, private keys available during enrollment and reconstruction are necessary [@IgnaTrans], which is not realistic or practical; see the discussions in [@bizimWZ]. Suppose the measurement channel $P_{Y|X}$ is a binary symmetric channel (BSC) with crossover probability $p$, and $X$ is independent and identically distributed (i.i.d.) according to a uniform distribution. Define $\displaystyle H_b(p)\!=\!-p\log p-(1\!-p)\log(1\!-p)$ as the binary entropy function. The region $\displaystyle \mathcal{R}$ of all achievable (secret-key, privacy-leakage) rate pairs for the FCS with perfect secrecy is [@IgnatenkoFuzzy] $$\begin{aligned} \mathcal{R}\! =\! \big\{ \left(R_\text{s},R_\ell\right)\!\colon\!\quad 0\leq R_\text{s}\leq 1-H_b(p),\quad R_\ell\geq 1\!-\!R_\text{s} \big\}.\label{eq:ls0}\end{aligned}$$ We plot this region in Section \[sec:comparisons\] to evaluate the secret-key and privacy-leakage rates achieved by the proposed ECC. The FCS is a particular realization of the CS model. The region $\mathcal{R}_{\text{cs}}$ of all achievable (secret-key, privacy-leakage) rate pairs for the CS model, where a generic encoder is used to confidentially transmit an embedded secret key to a decoder that observes $Y^N$ and the helper data $W^N$, is given in [@IgnaTrans; @LaiTrans] as the union over all $P_{U|X}$ of the set of achievable rate pairs $\left(R_\text{s},R_\ell\right)$ such that $$\begin{aligned} \Big\{0\leq R_\text{s}\leq I(U;Y),\qquad R_\ell\geq I(U;X)-I(U;Y)\!\Big\}\label{eq:chosensecret}\end{aligned}$$ where $P_X$ is the probability distribution of $X$ and the alphabet $\mathcal{U}$ of the auxiliary random variable $U$ can be limited to have the size $\displaystyle |\mathcal{U}|\!\leq\!|\mathcal{X}|+1$ as $U-X-Y$ forms a Markov chain. The FCS achieves a boundary point of $\mathcal{R}_{\text{cs}}$ for a BSC $P_{Y|X}$ only at the point $\displaystyle (R_\text{s}^*,R_\ell^*)\!=\!(1\!-\!H_b(p),H_b(p))$. To achieve the other points on the rate-region boundary, one should use a nested code construction as in [@bizimWZ] or a binning based construction as in [@MatthieuPolar], both of which require careful polar code [@Arikan] designs. This is not necessary to illustrate the gains from the new set of transforms and it suffices to combine the new set with the FCS. Post-processing Steps {#sec:commonsteps} ===================== We consider a 2D array of $r\!\times\!c$ ROs. Denote the continuous-valued outputs of $L\!=\!r\!\times\!c$ ROs as the vector random variable $\widetilde{X}^L$, distributed according to $\displaystyle f_{\widetilde{X}^L}$. Suppose that the noise component $\widetilde{E}_j$ on the $j$-th RO output is Gaussian distributed with zero mean for all $j=1,2,\ldots,L$ and that the noise components are mutually independent. Denote the noisy RO outputs as $\widetilde{Y}^L\!=\!\widetilde{X}^L\!+\!\widetilde{E}^L$. We extract binary vectors $X^N$ and $Y^N$ from $\widetilde{X}^L$ and $\widetilde{Y}^L$, respectively, and define binary error variables $\displaystyle E_i\!=\!X_i{\mathbin{\oplus}}Y_i$ for $i\!=\!1,2,\ldots,N$. ![The transform-coding steps.[]{data-label="fig:postprocessing"}](./Transformcodingmodel.eps){width="48.50050%" height="0.5005\textheight"} The post-processing steps used during the enrollment (and reconstruction) to extract a bit sequence $X^N$ (and its noisy version $Y^N$) are depicted in Fig. \[fig:postprocessing\]. These steps are transformation, histogram equalization, quantization, Gray mapping, and concatenation. Since RO outputs $\widetilde{X}^L$ are correlated, we apply a transform $\emph{T}_{r\!\times\!c}(\cdot)$ for decorrelation. We model all transform coefficients and noise components as random variables with Gaussian marginal distributions. A transform-coefficient output $T$ that comes from a distribution with mean $\mu\neq 0$ and variance $\sigma^2\neq 1$ is converted into a standard Gaussian random variable during histogram equalization, which reduces the hardware area when multiple bits are extracted. Independent bits can be extracted from transform coefficients by setting the quantization boundaries of a $K$-bit quantizer to $$\label{eq:quantsteps} b_k=Q^{-1}\left(1-\dfrac{k}{2^K}\right) \text{ for } k=0,1,\dots,2^K$$ where $Q(\cdot)$ is the $Q$-function. Quantizing a coefficient $\hat{T}$ to $k$ if $\displaystyle b_{k-1}\!<\!\hat{T}\!\leq\!b_k$ ensures that $X^N$ is uniformly distributed, which is necessary to achieve the rate point where the FCS is optimal. One can use scalar quantizers without a performance loss in security if the RO output statistics satisfy certain constraints [@benimdissertation]. We do not use the first transform coefficient, i.e., DC coefficient, for bit extraction since it corresponds to the average over the RO array, known by an attacker [@benimdissertation]. Furthermore, Gray mapping ensures that the neighboring quantization intervals result in only one bit flip. This is a good choice as the noise components $E_i$ for all $i=1,2,\ldots,N$ have zero mean. The sequences extracted from transform coefficients are concatenated to obtain the sequence $X^N$ (or $Y^N$). New Orthogonal Transforms {#sec:neworth} ========================= A useful metric to measure the complexity of a transform is the number of operations required for computations. Consider only RO arrays of sizes $ r\!=\!c\!=\!8$ and $16$, which are powers of 2, so fast algorithms are available. In [@benimdissertation], the DWHT is suggested as the best candidate among the set of transforms {DCT, DHT, KLT, DWHT} for RO PUF applications with a low-complexity constraint such as internet of things (IoT) applications. In [@bizimMDPI], we extend an input-selection algorithm to compute the 2D $16\times16$ DWHT by applying a $2\times2$ matrix operation recursively to illustrate that the DWHT requires a small hardware area in a field programmable gate array (FPGA) since it does not require any multiplications. Following this observation, we propose a set of transforms that are orthogonal (to decorrelate the RO outputs better), that have matrix elements $1$ or $-1$ (to eliminate multiplications), and that have size of $16\times16$ (to apply the input-selection algorithm given in [@bizimMDPI] to further reduce complexity). We show in the next section that these transforms provide higher reliability than other transforms previously considered in the literature. Orthogonal Transform Construction and Selection {#subsec:orthtransselection} ----------------------------------------------- Consider an orthogonal matrix $A$ with elements $1$ or $-1$ and of size $k\times k$, i.e., $AA^{T}= I$, where $T$ is the matrix transpose and $I$ is the identity matrix of size $k\times k$. It is straightforward to show that the following matrices are also orthogonal: $$\begin{aligned} &\Biggl[ \begin{matrix} A&A\\ A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&A\\ \!-\!A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&\!-\!A\\ A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&A\\ A&A \end{matrix} \Biggr],\nonumber\\ \Biggl[ &\begin{matrix} \!-\!A&\!-\!A\\\! -\!A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&\!-\!A\\ A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&A\\ \!-\!A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&\!-\!A\\ \!-\!A&\!-\!A \end{matrix} \Biggr].\label{eq1}\end{aligned}$$ Since $2^{k^2}$ possible matrices should be checked for orthogonality, we choose $k\!=\!4$ to keep the complexity of the exhaustive search for orthogonal matrices low. The result of the exhaustive search is a set of orthogonal matrices $A$ of size $4\!\times\! 4$. By applying the matrix construction methods in (\[eq1\]) twice consecutively, we obtain $12288$ unique orthogonal transforms of size $16\!\times\! 16$ with elements $1$ or $\displaystyle -1$. We apply these orthogonal transforms, one of which is the DWHT, to an RO dataset to select the orthogonal transform whose maximum bit-error probability over the transform coefficients is minimum. This selection method provides reliability guarantees to every transform coefficient. An ECC that has a higher code dimension than it is achievable according to the Gilbert-Varshamov (GV) bound [@GilbertGV; @varshamovGV] for the maximum error probability over the transform coefficients of the ST, is given in Section \[subsec:codeselection\]. This illustrates that our selection method is conservative and the block-error probability is substantially smaller than $10^{-9}$. There are also other orthogonal transforms of size $16\times 16$ but we illustrate in the next section that the new set suffices to significantly increase the reliability of the extracted bits as compared to previously considered transforms and previous RO PUF methods. Performance Evaluations {#sec:comparisons} ======================= We use RO arrays of size $16\!\times \!16$ from the RO dataset in [@ROPUF] and apply the transform-coding steps in Fig. \[fig:postprocessing\] to compare the previously considered transforms with the new set of transforms in terms of their reliability, uniqueness, and security. We illustrate that a Bose-Chaudhuri-Hocquenghem (BCH) code can be used for error correction in combination with the FCS to achieve a block-error probability smaller than the common value of $10^{-9}$. Transform Comparisons --------------------- We compare the orthogonal transform selected from the new set, i.e., the ST, with the DCT and DWHT in terms of the bit-error probabilities of the $255$ transform coefficients obtained from the RO dataset in [@ROPUF]. Fig. \[fig:BERComparisonsofTrans\] illustrates the bit-error probabilities of the DCT, DWHT, and the ST. The mean of the ST is smaller than the means of the DCT and DWHT. Furthermore, the maximum bit-error probability of the DCT and ST are almost equal and are less than the maximum error probability of the DWHT. Most importantly, the ST has a large set of transform coefficients with bit-error probabilities close to zero, so an ECC design for the maximum or mean bit-error probability of the ST would give pessimistic rate results. We propose in the next section an ECC for the ST to achieve a smaller block-error probability than the block-error probability for the DCT. Uniqueness and Security {#subsec:uniqueness} ----------------------- A common measure to check the randomness of a bit sequence is uniqueness, i.e., the average fractional Hamming distance (HD) between the sequences extracted from different RO PUFs [@bizimpaper]. The rate region in (\[eq:ls0\]) is valid if the extracted bit sequences are uniformly distributed, making the uniqueness a valid measure for the FCS. Uniqueness results for the DCT, DWHT, KLT, and DHT have a mean HD of $0.5000$ and HD variances of approximately $\displaystyle 7\!\times \!10^{-4}$ [@bizimMDPI], which are close to optimal and better than previous RO PUF results. For the ST, we obtain a mean HD of $0.5001$ and a HD variance of $\displaystyle 2.69\!\times \!10^{-2}$. This suggests that the ST has good average uniqueness performance, but there might be a small set of RO PUFs from which slightly biased bit sequences are extracted. The latter can be avoided during manufacturing by considering uniqueness as a parameter in yield analysis of the chip that embodies the PUF. We apply the national institute of standards and technology (NIST) randomness tests [@NIST] to check whether there is a detectable deviation from the uniform distribution in the sequences extracted by using the ST. The bit sequences generated with the ST pass most of the randomness tests, which is considered to be an acceptable result [@NIST]. A correlation thresholding approach in [@bizimtemperature] further improves security. Code Selection {#subsec:codeselection} -------------- Consider the scenario where secret keys are used as an input to the advanced encryption standard (AES), a symmetric-key cryptosystem, with a key size of $128$ bits, so the code dimension of the ECC should be at least $128$ bits. The maximum error probability over the transform coefficients of the ST is $p_{\text{max}}=0.0149$, as shown in Fig. \[fig:BERComparisonsofTrans\]. Furthermore, assume that we use an ECC with a bounded minimum distance decoder (BMDD) to keep the complexity low. A BMDD can correct all error patterns with up to $\lfloor\frac{d_{\text{min}-1}}{2}\rfloor$ errors, where $d_{\text{min}}$ is the minimum distance of the code. It is straightforward to show that the ECC should have at least a minimum distance of $d_{\text{min}}=41$ to achieve a block-error probability of $P_\text{B}\leq 10^{-9}$ if all transform coefficients are assumed to have a bit-error probability of $p_{\text{max}}$. None of binary BCH and Reed-Solomon (RS) codes, which have good minimum-distance properties, can satisfy these parameters. Similarly, the GV bound computed for $p_{\text{max}}$ shows that there exists a linear binary ECC with code dimension $98$. Consider the binary BCH code with the block length $255$, code dimension $131$ that is greater than the code dimension of $98$ given by the GV bound, and minimum distance $\displaystyle d_{\text{min,BCH}}=37$ that is close to the required value of $d_{\text{min}}=41$. We illustrate in the next section that this BCH code provides a block-error probability significantly smaller than $10^{-9}$. Reliability, Privacy, and Secrecy Analysis of the Code ------------------------------------------------------ We now show that the proposed ECC satisfies the block-error probability constraint. The block-error probability $P_\text{B}$ for the $\text{BCH}(255,131,37)$ code with a BMDD is equal to the probability of having more than $18$ errors in the codeword, i.e., we have $$\begin{aligned} P_\text{B} = \sum_{j=19}^{255}\Bigg[\sum_{\mathcal{D}\in\mathcal{F}_j}\prod_{i\in \mathcal{D}}p_{i}\,\bigcdot\prod_{i\in \mathcal{D}^{c}}(1-p_{i}) \Bigg] \label{eq:blockerrorforbch}\end{aligned}$$ where $p_{i}\leq p_{\text{max}}$ is the bit-error probability of the $i$-th transform coefficient, as in Fig. \[fig:BERComparisonsofTrans\], for $i\!=\!2,3,\ldots,256$, $\displaystyle \mathcal{F}_j$ is the set of all size-$j$ subsets of the set $\displaystyle\{2,3,\ldots,256\}$, and $\mathcal{D}^{c}$ denotes the complement of the set $\mathcal{D}$. The bit-error probabilities $p_{i}$ represent probabilities of independent events due to the mutual independence assumption for transform coefficients and one-bit quantizers used. The evaluation of (\[eq:blockerrorforbch\]) requires $ \sum_{j=0}^{18}{255\choose j}\approx 1.90\!\times\!10^{27}$ different calculations, which is not practical. We therefore apply the discrete Fourier transform - characteristic function (DFT-CF) method [@DFTCF] to (\[eq:blockerrorforbch\]) and obtain the result $P_\text{B}\!\approx\!2.860\!\times\!10^{-12}\!<\!10^{-9}$. This value is smaller than the block-error probabilitiy $P_{\text{B,DCT}}= 1.26\times 10^{-11}$ obtained in [@benimdissertation] for the DCT with the same code. The block-error probability constraint is thus satisfied by using the $\text{BCH}$ code although the conservative analysis suggests otherwise. The rate regions given in (\[eq:ls0\]) and (\[eq:chosensecret\]) are asymptotic results, i.e., they assume $N\rightarrow \infty$. Since separate channel and secrecy coding is optimal for the FCS, we can use the finite length bounds for a BSC $P_{Y|X}$ with crossover probability $p\!=\! \frac{1}{L-1}\sum_{i=2}^Lp_{i}\!\approx\!0.0088$, i.e., the error probability averaged over all used coefficients. In [@benimdissertation], we show that the $\text{BCH}(255,131,37)$ code achieves $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})\approx(0.514,\,0.486)$ bits/source-bit, significantly better than previously proposed codes in the RO PUF literature, so it suffices to compare the proposed code with the best possible finite-length results for the FCS. We use Mrs. Gerber’s lemma [@WZ], giving the optimal auxiliary random variable $U$ in (\[eq:chosensecret\]), to compute all points in the region $\mathcal{R}_{\text{cs}}$. We plot all achievable rate pairs, the (secret-key, privacy-leakage) rate pair of the proposed BCH code, and a finite-length bound for the block length of $N=255$ bits and $P_\text{B}\!=\!10^{-9}$ in Fig. \[fig:ratecomparison\]. The maximum secret-key rate is $R_\text{s}^*\!\approx\!0.9268$ bits/source-bit with a corresponding minimum privacy-leakage rate of $R_\ell^*\!\approx\!0.0732$ bits/source-bit. The gap between the points $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})$ and $(R_{\text{s}}^*,R_\ell^*)$ can be partially explained by the short block length of the code and the small block-error probability. The finite-length bound given in [@Polyanskiy Theorem 52] shows that the rate pair $(R_\text{s},R_\ell)\!=\!(0.7029,0.2971)$ bits/source-bit is achievable by using the FCS, as depicted in Fig. \[fig:ratecomparison\]. One can thus improve the rate pairs by using better codes and decoders with higher hardware complexity, which is undesirable for IoT applications. Fig. \[fig:ratecomparison\] also illustrates the fact that there are operation points of the region $\mathcal{R}_{\text{cs}}$ that cannot be achieved by using the FCS and, e.g., a nested polar code construction from [@bizimWZ] should be used to achieve all points in $\mathcal{R}_{\text{cs}}$. Conclusion {#sec:conclusion} ========== We proposed a new set of transforms that are orthogonal (so that the decorrelation efficiency is high), that have elements $1$ or $-1$ (so that the hardware complexity is low), and that have a size of $k\times k$ where $k$ is a power of 2 (so that an input-selection algorithm can be applied to further decrease complexity). By using one-bit uniform quantizers for each transform coefficient obtained by applying the ST, we obtained bit-error probabilities that are on average smaller than the bit-error probabilities obtained from previously considered transforms. We proposed a BCH code as the ECC for RO PUFs in combination with the FCS. This code achieves the best rate pair in the RO PUF literature and it gives a block-error probability for the ST that is substantially smaller than for the DCT. We illustrated that the FCS cannot achieve all possible rate points. In future work, in combination with the new set of transforms, we will apply a joint vector quantization and error correction method by using nested polar codes to achieve rate pairs that cannot be achieved by the FCS.
{ "pile_set_name": "ArXiv" }
Olivier Kamanda Olivier Kamanda is the Director of Learning and Impact Strategy at the John S. and James L. Knight Foundation. He is a former Presidential Innovation Fellow and previously served as speechwriter and senior advisor to Secretary of State Hillary Clinton. Education He obtained a bachelor of science degree from Princeton University in 2003 and his Juris Doctor from the University of Pennsylvania Law School in 2009. It was during his third year at Penn Law that he founded the Foreign Policy Digest. Also while in law school, he was executive editor of the school's Journal of International Law and a columnist for The Huffington Post. Career He is the founding editor-in-chief of Foreign Policy Digest. Kamanda is a former Trustee of Princeton University and a fellow with the Truman National Security Project. Kamanda was president of the Montgomery County Young Democrats from 2004 to 2006. Since 2010, he has been an associate lawyer at White & Case in Washington, D.C. In 2011, Kamanda was named one of Washington, D.C.'s "Most Influential Leaders Under 40" by Washington Life Magazine. References External links Olivier Kamanda's blog on the Huffington Post Foreign Policy Digest website Penn Current Student Spotlight White & Case bio Category:American activists Category:American columnists Category:Living people Category:People from Chevy Chase, Maryland Category:African-American people Category:Princeton University alumni Category:University of Pennsylvania Law School alumni Category:Year of birth missing (living people)
{ "pile_set_name": "Wikipedia (en)" }
Beta-endorphin decreases fatigue and increases glucose uptake independently in normal and dystrophic mice. beta-Endorphin and a C-terminal analogue have been shown to decrease muscle fatigue and increase glucose uptake in muscles of normal mice. In order to provide evidence whether these peptides might be useful in muscle-wasting conditions and whether the two actions of the peptides are interdependent, the effect of beta-endorphin on muscle fatigue and glucose uptake was studied using isolated hemidiaphragm preparations of dystrophic mice as well as normal mice. Muscle contractions were elicited by high-frequency stimulation of the phrenic nerve. Glucose uptake was measured using (nonmetabolizable) 2-deoxy-D-[1-(3)H]glucose. beta-Endorphin and the C-terminal analogue reduced fatigue in normal muscles of males but not females. Insulin had no effect in either sex. The peptides increased 2-deoxyglucose uptake in contracting and noncontracting muscles of normal males and females. beta-Endorphin reduced fatigue and increased deoxyglucose uptake in dystrophic muscles. The effect on fatigue was not due to increased glucose uptake, as the energy substrate present was pyruvate. Nerve stimulation released beta-endorphin immunoreactivity from intramuscular nerves of dystrophic mice. It is hypothesized that beta-endorphin released from motor nerves as well as from the pituitary could be responsible for improving muscle function during exercise. beta-Endorphin or analogues could have therapeutic use in muscle-wasting disease.
{ "pile_set_name": "PubMed Abstracts" }
204 Va. 316 (1963) LESTER POLLARD v. ELIZABETH SMITH POLLARD. Record No. 5548. Supreme Court of Virginia. April 22, 1963. William Davis Butts, on brief for the appellant. Present, All the Justices. Lester Pollard's bill for divorce on the ground of wilful desertion by his wife Elizabeth Pollard was dismissed because it was shown that she became and was adjudged insane after the date of the alleged desertion. The evidence showed the desertion without cause on January 28, 1947; the adjudication of insanity on February 26, 1947; and that defendant had given no indication of insanity prior to the date of the desertion. On this evidence it was error to refuse the divorce. Code 1950, section 20-93, changes the prior rule of the cases in such situations and expressly states that insanity so occurring is no defense to a bill for divorce by the deserted spouse. Appeal from a decree of the Circuit Court of the city of Hampton. Hon. Frnk A. Kearney, judge presiding. The opinion states the case. William Alfred Smith, on brief for the appellee. Case submitted on briefs. CARRICO CARRICO, J., delivered the opinion of the court. In this divorce case we are, for the first time, presented the question of the application of Code, | 20-93, the pertinent provisions of which are as follows: "Insanity of guilty party after commencement of desertion no defense. -- When the suit is for divorce from the bond of matrimony for wilful desertion or abandonment, it shall be no defense that the *317 guilty party has, since the commencement of such desertion, and within one year thereafter, become and has been adjudged insane, but at the expiration of one year from the commencement of such desertion the ground for divorce shall be deemed to be complete. . . ." The question here presented arises from an appeal granted Lester Pollard, the complainant, from a final decree dismissing his bill of complaint for divorce, alleging wilful desertion and abandonment, filed against Elizabeth Smith Pollard, the defendant. The bill was dismissed because it was shown that the defendant had been adjudged insane subsequent to the date of the alleged desertion and prior to the expiration of one year from such date. The bill alleged, and the evidence showed, that the Pollards were married on April 19, 1941; that they lived together for six years, during which time the complainant was a dutiful husband; that the defendant deserted the complainant on January 28, 1947, without just cause or excuse; that the desertion had continued uninterrupted since that date; that on February 26, 1947, the defendant was adjudged insane and was committed to Central State Hospital at Petersburg, where she was still confined when the case was heard. The evidence further showed that the defendant displayed no signs of mental illness at the time she left the complainant on January 28, 1947. Prior to the enactment, in 1926, of what is now Code, | 20-93, it was the law in this state that when a defendant in a divorce case became and was adjudged insane between the date of desertion and the running of the statutory period prescribed to make the ground for divorce complete, such insanity was a bar to the granting of a divorce. We had so held in Wright Wright, 125 Va. 526, 99 S.E. 515, decided June 12, 1919, where it was stated that the reason for the rule was that, "an insane person is incapable of forming the intent, either to continue the desertion or to seek a reconciliation." 125 Va., at pp. 528, 529. In the Wright case, Judge Prentis conceded that the rule there enunciated would, in some cases, cause undue hardship. He said, however, that, "[if] there be hardship, the question is one of public policy for the consideration of the General Assembly." 125 Va., at p. 529. The legislature, perhaps motivated by the cases of hardship pointed to by Judge Prentis but, in any event, in sound consideration of public policy, saw fit to change the rule adopted in the Wright case. In *318 clear and unambiguous language it provided that insanity, occurring between the commencement of desertion and the running of the statutory period, is not a bar to divorce for wilful desertion or abandonment. A defense based upon such insanity, previously provided by judicial rule was, by legislative rule, declared no longer to exist. Now, when desertion occurs and continues uninterrupted for one year the ground of divorce is complete, notwithstanding that the defendant meanwhile has become and has been adjudged insane. It is the duty of the courts to recognize and give effect to such a legislative rule. In the case before us, the evidence was sufficient to sustain the complainant's ground for divorce, and it was error to refuse him a decree because the defendant became and was adjudged insane in the one-year period following the desertion. Accordingly, the decree will be reversed and the cause remanded with direction to enter a decree awarding the complainant a divorce from the defendant for wilful desertion and abandonment for more than one year. Reversed and remanded.
{ "pile_set_name": "FreeLaw" }
Downloading a file regularly - how hard can it be? - joeyespo https://adblockplus.org/blog/downloading-a-file-regularly-how-hard-can-it-be ====== sophacles A common solution to this problem, is to make a 2 stage process, where step 1 is a request of "should I download?", where there are 2 possible replies: "no, check again in N time" and "yes, here is a token". Step 2 is then presenting the token to the api point for download, and getting the file. On the server side, you don't even need specific instance tracking, just a simple decision based on current resource usage, and a list of valid tokens (optionally, they can expire in some short time to avoid other thundering herd type issues). Say, you set a max number of file transfers, or bandwidth or whatever metric makes sense to you, and you simply reply based on that metric. Further, you can smooth out your load with a bit of intelligence on setting N. Even better, you get a cool side-effect: since the check isn't so resource intensive, you can set the time between checks lower, and make the updates less regular. Now that I think of it: it seems that this would be a nice nginx plugin, with a simple client side library to handle it for reference. Anyone want to collaborate on this over the weekend? Should be relatively straight-forward. ~~~ masklinn > A common solution to this problem, is to make a 2 stage process, where step > 1 is a request of "should I download?", where there are 2 possible replies: > "no, check again in N time" and "yes, here is a token". Step 2 is then > presenting the token to the api point for download, and getting the file. You don't even need two steps, just have one step with previously known data. That's how HTTP conditional requests (Last-Modified/If-Modified-Since and ETag/If-None-Match) work: the client states "I want this file, I already have one from such moment with such metadata", and the server replies either "you're good" (304) or "here's your file (200). Issue is, that only works when the file changes rarely enough, or you need additional server logic to reply that the file is still good when it's not. > Now that I think of it: it seems that this would be a nice nginx plugin, > with a simple client side library to handle it for reference. Anyone want to > collaborate on this over the weekend? I'd be _very_ surprised if nginx didn't support conditional requests already. edit: according to [0] and [1] — which may be outdated — Nginx provides built- in support for last-modified on static files, it does not provide ETag support (the developer believes this is not useful for static files — which is usually correct[2]) but [1] has apparently written a module to do so [3]. The module being 4 years old, it might be way out of date. [0] [http://serverfault.com/questions/211637/what-headers-to- add-...](http://serverfault.com/questions/211637/what-headers-to-add-for-most- efficient-file-caching) [1] [https://mikewest.org/2008/11/generating-etags-for-static- con...](https://mikewest.org/2008/11/generating-etags-for-static-content- using-nginx) [2] There are two situations in which it is not (keep in mind that this is for _static_ content, dynamic is very different): if somebody willfully touches a file, it will change its Last-Modified but not its checksum, triggering a new send without ETag but not with it; and ETags can be coherent across servers (even in CDNs), the chances of last-modified being exactly the same on all your servers is far smaller. On the other hand, no etag is better than a shitty etag, and both Apache and IIS generate dreadful etags — which may hinder more than help — by default. [3] <https://github.com/mikewest/nginx-static-etags/> ~~~ sophacles Yes, this work for cache updating, and it is fantastic for that purpose. It does not solve the actual stated problem, which is that periodic checks in an attempt to smooth server loading away from peaks don't usually drift towards extremely bursty behavior. When the file does change, you still get a large number of clients trying to download the new content all at once. The solution I was suggesting is similar to what you are talking about, but also has the feature of smoothing the load curves. _Issue is, that only works when the file changes rarely enough, or you need additional server logic to reply that the file is still good when it's not._ My algorithm is that logic -- albeit implemented with client side collusion rather than pure server side trickery (this allows better control should the client ignore the etags). ~~~ masklinn > The solution I was suggesting is similar to what you are talking about, but > also has the feature of smoothing the load curves. It has no more feature of smoothing the load curve than using Cache-Control with the right max-age. > My algorithm is that logic It is no more that logic than doing what I outlined with proprietary behaviors. > this allows better control should the client ignore the etags by making the whole client use a custom communication channel? I'd expect ensuring the client correctly speaks HTTP would be easier than implementing a custom client from scratch. ~~~ sophacles You still seem to be missing the point. Cache-Control as implemented commonly, and by your description, will instantly serve every request the new file as soon as a new file is available. It takes into account exactly one variable: file age. The algorithm I describe takes into account variables which affect current system loading, and returns a "no, try again later", even when the file is actually different, because the server is trying to conserve some resource (usually in such cases it is bandwidth). Like I said, this can be done with etags, but a more explicit form of control is nicer. Which brings us to this: _> this allows better control should the client ignore the etags by making the whole client use a custom communication channel? I'd expect ensuring the client correctly speaks HTTP would be easier than implementing a custom client from scratch._ A client speaking proper http would be perfect for this. So point your http client to: domain.com/getlatest if there is a token available, respond with a: 307 domain.com/reallatest?token=foo If no token is available and no if-modified headers are sent, reply with: 503 + Retry-After N if there is not a token available, and the requestor supplied approrpiate if modified headers respond with a: 304 + cache control for some scheduled time in the future (which the client can ignore or not) Of course that last condition is strictly optional and not really required, since then it would be abusing cache control, rather than the using 503 as intended. (also note, a request to domain.com/reallatest with an invalid token or no token could result in a 302 to /getlatest or a 403, or some other form of denial, depending on the specifics of the application). edit: Strictly speaking, the multiple url scheme above isn't even needed, just a smart responder associated with the 503 is needed, however the url redirect method above was there because there may be a larger application context around system, in which getlatest does more than just serve the file, or in which multiple urls would redirect to reallatest, both easily imaginable situations. ~~~ masklinn > If no token is available and no if-modified headers are sent, reply with: > 503 + Retry-After N That's cool. There's still no reason for the second url and the 307, and you're still getting hit with requests so you're not avoiding the request load, only the download. You're smoothing out bandwidth, but not CPU & sockets. ~~~ sophacles This is sort of true. I don't know of a way to simply limit the number of incoming sockets without getting a lot of ISP level involvement or just outright rejecting connections. It does limit the number of long-lived sockets for file transfer. On static file serves, I am assuming the cpu has plenty of spare capacity for doing the algorithm, so I am not worried about that. Finally I am assuming the limiting factor is bandwidth here, so bandwidth smoothing the main goal. ------ moe I assume changes are usually small, you may want to try serving diffs? I.e. have the clients poll for the md5 of their _current_ list-version. On the server store the diff that will upgrade them to the current version under that filename. If a client requests an unknown md5 (e.g. because he has no list or his list is corrupted) default him to a patch that contains the full file. This requires a little logic on both ends (diff/patch), but would probably slash your bandwidth requirements to a fraction. A little napkin math: 25 lists * 150kb * 1mio fetches = ~3.75T vs 25 lists * 1kb (patch) * 1mio fetches = 25G (0.025T) ~~~ pjscott This is probably the Right Way, but it would be more work than minor tweaks to the delay logic. ------ K2h call me oldschool, but having a huge peak demand is the perfect application for distributed source, like torrent. I know it is much more complicated to introduce P2P and way more risky if it gets poisoned, but it seems to me this underlying problem of huge peak demand was solved 10 years ago. ~~~ nitrox but there is a problem with bittorrent. Most Schools and works places block bittorrent. We would need to fallback to http or any other method that works in restricted places. ~~~ skeletonjelly I wonder if there's a market for Bittorrent over HTTP? Node.js, websockets...surely it's possible? ~~~ icebraining All of those are strictly client-to-server, not P2P. You could in theory proxy bittorrent over it, but you wouldn't gain anything over just serving the file from the server. You can probably write a true P2P client as a Firefox extension, since its API gives you very low level access (raw sockets, for example), but certainly not for e.g. Chrome. ~~~ AntiRush WebRTC[1] seems to be the perfect platform for these sorts of things. It's in Chrome dev channel / Firefox Alpha right now. [1] <http://www.webrtc.org/> ------ fleitz I love random numbers for distribution. I had a similar problem with a set of distributed clients that needed to download email, but only one client downloading at a time. The email servers also had an issue where a large number of emails in the inbox would cause the server to slow down exponentially. (eg. it didn't matter how many MB of email were in the inbox but it did matter if there were more than about 1000 emails) The downloaders would download the list of inboxes to be fetched, randomize them and then lock the inbox when they started downloading, then the downloader would randomly pick a size cutoff for the max email size it would download, 10K, 1 MB, unlimited with a n inversely proportional maximum email count so that about 100MB could be downloaded at anytime. We even had an issue with one server behind an old cisco router that barf'd on window scaling, so a few machines in the pool had window scaling disabled and that account would naturally migrate to those servers with window scaling disabled. It worked wonders for distributing the load and keeping the Inbox counts to a minimum. ------ fromhet I know it's overkill for a browser extension, but wouldnt this be easily solved by having built-in bittorrent for updates? The publisher would always be seeding the latest version, and the clients would connect maybe every other day. It would lower the preassure on the publishers servers and make sure everyone could always have the latest version. With theese fancy magnet links, the publisher would only have to send the magnet and the actual file a couple of times, and then the peer to peer swarm would do the rest. ------ kogir I would just sign it, stick it on S3, and forget it. Did I miss why that wasn't considered? ~~~ nitrox It is too expensive. 1TB of bandwidth costs about $120. A project like adblock plus will be consuming about 3 - 4 TB a month which will add up to around $450 a month. Adblock list subscriptions are maintained and hosted by individual people who do at their spare time. They mostly pay for the servers out of their pockets. As one of the co-author of popular adblock list, I wouldn't want to break my bank to pay for S3 hosting. Our current solutions works out and when we reach our bandwidth limit, we could just simply buy addition TB of bandwidth at a much cheaper price than S3. Btw, i just made a rough calculation using AWS simple monthly calculator. So correct me if I am wrong about S3 pricing. ~~~ tedunangst Terabytes per month? That's insane. That's a million users (I can believe) downloading a megabyte (I can't quite believe). It appears my patterns.ini file is 600K, or about 150K compressed, so if I download it 30/5 = 6 times a month, that's... a megabyte. Wow. ~~~ tripzilch Wow, that suggestion elsewhere in the thread, to serve diffs instead seems rather important now :) ------ antihero Why not assign people a day and time, and then if they regularly miss that time, assign them a different one? ------ tantalor > with the effect that people always download on the same weekday What's so bad about that? ~~~ rmc Server load goes really high on that day, and if you get more popular, you'll need more servers and hence more money. ~~~ rb2k_ Isn't that something that nginx/varnish should easily be able to handle? It is just a static file download after all... ~~~ ComputerGuru CPU and bandwidth are entirely different issues. Sure, nginx can handle the processing. But do you have the piping to match? A run of the mill dedicated server has a 100mbit uplink. Do the math. (Hint: it's easy to saturate in no time). ~~~ prostoalex Has anybody tried <https://developers.google.com/speed/pagespeed/service> for this? ~~~ oconnor0 This is just downloading a single static text file so there's nothing to optimize.
{ "pile_set_name": "HackerNews" }
// -*- C++ -*- // Copyright (C) 2005, 2006, 2009 Free Software Foundation, Inc. // // This file is part of the GNU ISO C++ Library. This library is free // software; you can redistribute it and/or modify it under the terms // of the GNU General Public License as published by the Free Software // Foundation; either version 3, or (at your option) any later // version. // This library is distributed in the hope that it will be useful, but // WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU // General Public License for more details. // Under Section 7 of GPL version 3, you are granted additional // permissions described in the GCC Runtime Library Exception, version // 3.1, as published by the Free Software Foundation. // You should have received a copy of the GNU General Public License and // a copy of the GCC Runtime Library Exception along with this program; // see the files COPYING3 and COPYING.RUNTIME respectively. If not, see // <http://www.gnu.org/licenses/>. // Copyright (C) 2004 Ami Tavory and Vladimir Dreizin, IBM-HRL. // Permission to use, copy, modify, sell, and distribute this software // is hereby granted without fee, provided that the above copyright // notice appears in all copies, and that both that copyright notice // and this permission notice appear in supporting documentation. None // of the above authors, nor IBM Haifa Research Laboratories, make any // representation about the suitability of this software for any // purpose. It is provided "as is" without express or implied // warranty. /** * @file info_fn_imps.hpp * Contains implementations of cc_ht_map_'s entire container info related * functions. */ PB_DS_CLASS_T_DEC inline typename PB_DS_CLASS_C_DEC::size_type PB_DS_CLASS_C_DEC:: size() const { return m_num_used_e; } PB_DS_CLASS_T_DEC inline typename PB_DS_CLASS_C_DEC::size_type PB_DS_CLASS_C_DEC:: max_size() const { return m_entry_allocator.max_size(); } PB_DS_CLASS_T_DEC inline bool PB_DS_CLASS_C_DEC:: empty() const { return (size() == 0); } PB_DS_CLASS_T_DEC template<typename Other_HT_Map_Type> bool PB_DS_CLASS_C_DEC:: operator==(const Other_HT_Map_Type& other) const { return cmp_with_other(other); } PB_DS_CLASS_T_DEC template<typename Other_Map_Type> bool PB_DS_CLASS_C_DEC:: cmp_with_other(const Other_Map_Type& other) const { if (size() != other.size()) return false; for (typename Other_Map_Type::const_iterator it = other.begin(); it != other.end(); ++it) { const_key_reference r_key =(const_key_reference)PB_DS_V2F(*it); const_mapped_pointer p_mapped_value = const_cast<PB_DS_CLASS_C_DEC& >(*this). find_key_pointer(r_key, traits_base::m_store_extra_indicator); if (p_mapped_value == NULL) return false; #ifdef PB_DS_DATA_TRUE_INDICATOR if (p_mapped_value->second != it->second) return false; #endif } return true; } PB_DS_CLASS_T_DEC template<typename Other_HT_Map_Type> bool PB_DS_CLASS_C_DEC:: operator!=(const Other_HT_Map_Type& other) const { return !operator==(other); }
{ "pile_set_name": "Github" }
Q: natural language query processing I have a NLP (natural language processing application) running that gives me a tree of the parsed sentence, the questions is then how should I proceed with that. What is the time \-SBAR - Suborginate clause |-WHNP - Wh-noun phrase | \-WP - Wh-pronoun | \-What \-S - Simple declarative clause \-VP - Verb phrase |-VBZ - Verb, 3rd person singular present | \-is \-NP - Noun phrase |-DT - Determiner | \-the \-NN - Noun, singular or mass \-time the application has a build in javascript interpreter, and was trying to make the phrase in to a simple function such as function getReply() { return Resource.Time(); } in basic terms, what = request = create function, is would be the returned object, and the time would reference the time, now it would be easy just to make a simple parser for that but then we also have what is the time now, or do you know what time it is. I need it to be able to be further developed based on the english language as the project will grow. the source is C# .Net 4.5 thanks in advance. A: As far as I can see, using dependency parse trees will be more helpful. Often, the number of ways a question is asked is limited (I mean statistically significant variations are limited ... there will probably be corner cases that people ordinarily do not use), and are expressed through words like who, what, when, where, why and how. Dependency parsing will enable you to extract the nominal subject and the direct as well as indirect objects in a query. Typically, these will express the basic intent of the query. Consider the example of tow equivalent queries: What is the time? Do you know what the time is? Their dependency parse structures are as follows: root(ROOT-0, What-1) cop(What-1, is-2) det(time-4, the-3) nsubj(What-1, time-4) and aux(know-3, Do-1) nsubj(know-3, you-2) root(ROOT-0, know-3) dobj(is-7, what-4) det(time-6, the-5) nsubj(is-7, time-6) ccomp(know-3, is-7) Both are what-queries, and both contain "time" as a nominal subject. The latter also contains "you" as a nominal subject, but I think expressions like "do you know", "can you please tell me", etc. can be removed based on heuristics. You will find the Stanford Parser helpful for this approach. They also have this online demo, if you want to see some more examples at work.
{ "pile_set_name": "StackExchange" }
The present invention relates generally to reclining chairs and, more particularly, to an improved "wall proximity" reclining chair. Traditionally, reclining chairs are equipped with an actuation mechanism which is operatively interconnected between a prefabricated chair frame and a stationary base assembly. The actuation mechanism is typically a combination of various mechanical linkages operable for providing various comfort features such as independent reclining movement of a seat assembly as well as actuation of an extensible leg rest assembly and associated tilting of the chair frame. In "wall proximity" reclining chairs, the actuation mechanism must also be operable to maintain a generally constant clearance between the reclinable seat assembly and an adjacent stationary structure (i.e., wall surface, table, etc.) during the entire range of reclining movement. Generally, the actuation mechanism includes a track arrangement for causing longitudinal movement of the entire chair frame relative to the stationary base assembly during "wall proximity" reclining movement to accommodate for rearward angular movement of the seat back relative to the chair frame. Due to the relative complexity of conventional actuation mechanisms, it is common practice in the furniture industry to assemble the various mechanical linkages into a "stand-alone" mechanism frame assembly. A prefabricated U-shaped chair frame is frequently bolted around the mechanism frame with the open portion of the "U" corresponding to the front of the chair. Accordingly, such reclining chairs having a mechanism frame assembly located within a prefabricated chair frame are commonly referred to as having a "frame within a frame" construction. As such, most furniture manufacturers do not upholster the exterior surfaces of the prefabricated chair frame until after the mechanism frame assembly has been installed. Unfortunately, the upholstering operation is very inefficient and expensive in that the frequently heavy and cumbersome prefabricated chair frame must be manually manipulated in an extremely labor-intensive manner. Another disadvantage associated with reclining chairs equipped with conventional actuation mechanisms is that a relatively large amount of frictional drag is typically generated between the upholstered components which must be overcome for smooth movement of the seat assembly between the "upright" and "reclined" positions. As such, lighter weight seat occupants must normally exert a deliberate leveraged thrust or force, in addition to pulling the actuator lever, for completely extending a leg rest assembly and/or moving the seat assembly to its "reclined" position. Moreover, it is often difficult for the seat occupant to return the seat assembly to the "upright" position from the fully "reclined" position due to the relatively large included angle between the seat member and the reclined seat back. Therefore, the seat occupant must exert a relatively large and deliberate leveraged force to return the reclined seat assembly to its full "upright" position. Furthermore, in many conventional recliners, the leg rest assembly cannot be retracted to its "stowed" position from an extended or elevated position until after the seat occupant has completely returned the seat assembly to its fully "upright" position. Likewise, some reclining chairs do not permit independent actuation of the leg rest assembly during the entire range of reclining motion. While many conventional reclining chairs operate satisfactorily, furniture manufacturers are continually striving to develop improved frames and actuation mechanisms for reducing system complexity and increasing structural soundness and smoothness of operation as well as occupant comfort. Such advanced development is particularly important for "wall proximity" reclining chairs since their actuation mechanisms are inherently more complex due to the requirement of accommodating rearward reclining movement of the seat back relative to a stationary structure. Furthermore, there is a continuing desire to develop improved fabrication and assembly techniques which will result in reduced costs while promoting increased efficiency and improved product quality.
{ "pile_set_name": "USPTO Backgrounds" }
rob of picking 1 x and 2 g? 7/380 Calculate prob of picking 1 l and 3 u when four letters picked without replacement from uuuuuluu. 1/2 Two letters picked without replacement from {q: 7}. What is prob of picking 2 q? 1 Three letters picked without replacement from wywyiwt. What is prob of picking 2 w and 1 y? 6/35 Calculate prob of picking 1 a, 1 r, and 1 i when three letters picked without replacement from {y: 1, a: 1, k: 4, i: 2, r: 3}. 2/55 What is prob of picking 1 r and 1 t when two letters picked without replacement from bbrbtbrrbbrrbrrrrrrt? 11/95 Two letters picked without replacement from {n: 2, o: 1, i: 7, l: 1, t: 5}. What is prob of picking 1 o and 1 i? 7/120 Two letters picked without replacement from {d: 2, j: 2, q: 1}. Give prob of picking 2 j. 1/10 Calculate prob of picking 3 w and 1 q when four letters picked without replacement from qjwjwwjrqqwjg. 12/715 Two letters picked without replacement from cbbjccccbbbcc. What is prob of picking 1 j and 1 b? 5/78 What is prob of picking 3 f when three letters picked without replacement from effffffffffef? 15/26 Four letters picked without replacement from feefffeefeeeeelfeel. Give prob of picking 1 l and 3 e. 55/646 What is prob of picking 1 s and 1 d when two letters picked without replacement from {s: 2, d: 2, p: 4}? 1/7 Four letters picked without replacement from {x: 3, d: 5}. What is prob of picking 4 d? 1/14 Two letters picked without replacement from {x: 1, i: 1, f: 1, t: 1, p: 1}. What is prob of picking 1 t and 1 x? 1/10 Three letters picked without replacement from nxhyhtwwttwhx. What is prob of picking 1 h, 1 w, and 1 y? 9/286 Three letters picked without replacement from {o: 7, a: 4}. Give prob of picking 1 o and 2 a. 14/55 Three letters picked without replacement from {o: 7, a: 1, l: 3, d: 2, n: 2, g: 3}. Give prob of picking 2 o and 1 n. 7/136 What is prob of picking 1 c and 2 w when three letters picked without replacement from {c: 1, w: 10, j: 9}? 3/76 Calculate prob of picking 2 p when two letters picked without replacement from {f: 1, p: 2, q: 7}. 1/45 Calculate prob of picking 2 r when two letters picked without replacement from {q: 4, g: 3, r: 13}. 39/95 Calculate prob of picking 2 f and 1 t when three letters picked without replacement from ffffhthhft. 1/6 What is prob of picking 2 m, 1 l, and 1 i when four letters picked without replacement from mmmllmimi? 20/63 Four letters picked without replacement from ttttjjtjtsjsj. Give prob of picking 4 j. 1/143 Two letters picked without replacement from assaqqqaqsszaaoq. What is prob of picking 1 a and 1 z? 1/24 What is prob of picking 1 g and 1 o when two letters picked without replacement from qqogooq? 1/7 Calculate prob of picking 1 t and 1 x when two letters picked without replacement from xnstx. 1/5 Calculate prob of picking 2 t and 2 n when four letters picked without replacement from nnnnntttntttn. 63/143 Two letters picked without replacement from owhgcwhshwsh. Give prob of picking 1 w and 1 h. 2/11 What is prob of picking 2 c and 1 w when three letters picked without replacement from cwcrwwwwrc? 1/8 Two letters picked without replacement from vvvvv. Give prob of picking 2 v. 1 What is prob of picking 2 c and 2 m when four letters picked without replacement from {c: 2, m: 17}? 2/57 What is prob of picking 1 h and 1 w when two letters picked without replacement from {w: 12, r: 1, h: 6}? 8/19 What is prob of picking 1 k and 2 h when three letters picked without replacement from hhwhwkhkhhwwffhfh? 7/85 Four letters picked without replacement from laylyyllylk. What is prob of picking 1 k, 1 l, and 2 a? 0 What is prob of picking 1 o and 1 q when two letters picked without replacement from ococcaaaojcaeoojqooc? 7/190 Calculate prob of picking 1 h, 1 u, and 1 p when three letters picked without replacement from hpppppuhppupppppppup. 3/38 What is prob of picking 1 q and 1 p when two letters picked without replacement from sqxop? 1/10 Two letters picked without replacement from nnnrrrnnnrrrrnrrrrr. Give prob of picking 1 r and 1 n. 28/57 Calculate prob of picking 2 x when two letters picked without replacement from uuuxdxxx. 3/14 Two letters picked without replacement from {u: 7, y: 9}. What is prob of picking 2 u? 7/40 Three letters picked without replacement from {h: 1, e: 5, k: 6, l: 2, m: 1, j: 1}. Give prob of picking 1 e, 1 j, and 1 m. 1/112 Calculate prob of picking 2 a, 1 l, and 1 x when four letters picked without replacement from {a: 8, l: 5, x: 4}. 4/17 Two letters picked without replacement from aaaaaaaaaama. What is prob of picking 2 a? 5/6 Calculate prob of picking 1 g, 1 b, 1 d, and 1 z when four letters picked without replacement from {d: 1, z: 1, b: 1, w: 2, g: 1, p: 1}. 1/35 Four letters picked without replacement from ffffjfjfjffjff. What is prob of picking 3 f and 1 j? 480/1001 Calculate prob of picking 1 p and 1 d when two letters picked without replacement from ddckpdpdd. 5/18 Four letters picked without replacement from xxrodxssoxsxouuxood. Give prob of picking 1 r and 3 o. 5/1938 Four letters picked without replacement from bmbmbbmmfmfbbb. Give prob of picking 2 f and 2 m. 10/1001 Four letters picked without replacement from {x: 3, q: 6, i: 4, a: 1}. Give prob of picking 1 q, 1 i, and 2 x. 72/1001 Two letters picked without replacement from dkuuxkuuxuuduuuuuu. Give prob of picking 1 d and 1 x. 4/153 Three letters picked without replacement from komkkomomeomkio. What is prob of picking 1 o, 1 i, and 1 k? 4/91 What is prob of picking 2 c and 2 o when four letters picked without replacement from {c: 6, l: 9, o: 4}? 15/646 Calculate prob of picking 1 b, 1 i, and 1 t when three letters picked without replacement from {b: 3, i: 1, t: 4}. 3/14 Calculate prob of picking 1 h and 1 y when two letters picked without replacement from tttththhkggtykhh. 1/24 Calculate prob of picking 1 k and 1 m when two letters picked without replacement from kfkkgkofomokmkm. 6/35 Two letters picked without replacement from {u: 5, s: 2, x: 5}. What is prob of picking 2 s? 1/66 Two letters picked without replacement from {p: 4, z: 1, e: 8}. Give prob of picking 2 p. 1/13 What is prob of picking 1 z and 1 q when two letters picked without replacement from zzqzqzzzzzzzzzzq? 13/40 Calculate prob of picking 4 e when four letters picked without replacement from {a: 5, e: 13}. 143/612 What is prob of picking 1 d and 1 e when two letters picked without replacement from neendeeed? 5/18 Three letters picked without replacement from {r: 2, g: 2, o: 10}. Give prob of picking 3 o. 30/91 Two letters picked without replacement from pllklllkz. Give prob of picking 1 z and 1 k. 1/18 What is prob of picking 1 c and 1 d when two letters picked without replacement from mdmdccdmmmcmmdmm? 1/10 Four letters picked without replacement from {v: 3, b: 4, q: 1, e: 5}. Give prob of picking 1 b, 1 v, and 2 e. 24/143 What is prob of picking 2 d when two letters picked without replacement from eeddefdsef? 1/15 Calculate prob of picking 2 l and 1 j when three letters picked without replacement from jjjljjjjjlljjl. 15/91 Two letters picked without replacement from dcgnnncgug. What is prob of picking 1 n and 1 u? 1/15 Four letters picked without replacement from {i: 10, u: 9}. What is prob of picking 3 u and 1 i? 70/323 Two letters picked without replacement from {j: 2, o: 1, e: 1, a: 1, g: 2}. Give prob of picking 1 o and 1 a. 1/21 Two letters picked without replacement from ttttpttddtod. Give prob of picking 1 p and 1 o. 1/66 Two letters picked without replacement from oxnleolmeolooelo. What is prob of picking 1 o and 1 l? 1/5 What is prob of picking 3 k when three letters picked without replacement from {r: 1, o: 6, d: 5, j: 2, k: 3, u: 3}? 1/1140 Three letters picked without replacement from {o: 4, a: 6, v: 3}. Give prob of picking 1 o, 1 v, and 1 a. 36/143 Four letters picked without replacement from yrrrryrniyrryyrry. What is prob of picking 1 i, 1 n, and 2 r? 9/595 Calculate prob of picking 1 q and 2 t when three letters picked without replacement from {t: 5, u: 5, q: 3}. 15/143 Two letters picked without replacement from blpblqqllpbpxqpbqm. What is prob of picking 1 l and 1 p? 16/153 What is prob of picking 2 i when two letters picked without replac
{ "pile_set_name": "DM Mathematics" }
Antibody responses of mice to intragastric and parenterally administered aeroallergens. Intragastric administration of aeroallergens (pollen extract)-primed mice to produce transient serum IgE antibody responses following subsequent parenteral stimulation while the same initial dose of extract, given parenterally, did not have this effect. In previously immunized animals, intragastric administration of pollen extract was found to enhance systemic antibody production. These observations indicate that exposure of gut-associated lymphoid tissue to aeroallergens can have a profound effect on subsequent reaginic antibody production. This procedure provides a useful model for studying IgE responses to allergens without the complication of an initial injection with adjuvant. A combination of parenteral immunization with oral administration may therefore offer a convenient immunotherapeutic manoeuvre for patients with seasonal rhinitis/asthma.
{ "pile_set_name": "PubMed Abstracts" }
INTRODUCTION ============ Silicon (Si), in the form of dissolved silicic acid---often referred to as silicate and hereafter abbreviated as DSi---is an inorganic nutrient instrumental to ocean functioning. Its availability modulates processes as relevant as ocean primary productivity ([@R1]) and the exchange of CO~2~ with the atmosphere ([@R2]). Regional patterns of DSi availability largely result from the consumption of this nutrient by marine organisms (silicifiers) to build their skeletons of biogenic silica (BSi), with diatom utilization among the best known and quantified ([@R1], [@R3]). However, sponges, radiolarians, silicoflagellates, choanoflagellates, testate amoebae, and chrysophyceans, among others, also consume DSi in the ocean ([@R4]), but their activity remains poorly quantified and little understood from a physiological and molecular perspective. In the diatoms, the kinetics of DSi uptake have been investigated in a large variety of species, all of which were reported initially to follow a saturable Michaelis-Menten model. It is also that those saturable kinetics shift into nonsaturable uptake when DSi availability drastically increases and/or under particular physiological conditions ([@R5], [@R6]). Likewise, membrane silicon transporters (SITs) incorporating actively ambient DSi into the diatom cell have long been described and, although passive transporters have not been identified yet, a diffusion-based uptake has been described for at least some diatoms at high DSi availability \[reviewed in ([@R7])\]. In contrast, little is known about these physiological and molecular processes in other groups of marine silicifiers. Because diatoms are restricted to the photic zone of the ocean, a major gap in knowledge relative to DSi utilization in the dark ocean by nonphototrophic silicifiers persists. The discoveries that extensive aggregations of highly silicified sponges are common in the deep sea ([@R8]), that they can accumulate substantial amounts of BSi at the regional scale ([@R9], [@R10]), and that they trigger significant losses of BSi from the ocean ([@R11]) have raised considerable interest in deciphering how Si is processed in these singular, sponge-dominated, deep-sea systems. The lack of knowledge regarding these processes in sponges currently hinders the ability to understand the Si utilization in the dark ocean and makes it difficult to model adequately the role of the biological component in the marine biogeochemical cycle of silicon ([@R3], [@R11]). Information on the physiology of DSi consumption by sponges is sparse and derived exclusively from shallow-water species in the class Demospongiae. These studies indicate Michaelis-Menten kinetics but with optimal DSi consumption attained at environmental DSi concentrations of \>100 μM ([@R12]--[@R16]). Because DSi concentrations higher than 100 µM are virtually never reached in the shallow waters of the modern ocean ([@R17]), the skeletal growth of all shallow-water demosponges investigated to date is therefore chronically limited by Si availability ([@R15], [@R16], [@R18]--[@R20]). Whether this kinetic limitation also applies to sponges in the class Hexactinellida---deep-sea specialists characterized by impressive siliceous skeletons---remains unknown. The physiology of DSi consumption and the molecular pathways of DSi uptake remain largely uninvestigated in hexactinellids, hampered by impediments to conducting in situ and laboratory experimentation with such relatively large and delicate deep-sea animals. Nevertheless, the interest is enormous. Major differences in the kinetics of DSi utilization between the two major lineages of siliceous sponges (i.e., Demospongiae and Hexactinellida) cannot be discarded, as silicification in demosponges revolves around the activity of the nonsoluble silicatein enzyme ([@R21]), while the process in hexactinellids appears to be governed by a phylogenetically unrelated, soluble enzyme, glassin ([@R22]). In addition, because hexactinellids have essentially syncytial organization while demosponges have a conventional cellular organization, the membrane transporters involved in Si utilization may not be shared but may be lineage specific instead. Very little is known on molecular Si transport in demosponges ([@R23], [@R24]) and nothing in hexactinellids. This lack of knowledge obscures the understanding of the evolution of the biosilicification process in the animal kingdom and its relationships to that in other organisms. Here, we characterized experimentally the kinetics of DSi consumption in the hexactinellid sponge *Vazella pourtalesii* (Schmidt, 1870), a rosellid distributed from \~100 to 935 m in the northwest Atlantic that forms extensive monospecific aggregations on the deep continental shelf off Nova Scotia ([Fig. 1A](#F1){ref-type="fig"} and fig. S1), eastern Canada ([@R25]). We tested the laboratory-based kinetic model by comparing its predictions to both in situ determinations of DSi consumption rates using incubation chambers ([Fig. 1, B to D](#F1){ref-type="fig"}, and movies S1 and S2) and rates of BSi production derived from individuals of a known age grown in the wild on an artificial substrate ([Fig. 1, E and F](#F1){ref-type="fig"}). In combination with those physiological experiments, we conducted a quantitative large-scale assessment of gene expression as a function of DSi availability. The results of this study offer a mechanistic explanation for the kinetics of DSi utilization in hexactinellids and provide fresh insights into the molecular systems of Si transport and their evolution within sponges and across other silicifying organisms. ![Imagery depicting various aspects of field work.\ (**A**) General view of the aggregation of *V. pourtalesii* in the Sambro Bank Sponge Conservation Area in Emerald Basin. (**B**) Collected sponge transferred to the floor piece of the incubation chamber. (**C**) Sponge enclosed in incubation unit, which is being clutched by the ROV arm for deployment on the seabed. (**D**) Incubation unit deployed on the sponge ground. (**E**) Recovered Ocean Tracking Network (OTN) mooring with *V. pourtalesii* (arrows) recruitment. Scale bar, 25 cm. (**F**) Close-up of a sponge recruited on the mooring showing its protruding BSi skeleton. Scale bar, 1 cm. Pictures (A) to (D) are frames from movies extracted and processed by M. Maldonado (CEAB-CSIC). Pictures (E) and (F) were taken by M. Maldonado (CEAB-CSIC).](aba9322-F1){#F1} RESULTS ======= Modeling and testing the physiology of DSi consumption ------------------------------------------------------ Live sponges were collected using the remotely operated vehicle (ROV) Remotely Operated Platform for Ocean Sciences (ROPOS) and taken to the laboratory for incubation in progressively increasing DSi concentrations (12, 30, 60, 100, 150, 200, and 250 μM DSi; see Materials and Methods). Initially, all 11 assayed individuals increased their DSi consumption rate in response to the progressive increase of DSi availability in the seawater ([Fig. 2, A and B](#F2){ref-type="fig"}, and tables S1 and S2). As also known for demosponges, the DSi consumption rate notably varied among individuals, with an average maximum consumption of 0.106 ± 0.050 μmol Si per milliliter of sponge tissue and per hour (hereafter given as μmol Si ml^−1^ hour^−1^) at an average DSi concentration of 150.9 ± 69.3 μM. Over that concentration threshold, the consumption rate of most individuals did not increase with increasing DSi availability, revealing that the Si transport system reaches the maximum speed (i.e., optimal utilization) at about 150 μM DSi and saturates at higher concentrations. ![Summary of DSi consumption as a function of experimental DSi availability.\ (**A**) DSi consumptions of 11 individuals of *V. pourtalesii* as a function of experimental silicic acid (DSi) concentration in the laboratory. The averaged response fits an ERM model (blue lines) better than a Michaelis-Menten (MM) kinetics (red lines). (**B**) Statistics of the average consumption (±SD) best fitting to an ERM model. Note that DSi consumption calculated both from in situ incubations and from BSi produced under field conditions fall within the 95% confidence band of the model.](aba9322-F2){#F2} Unlike in all demosponges studied to date, the model best fitting the average DSi consumption rate of *V. pourtalesii* in response to DSi availability did not follow Michaelis-Menten kinetics (*r*^2^ = 0.841, *P* = 0.004; [Fig. 2A](#F2){ref-type="fig"}). An exponential rise to a maximum (ERM) model showed the best fit (*r*^2^ = 0.898, *P* = 0.002; [Fig. 2, A and B](#F2){ref-type="fig"}) to the empirical data on DSi consumption as a function of DSi availability, "consumption rate = *a* (1 − *b*^\[DSi\]^)". Although the difference in statistical fit between the "Michaelis-Menten" and "ERM" models was apparently small, the ERM model was built on parameters with higher statistical significance (*a* = 0.093 ± 0.008, *P* \< 0.001; *b* = 0.978 ± 0.006, *P* \< 0.001) than those of the Michaelis-Menten model \[*V*~max~ = 0.114 ± 0.019 μmol Si ml^−1^ hour^−1^, *P* = 0.002; Michaelis constant (*K*~m~) = 44.876 ± 23.934 μM Si, *P* = 0.120\]. The "*a*" parameter of the ERM model is the exact conceptual equivalent of the *V*~max~ in the Michaelis-Menten model, indicating a maximum velocity of DSi utilization of 0.093 ± 0.008 μmol Si ml^−1^ hour^−1^. Likewise, the exact ERM equivalent of the *K*~m~ parameter (i.e., the DSi concentration at which half-saturation or half *V*~max~ is achieved) can also be calculated as "\[DSi\] = log 0.5/log *b*," after having replaced in the ERM equation "consumption rate = 0.5*a*". It yields a value of 31.16 μM, revealing comparatively low affinity for the DSi in this sponge species (see [Fig. 3](#F3){ref-type="fig"}). ![Comparative summary of the DSi consumption kinetics in sponges.\ (**A**) The kinetics of *V. pourtalesii* is compared against all other sponge species investigated to date ([@R13], [@R15], [@R16], [@R18]), which were demosponges with Michaelis-Menten kinetics. For relevant physiological comparison, DSi consumption rates were normalized to ash-free dry weight (AFDW), which represents essentially the organic component of the sponge that could be involved in silicification. The DSi consumption kinetics of *V. pourtalesii*, which does not follow a Michaelis-Menten model, is among the less efficient, except for that characterizing a group of slow-growing species in the genus *Axinella.* (**B**) A zoom on the graph within the range of natural DSi concentrations illustrates how *V. pourtalesii* is also less efficient than most demosponges at low DSi availability.](aba9322-F3){#F3} We tested the predictions of the developed kinetic model against empirical determinations of DSi consumption and BSi production rates in field conditions. Using five custom-manufactured methyl methacrylate chambers incorporating two ROV-operated seawater collectors, we incubated four sponge individuals and a control chamber under natural settings ([Fig. 1, B to D](#F1){ref-type="fig"}, and movies S1 and S2). Incubations were conducted in the densest sponge aggregations of both the Sambro Bank Sponge Conservation Area and LaHave Basin (fig. S1) at depths of \~160 to 185 m, respectively, for periods varying from 19 to 28 hours, and at an average DSi concentration of 15.56 ± 0.68 μM. Individuals of different sizes were assayed (64, 126, 323, and 492 ml in volume; table S3), so that a relatively wide range of the size spectrum in the natural population was considered (all but very small or very large sponges). In situ consumption rates ranged from 0.007 to 0.034 μmol Si ml^−1^ hour^−1^, averaging 0.024 ± 0.012 μmol Si ml^−1^ hour^−1^. This average consumption was markedly similar to the one predicted (0.027 ± 0.006 μmol Si ml^−1^ hour^−1^) by the laboratory kinetics at a DSi concentration of 15.56 μM, falling within the 95% confidence range of the model ([Fig. 2B](#F2){ref-type="fig"}). We also estimated the rate at which BSi---that is, the siliceous skeleton---was produced by the sponges under natural conditions to compare BSi production rates to DSi consumption rates obtained both from the in situ incubations and the predictions of the laboratory kinetic model. The recovery of two moorings that were immersed for 15 and 58 months brought up sponges that had settled on them, making the approach possible ([Fig. 1, G and H](#F1){ref-type="fig"}; Materials and Methods). The two largest sponges on the mooring deployed for 15 months were about 14 months old, 1.4 and 2.9 cm in height, 1 and 3 ml in body volume, and 0.03 and 0.11 g in BSi content, respectively. The three largest sponges on the mooring deployed for 58 months were about 54 months old and ranged from 10 to 13 cm in height, 100 to 158 ml in volume, and 3.5 to 8.3 g in BSi content. These data indicated that skeletal BSi was produced at a rate threefold higher (0.056 ± 0.008 μmol Si ml^−1^ hour^−1^) during the first 14 months of life than in subsequent years (0.019 ± 0.004 μmol Si ml^−1^ hour^−1^), a growth pattern also known from other aquatic invertebrates ([@R26]). When the data from all five individuals were pooled together, an average BSi production rate of 0.033 ± 0.021 μM Si ml^−1^ hour^−1^ emerged. Again, this value fell within the 95% confidence range of the model prediction (0.029 ± 0.007 μM Si ml^−1^ hour^−1^) at a DSi availability of 16.93 μM ([Fig. 2B](#F2){ref-type="fig"}), which is the average DSi concentration in the bottom water on the central Scotian Shelf (table S4), as measured during a 20-year monitoring program ([@R27]). The general agreement among the rates of Si utilization predicted by the kinetic model, those measured through in situ incubations and those derived from the BSi production in field conditions, indicates consistency in the responses of the sponges in different situations, confirming that laboratory experiments are a suitable proxy for DSi utilization. It also suggests that, approximately, all consumed DSi (at least at natural ambient concentrations) becomes BSi production. The DSi consumption kinetics of *V. pourtalesii* reaching optimal utilization at about 150 μM DSi suggests that the natural population suffers from chronic DSi limitation, as mean DSi availability in the bottom water of the central Scotian Shelf averages only 16.93 ± 8.65 μM (table S4). Concentrations larger than 50 μM have not been measured at any depth in the North Atlantic ([@R17]). A comparison of the DSi utilization kinetics known for sponges to date ([Fig. 3](#F3){ref-type="fig"}) indicates that *V. pourtalesii* is comparatively less efficient in DSi consumption than all demosponges but *Axinella* spp. This suggests that to build and maintain the highly silicified skeletons characterizing hexactinellid sponges, they may need continuous exposure to DSi levels higher than those characterizing the modern photic ocean \[\<10 μM; ([@R1])\]. The question remains, however, as to why the DSi consumption system of these sponges persists largely maladapted, unable to evolve in response to a shrinking DSi availability that started in the oceans at least 60 million years (Ma) ago ([@R19], [@R28]) or even earlier ([@R29]). Molecular insight into DSi consumption -------------------------------------- To elucidate why hexactinellid sponges are particularly inefficient when using DSi at the relatively modest concentrations of the modern ocean, we attempted to activate and identify in *V. pourtalesii* (see Materials and Methods) the Si transporters potentially involved in the process of both DSi uptake and its internal transport. To this end, we quantified gene expression in six of the sponge individuals that had been exposed to progressive DSi enrichment (from 12 to 250 μM DSi) during the kinetic experiment, as indicated in the "Ex situ incubations for kinetics of DSi consumption" section. Their gene expression was contrasted with that of six individuals not exposed to any DSi enrichment but to the natural (12 to 17 μM DSi) concentration (hereafter referred to as "control individuals"). The set of treated individuals (hereafter referred to as "DSi-enriched individuals") consisted of the sponges \#3, \#4, \#5, \#7, \#9, and \#10 used in the kinetic experiment (as indicated in [Fig. 2A](#F2){ref-type="fig"}). A de novo reference transcriptome was obtained after pooling the reads from the 12 sponge libraries, and its resulting summary metrics (table S5) showed it to be well assembled and with very high BUSCO completeness scores (95.1% of the eukaryotic cassette and 87.3% of the metazoan cassette). In the DSi-enriched individuals, 597 genes were differentially up-regulated relative to the control group, of which 269 had a BLAST hit against the RefSeq and 197 against Swiss-Prot databases (fig. S2 and data file S1). In the control, 980 genes were found up-regulated when compared with the DSi-enriched individuals (fig. S2 and data file S1). Among the genes up-regulated in the DSi-enriched individuals, only 131 (33%) had a gene ontology (GO) term annotation. Identified overexpressed genes in the DSi-enriched individuals belonged to a wide array of functional categories (fig. S3 and table S5). We identified abundant transmembrane transport and vesicle-mediated transport categories, as well as responses to stress, lipid metabolism, and mRNA processing, among others in the Biological Process category. In addition, certain molecular functions were up-regulated, such as lysosome-related genes (e.g., solute carriers and cathepsins among others), transporter activity, chitin binding, and oxidoreductase activity ([Fig. 4A](#F4){ref-type="fig"} and fig. S3). Indirect evidence indicates that silicification in sponges is a complex, energy-consuming biological process ([@R10], [@R12]). Therefore, up-regulation of multiple gene pathways not directly related to Si utilization was not unexpected. For our study, we focused only on those genes that had previously been demonstrated in the preexisting literature of other organisms as being involved either in Si transport or in Si polymerization. This circumvents the need to carry out further unrealistic heterologous expressions or knockout experiments with *V. pourtalesii*, a deep-sea animal for which any subsequent gene functionalization would require additional unaffordable economic and logistic transnational investments for additional experimental work with live individuals. ![Differential expression of Si-related genes in *V. pourtalesii*.\ (**A**) Heatmap of the genes putatively related to silicon (Si) utilization and ion transporters. Relative expression was obtained from normalized expression levels using trimmed mean of *M*-values (TMMs) of potential target genes in biomineralization. Genes DE are shown in red bold letters. (**B**) Normalized expression levels using TMMs of genes known to be involved in Si processing in sponges or other organisms. Asterisks indicate DE genes with statistical significance between the two groups of individuals (DSi-enriched group versus control group) following the criterion of at least twofold expression and a *P* value corrected by false discovery rate (FDR) of 0.001. SD, standard deviation.](aba9322-F4){#F4} We found two homologs of the gene *glassin*, which code for the only silicifying protein identified in hexactinellids to date ([@R22]). Unexpectedly, only *glassin 1* was slightly up-regulated (see Discussion) but not differentially expressed (DE), with its expression level in the DSi-enriched group being barely twofold that of the control group ([Fig. 4](#F4){ref-type="fig"}). *Silicatein* genes, which are members of the cathepsin family of cysteine proteases and code for the silicifying enzyme of demosponges, had occasionally been reported from hexactinellids ([@R30], [@R31]). However, no *silicatein* sequences were found in *V. pourtalesii*, a result consistent with the growing suspicion that initial reports of silicatein in hexactinellids were either contamination from demosponge samples or misidentified cathepsin-like proteins not involved in the silicification ([@R22], [@R32]). Two gene groups of transmembrane proteins related to Si transport (*aquaporins* and *ArsB*) were up-regulated in the DSi-enriched individuals. Aquaporins are ancient channel proteins that facilitate bidirectional passive---but relatively selective ([@R33])---flux of water and/or small noncharged solutes across membranes and that are present in all kingdoms of life ([@R34]). The overexpressed *aquaporins* consisted of three genes ([Figs. 4](#F4){ref-type="fig"} and [5A](#F5){ref-type="fig"}). One of the *V. pourtalesii* protein sequences---aquaporin 3--like---showed high similarity to aquaglyceroporin 3 of chordates, a second one---aquaporin 9--like---was more similar to the aquaglyceroporin 9 of chordates, while a third one---aquaporin 3/9--like---had equal sequence similarity to both aquaglyceroporin 3 and 9. Only this later aquaglyceroporin 3/9 showed a much higher---and statistically significant---overexpression in the DSi-enriched individuals ([Fig. 4](#F4){ref-type="fig"}). Aquaglyceroporins 3 and 9 are known to be major intrinsic proteins that function in chordates as passive channels facilitating a Si inflow from the intercellular medium into cells ([@R35]). ![Phylogenetic relationships of transmembrane silicon transporters.\ (**A**) Phylogenetic hypothesis of aquaporin protein family relationships and (**B**) low-silicon (Lsi2) and arsenite-antimonite (arsB) efflux transporters, which are also related to the protein family pink-eyed dilution (PED) transporters. In both cases, phylogenetic trees were obtained with ML, and the topology was congruent with that obtained from a Bayesian inference analysis. Therefore, posterior probabilities from the Bayesian inference were mapped on the nodes. Only bootstrap values more than 70 and posterior probabilities more than 0.90 are shown on the nodes. Accession numbers and contig names are in parentheses. Names in blue are new data from this study.](aba9322-F5){#F5} In a maximum likelihood (ML), noncomprehensive, phylogenetic analysis of the aquaporin protein family, which produced a tree topology with 100% congruence to that of an alternative Bayesian approach, four main clades were obtained ([Fig. 5A](#F5){ref-type="fig"} and fig. S4): (i) one containing the aquaglyceroporins 3, 7, and 9 as well as the plant nodulin26 transporters (NOD26) and Lsi1 sequences from plants, which are all involved in passive transport of silicic acid, glycerol, and possibly water to the cells; (ii) a clade containing plasma membrane intrinsic (PIP) and tonoplast intrinsic (TIP) aquaporins, involved in water, glycerol, and ammonia transport; (iii) a clade containing aquaporin 8 (present in some sponges but not in hexactinellids) and small basic intrinsic proteins (SIP), mediating water, ammonia, and urea passive flow; and (iv) a clade of several aquaporins and PIPs involved in water transport ([Fig. 5A](#F5){ref-type="fig"}). Only the first clade, recovering aquaglyceroporins 3, 7, and 9, showed robust nodal support. The three *V. pourtalesii* aquaglyceroporin sequences involved in DSi transport also clustered (with statistical significance) with other unnamed aquaporins that we have recovered from the transcriptome of the hexactinellid *Rosella fibulata* and several demosponges, including both silicifying (i.e., *Lubomirskia baikalensis*, *Cliona varians*, and *Petrosia ficiformis*) and nonsilicifying species (i.e., *Dendrilla antarctica* and *Ircinia fasciculata*; see Discussion), collectively forming a robust "sponge clade" ([Fig. 5A](#F5){ref-type="fig"} and fig. S1). While demosponges had a single member of this aquaporin clade (3, 7, and 9), hexactinellids showed expansion into three members ([Fig. 5A](#F5){ref-type="fig"}), which could be the result of gene duplication and initial subfunctionalization for improved Si transport (see Discussion). The other gene group related to Si utilization that was differentially overexpressed in the DSi-enriched individuals were *arsB*- and/or *Lsi2*-like genes ([Figs. 4](#F4){ref-type="fig"} and [5B](#F5){ref-type="fig"}), members of a superfamily of active ion transporters (Na^+^/H^+^ antiporters) that mediate a selective efflux of metalloids from the cytoplasm. The independent discovery of the *arsB* genes---initially described from bacteria as for transmembrane transporters of arsenic ([@R36]) and other metalloids ([@R37])---and the *Lsi2* (low silicon 2) genes ([@R38], [@R39])---first identified in plants, as coding for transmembrane transport of Si and other metalloids---favored two different gene denominations (i.e., *arsB* versus *Lsi2*). This historical nomenclature still persists despite these genes showing clear sequence orthology and strong similarity in function (see Discussion). Two *arsB/Lsi2* genes (herein referred to as *arsB 1* and *arsB 2*) were found differentially overexpressed in *V. pourtalesii* ([Fig. 4](#F4){ref-type="fig"}). *ArsB* genes also occur in the demosponge *Amphimedon queenslandica* and are known in several silicifying eukaryotes ([Fig. 5B](#F5){ref-type="fig"} and fig. S4), including diatoms, radiolarians, and choanoflagellates ([@R23]). These eukaryotic versions of *Lsi2* and *arsB* genes also show sequence similarity to the prokaryotic arsenic transporters ([Fig. 5B](#F5){ref-type="fig"}). In our phylogenetic hypothesis for the evolution of arsB/Lsi2 transporter proteins, in which the ML and the Bayesian approaches showed complete tree topology congruence ([Fig. 5B](#F5){ref-type="fig"}), four main subclades were recognized among the arsB/Lsi2 proteins: (i) one consisting of the arsB homologs of bacteria, with a single arsB domain; (ii) another large clade composed by the homologs of diatoms and plants, with the diatom sequence containing a CitMHS domain \[citrate-Mg^2+^:H^+^(CitM)-citrate-Ca^2+^:H (CitH) symporter\] and the plant homologs both Nhab and arsB domains; (iii) a small clade containing the Lsi2-like homologs of choanoflagellates, which only have an arsB domain; and (iv) a large clade containing the animal homologs of arsB/Lsi2, with a single CitMHS and one or several transmembrane domains. Of note, SITs, which are sodium-coupled transmembrane proteins operating as active silicic acid--specific transporters in a variety of unicellular silicifying eukaryotes ([@R23]), such as diatoms, choanoflagellates, and haptophytes, were absent in *V. pourtalesii*, in agreement with other studies on sponge genomes ([@R23]). Related silicon transporter--like genes (*SIT-L*s), occurring in some metazoans such as annelids, copepods, and tunicates ([@R23]), were also absent. Likewise, the singular active Si transporter of vertebrates, solute carrier transporter Slc34a2 ([@R40]), was also absent. The NBC (Na^+^/HCO~3~) transporter, which was tentatively suggested to be involved in cotransporting DSi in the demosponge *Suberites domuncula (*[@R24]*)*, was present in the transcriptome of *V. pourtalesii* (TRINITY_DN38500_c0_g1_i1) but not up-regulated by DSi enrichment, a response that does not support a direct involvement in DSi transport in this hexactinellid sponge. DISCUSSION ========== Hypothesis of action mechanism for DSi transport ------------------------------------------------ The results of this study suggest that molecular DSi transport in sponges appears to function through cooperation between a passive Si inflow (mediated essentially by aquaglyceroporins 3/9) and a coupled active Si efflux (mediated by the arsB transporters). Cooperation between active and passive pathways for DSi transport---but based on different transporters---has recently been found during silicification in plants ([@R41]) and mammals ([@R40]). Aquaporins are bidirectional passive channels and, therefore, are unable to act uphill against a solute gradient. They would only be able to mediate a monodirectional Si influx effectively into the sponge cells if the DSi concentration in the seawater is high enough to build initially a steep gradient and the passive Si inflow is subsequently coupled with the active Si efflux of the arsB/Lsi2-like transporter to transport the incoming Si out of the cytoplasm. This new destination should be the mesohyl for either intercellular silicification or subsequent transport into the silicifying cells (sclerocytes) to accomplish silicification within the silica deposition vesicle of the sclerocytes ([Fig. 6](#F6){ref-type="fig"}). Logistic constraints inherent to collecting and working with living deep-sea sponges prevent further experimental work on *V. pourtalesii* in the near future to empirically resolve the exact location of the DSi membrane transporters, which is tentatively proposed in [Fig. 6](#F6){ref-type="fig"}. Nevertheless, the information gained by this study will considerably ease future attempts to locate the DSi transporters in shallow-water demosponges, which are more accessible for experimentation. The participation of aquaporin channels explains why the consumption of DSi by sponges happens efficiently only at very high DSi concentrations: because high concentrations are needed to build initially a steep DSi concentration gradient between the extracellular and the intracellular environment. The coupling of the gradient-facilitated DSi inflow and the active Si efflux maintains the gradient steepness as the arsB transporters continuously expel DSi from the cell cytoplasm to somewhere else, either to the mesohyl or to the silica deposition vesicle ([Fig. 6](#F6){ref-type="fig"}), depending on whether the DSi is required for intracellular or intercellular silicification. The saturation of this active arsB transporter is the most likely reason why the DSi consumption saturates at high DSi concentrations, despite those concentrations being better for passive aquaporins working more efficiently. The absence of SITs (i.e., active adenosine triphosphate--consuming transporters) in favor of passive aquaporin channels to mediate the Si influx into sponge cells and the deposition vesicle is likely the reason for the low efficiency of sponges when processing DSi at low concentrations, compared to diatoms, which do have SITs. ![Hypothesis of pathways for utilization of ambient DSi by sponges.\ Schematic summary of the routes putatively mediated by passive aquaglyceroporins (Aqua) and active arsB transporters across membranes of cells and the silica deposition vesicle (SDV) for BSi production. For the sake of clarity, this diagram does not include putative intercellular steps of BSi deposition and reproduces the cellular organization of demosponges rather than the syncytial structure of hexactinellids.](aba9322-F6){#F6} Physiological consequences -------------------------- Our physiological results suggest that modern hexactinellids may be even less adapted than their demosponge counterparts to the relatively low DSi concentrations that characterize the modern diatom-dominated photic ocean ([Fig. 3](#F3){ref-type="fig"}). In contrast, at DSi concentrations of \<10 μM, diatoms are known to reach transport rates ([@R42]--[@R44]) that are two to three orders of magnitude those of any sponge investigated in this regard ([@R10], [@R15]), clearly favored by their active DSi transport system based on SITs. Therefore, competitive exclusion of the hexactinellids by the better performance of diatoms and demosponges in DSi uptake is likely the main reason why hexactinellids predominate in deep-sea environments, where diatoms cannot grow because of light limitation and where DSi concentrations remain slightly higher than in the photic zone ([@R45]). This scenario suggests that the impressive BSi skeletons that characterize most hexactinellid sponges can only be built at slow rates (i.e., over long time periods) and when encountering DSi concentrations higher than those characterizing the upper modern ocean, which are typically less than 10 μM ([@R17]). The hexactinellid *V. pourtalesii* is the only known sponge investigated to date with DSi consumption kinetics that depart from the typical Michaelis-Menten model. Further research on other hexactinellids will be needed to elucidate whether this difference consistently applies to the level of phylogenetic class. In our experimental approach to the kinetic model, we selected for the minimum number of DSi concentration steps that would identify a kinetic model with statistical significance. The reason for such tight design relates to the well-known difficulties of maintaining live deep-sea sponges under laboratory conditions for long periods. Therefore, we decided not to risk the success of the experiment unnecessarily by avoiding intermediated DSi concentrations that would have extended the duration of the assay without adding significant resolution to the essence of the outcome. Fortunately, casualties did not occur at any time during the experiment, with all sponges remaining healthy, as also reflected in the molecular message of the transcriptomes. Molecular responses ------------------- Earlier work has demonstrated that exposure of a Mediterranean demosponge species in the laboratory to DSi concentrations much higher (30 and 100 µM) than those occurring in its natural habitat (1 µM), yields production of some types of skeletal pieces that do not occur in wild populations ([@R19]). In addition to revealing that skeletal production in natural conditions was chronically limited by DSi, the finding also suggested that increasing DSi up-regulates genes involved in its utilization that are not expressed at low DSi. On this basis and in the absence of further experimental progress since, we designed our experiment and focused our analysis on those genes that were overexpressed as the result of increased DSi availability. The fact that the optimal DSi utilization rate in *V. pourtalesii* was herein demonstrated to be attained at DSi concentrations between 100 and 150 μM ([Fig. 2](#F2){ref-type="fig"}), which are not naturally available to the sponges (table S4), further encouraged the expectations of our approximation. The opposite response can be obtained in diatoms, the main Si competitors of sponges, which reach optimum DSi transport at relatively low concentrations and may down-regulate their active DSi transporters (SITs) when exposed to abnormally high DSi concentrations ([@R46]). As a result of our DSi enrichment and subsequent RNA sequencing analysis, many genes related to the GO categories of vesicle- and membrane-mediated transport and lysosome transport were overexpressed, including several *lysosome-related*, *solute carriers*, *sorting nexin-1*, *transport Sec61 subunit gamma*, *BPC intracellular cholesterol transporter 2*, *cystinosin*, *magnesium transporter NIPA2-like*, *sodium bile acid cotransporter 7 isoform 2*, *Ras-related Rab genes*, *vacuolar-sorting protein genes*, etc. However, there is no evidence to date that any of those genes are directly involved in Si utilization. Therefore, their study and functionalization fall beyond the scope of the present work. Alternatively, we obtained up-regulation of both active and passive membrane transporters that had been related to DSi transport in previous studies on organisms other than sponges. Regarding silicifying proteins, it was interesting that the *glassin* genes ([@R22]), which code for the protein that catalyzes silica polycondensation in hexactinellid sponges, were not particularly overexpressed. This finding is plausible for several reasons. Experiments on freshwater demosponges showed that enzymatic axial filaments were automatically produced by the sponges even when no DSi was available in the environment to undertake spicule silicification ([@R47]). Likewise, high experimental DSi concentrations increased the length and thickness of spicules in a marine demosponge, but the total number of spicules (which is determined by the produced number of silicatein axial filaments) did not increase substantially ([@R19]). These two previous studies on demosponges agree with our results and support the view that expression of silicifying enzymes (either silicatein or glassin), unlike that of the membrane transporters (*aquaglyceroporin 3/9* and *ArsB*), may not be controlled directly by environmental DSi concentrations but is subject to more complex levels of regulation. Regarding the passive DSi transporters, we found that *aquaglyceroporins 3* and *9* were only slightly up-regulated in DSi-enriched individuals, while *aquaglyceroporin 3/9* was significantly overexpressed ([Fig. 4B](#F4){ref-type="fig"}). Previous evidence already has indicated that *aquaglyceroporins 3* and *9* (and also *7*) are involved in Si transport in vertebrates ([@R35]). The fact that homologous genes were up-regulated in the DSi-enriched individuals of *V. pourtalesii* provides strong additional evidence for their involvement in transmembrane Si transport. Regarding the active DSi transporters, the overexpression of *arsB 1 and arsB 2* genes in the DSi-enriched individuals strongly supports their involvement in DSi transport in sponges. Such a function had already been demonstrated for other organisms, mostly land plants ([@R41], [@R48]), but it had also tentatively been suggested---based on sequences similarity---for sponges ([@R23]). In rice, the same protein (but named Lsi2) has the dual capability of transporting both arsenic and Si ([@R41], [@R48]), probably due to molecular mimicry between these two metalloids and also others, such as boron, germanium, arsenic, antimony, and tellurium ([@R33]). Likewise, *arsB* homologs in the diatom *Thalassiosira pseudonana* show coexpression with a silicification-related gene ([@R46]), suggesting that the arsB transporters, initially related to arsenic transport, can also expel actively Si across cell and vesicle membranes. ArsB/Lsi2 transporters are sister of the transporter family including pink-eyed dilution (PED) proteins ([Fig. 5B](#F5){ref-type="fig"}), which mediate transport and processing of tyrosinase and other melanosomal proteins in melanocytes and other cells in mammals but can also transport sucrose ([@R49]). Evolutionary implications for biosilicification ----------------------------------------------- Collectively, evaluation of our results in the context of the available information supports the view that while Si membrane transporters appear to be shared by the two major siliceous lineages of sponges (Hexactinellida and Demospongiae), each lineage has evolved independently its own silicifying enzymes (glassin versus silicatein). This scenario also suggests that the mechanisms for Si transport across cell membranes in sponges predate the acquisition of the mechanisms for polymerizing DSi into BSi, a pattern that appears to apply to other silicifying organisms as well. The available evidence that aquaglyceroporins 3 and 9 and arsB/Lsi2 proteins can be used for transporting, in addition to Si, other elements fundamental for survival (e.g., glycerol, several metalloids, sucrose, seawater, etc.) decreases notably the chances that those channels can be modified through evolution to improve Si transport without fatally affecting their functionality for the other elements. This is probably the reason why the DSi consumption system of siliceous sponges remained unchanged through the global DSi decrease that started at least some 100 to 65 Ma ago with the expansion of diatoms ([@R19], [@R28]) and still persists maladapted. Yet, we have found evidence of duplication and possible subfunctionalization of aquaglyceroporins in *V. pourtalesii*. *Aquaglyceroporins 3* and *9* were only moderately up-regulated in response to the DSi treatment, while *aquaglyceroporin 3/9* was significantly overexpressed ([Fig. 4](#F4){ref-type="fig"}). This last aquaglyceroporin could be the result of duplication and further subfunctionalization to more efficiently transport Si, while the two others remain more generalist passive transporters. It remains unknown at what level aquaglyceroporin 3/9 may still be involved in the flux of other essential metalloids. Initially, a disconcerting feature was the presence of *aquaporin 3*--, *aquaporin 9*--, and *aquaporin 3/9*--like genes in transcriptomes of *D. antarctica* and *I. fasciculata* ([Fig. 5A](#F5){ref-type="fig"} and fig. S4), which are nonsilicifying demosponges characterized by protein (i.e., spongin) rather than silica skeletons. The presence of those DSi passive transporters, in addition to the absence of silicatein genes in these species, has two potential explanations: (i) Passive aquaporin channels for Si, which are also used for other fundamental metalloids, were in place before the evolution of the silicifying enzymes, a hypothesis also supported by the independent acquisition of silicateins and glassin by demosponges and hexactinellids, respectively; (ii) silicatein and aquaporin channels for Si evolved concomitantly, but silicatein would have been lost secondarily and independently in various nonsilicifying members of the several demosponge lineages, while aquaporins were retained because those pore channels are also used for elements other than Si. This second option, because of invoking multiple independent loses, is less likely. Whether the silica skeleton was the primitive skeletal condition for sponges, which was subsequently lost homoplasically in several demosponge lineages, is a hypothesis that can be definitely resolved only by direct genomic evidence of the presence or absence of genes rather than only from the evidence of gene expression captured by transcriptomes. However, embryological evidence ([@R50]) supports the hypothesis that silica skeletons were replaced in at least some demosponge lineages through parallel evolution in favor of alternative skeletal materials (spongin, collagen, etc.). This skeletal evolution could have been forced by the drastic decrease in DSi availability triggered by the evolutionary expansion and proliferation of diatoms about 100 to 65 Ma ago, which would have operated as a negative selective pressure on siliceous skeletons ([@R10], [@R19]). Molecular clocks could attempt to date whether the emergence of spongin skeletons (in the Verongimorpha and Keratosa subclasses) was coincidental with the expansion of diatoms or any other past event ([@R51]) that could have caused a drastic decrease in the availability of DSi in the upper ocean. Nevertheless, such studies have not been conducted to date. The most unexpected aspect in the phylogenetic distribution of DSi transporters across lineages of silicifying organisms is perhaps the absence of SITs in siliceous sponges, because choanoflagellates, which do have SITs ([@R23]), share a common ancestor with sponges. Likewise, SIT-L genes, which appear to be ancestral and have originated SIT genes of other silicifiers by duplication, inversion, and fusion of subunits ([@R52]), occur not only in several groups of silicifiers but also in some groups that are not essentially silicifiers (but rather calcifiers), such as foraminifera, coccolitophorid haptophytes, and some nonsponge metazoans. Why sponges secondarily lost the ancestral SIT complement and are now forced to transport Si through a less efficient aquaglyceroporin system remains unclear. We suggest that passive channeling is energetically less costly and could facilitate silicification even during periods of starvation or limited food supply. In a paleo-ocean with very high DSi concentrations ([@R53]) and moderate food supply, the replacement of active transporters by passive channeling could even be advantageous. However, when the rising activity of biosilicifiers began decreasing the environmental DSi concentration in the global paleo-ocean during the Late Mesozoic or even earlier, the lack of specialized active uptake mechanisms in favor of passive channeling became detrimental to many lineages of siliceous sponges, causing extinctions, impelling bathymetric migrations to the aphotic ocean, and forcing the skeletal evolution of siliceous sponges toward other materials to reduce their dependence on Si availability ([@R19], [@R50]). It is also remarkable that the *aquaglyceroporin* genes used for passive Si transport in sponges have been conserved through evolution across lineages of animals that never were---or have never been found to be---silicifiers, arriving still functional to the genome of vertebrates, where they facilitate a silicification step that has become incorporated into the general calcifying process of bone formation ([@R35]). This now explains why a diet rich in Si is known to favor correct bone formation in vertebrates ([@R54], [@R55]) and why treatments based on Si also improve bone regeneration ([@R56]). Several lineages of demosponges contain relict species and/or genera (collectively alluded as "sclerosponges" or "coralline sponges") in which a basal skeleton of calcium carbonate coexists with isolated siliceous spicules ([@R57], [@R58]), recalling the association between calcifying and silicifying processes recently found not only in vertebrate bone but also in cyanobacteria ([@R59]), coccolitophorid haptophytes ([@R52]), and crustaceans ([@R59], [@R60]). Our discovery that the role in Si transport of aquaporins 3 and 9 is conserved from the Porifera to the Vertebrata gives rise to the possibility that biosilicification and biocalcification are not alternative biomineralization systems but instead mechanisms that have been functionally intertwined since the early stages of animal evolution, and perhaps even earlier. MATERIALS AND METHODS ===================== Species habitat, sampling, and data collection ---------------------------------------------- *V. pourtalesii* is found at depths from 100 to 935 m along the continental margin of North America from Florida (United States) to Nova Scotia (Canada), where it forms extensive, dense aggregations in the deep basins and channels that excise the continental shelf of Nova Scotia, Canada ([@R8], [@R25]). The aggregations are considered monospecific, consisting of an abundance of vase-shaped individuals ([Fig. 1A](#F1){ref-type="fig"}) measuring up to 40 cm in height. Some areas of the Scotian Shelf where *V. pourtalesii* occurs more densely aggregated---i.e., forming "*Vazella* grounds"---have recently been closed (Conservation Areas; fig. S1) for protection from bottom fishing ([@R25]). During an oceanographic mission from 2 to 7 September 2017 to collect data from the *Vazella* grounds for the European Union--funded SponGES project, the ROV "ROPOS" was deployed from the Canadian Coast Guard Ship *Martha L Black* in the *Vazella* grounds located in the Sambro Bank Sponge Conservation Area and north of nearby LaHave Basin. ROPOS is a 40-hp Science/Work Class ROV owned and operated by the nonprofit Canadian Scientific Submersible Facility (CSSF), based in North Saanich, B.C., Canada. During deployment, forward- and downward-facing video footage of the seabed was recorded to determine the fine-scale distribution and densities of *V. pourtalesii*, and using the manipulator arms of ROPOS, live sponge individuals were collected for DSi consumption laboratory experiments and for conducting in situ incubations using benthic chambers, as described below. Ex situ incubations for kinetics of DSi consumption --------------------------------------------------- For investigating the kinetics of DSi consumption ex situ, a total of 11 sponges were collected from the Sambro Bank Sponge Conservation Area and LaHave Basin (fig. S1). The depth range of collected specimens was \~160 m in the Sambro Bank closure to 185 m north of LaHave Basin. Each sponge was collected along with the small rock on which it was attached, so that the manipulator arm of the ROV never handled the sponge tissue but just the rock (movie S1). Once on board, sponges, which were at no time exposed to air during transportation and further experimental work, were maintained for 5 days in an insulated 750-liter polyethylene holding tank inside a refrigerated container. The seawater, which was also refrigerated to 6° ± 1°C, was recirculated continuously and exchanged every 8 hours. Surface water (\<5 m) was pumped using a portable pump into a tank on deck. From there, it was distributed to holding tanks inside the refrigerated container, chilled using a portable 1/3-hp chiller, and subsequently slowly pumped via a peristaltic pump (1.2 liter/min) to the tank containing the sponge specimens. The chilled water tank was refilled twice a day, resulting in four water exchanges/day in the sponge holding tank. The sponges, which were attached to a small rock in all cases, were maintained on the bottom of the holding tank by placing them within individual compartments of a polyviniyl chloride (PVC) grid. Upon return to the Bedford Institute of Oceanography (BIO) in Dartmouth, Nova Scotia, the sponges were transferred to a 500-liter aquarium for 24 hours. The saltwater intake at BIO is 200 m from shore and at a depth of 17 m (\~3 m off bottom). The seawater was passed through a sand filter and then a 20-μm (nominal) bag filter before being delivered to the lab. The filtered seawater entered a 1000-liter aerated tank (header tank) where it was heated/chilled to \~6° to 9°C. This water was then gravity-fed into the sponge holding tank (500 liters of insulated polyethylene). Flow rates were maintained at \~3 to 5 liter/min. A small magnetic drive pump was added to the bottom of the holding tank to provide circulation and horizontal flow across the sponges. The pump was modified with a 12.5 mm--by--300 mm vertical pipe with 6-mm holes added to provide horizontal flow. Upon initiation of the ex situ experimentation, sponges attached to their respective rocky substratum were transferred to a 360-liter tank (hereafter referred to as the "preconditioning tank") and left there for 24 hours for acclimation to a refrigerated (9° ± 0.5°C) seawater system with recirculation. To characterize the kinetic pattern, DSi consumption by the sponges under increasing DSi availability was measured. The experiment ended when saturation was reached by the sponges, that is, when an increase in DSi availability did not stimulate any further increase in the rate of DSi consumption. Seven levels of DSi concentration were progressively offered to the 11 assayed sponges: 12 (approximate field values), 30, 60, 100, 150, 200, and 250 μM Si. Under each concentration, sponges were incubated separately in a polypropylene 16-liter container (hereafter referred to as the "incubation aquaria") for 24 hours. Before each 24-hour incubation, sponges were maintained in the preconditioning tank for 24 hours. Thus, the approach consisted of an alternation of 24-hour "preconditioning" and "incubation" periods for 2 weeks (from 9 to 22 September 2017). The alternation of preconditioning and incubating steps had three main purposes. The first was to facilitate the survival of the sponges across the battery of incubations in the relatively small (16 liters) incubating aquaria, an objective successfully met, as there were no casualties over the course of the 2-week experiment. Second, to render the approach conservative, during each preconditioning step, the sponges were exposed for 24 hours to the same DSi concentration that was to be assayed in the following incubation. Therefore, during the preconditioning step, the sponges were able to take up as much DSi as needed to satisfy their chronic avidity for DSi, an approach that has been empirically demonstrated to lead to slightly lower DSi consumption rates during the following 24-hour incubation period ([@R15]). Third, the sum of the duration of the preconditioning period and the incubation period collectively provided a total time of exposure to each DSi concentration long enough (48 hours) to allow the sponges to unfold a complete physiological response in the silicification process. Theoretically, such a response is expected to involve the generation of new populations of silicifying cells to deal with the increasing availability of DSi, the activation of new sets of genes, and the completion of massive production of silicifying proteins ([@R10], [@R19]). Along with the 11 assayed sponges, we used three control aquaria, each containing seawater and a rock as the one used by the sponges for attachment, but with no sponge. These controls served to correct for potential processes of either DSi release from the rocks or DSi precipitation at the rock surface. It was also difficult to obtain exactly the intended DSi concentrations in the large preconditioning tank because the transfer of sponges to it upon conclusion of each incubation involved an unavoidable transference of seawater at a lower DSi concentration, causing minor dilution of the concentration in the preconditioning tank. This logistical constraint resulted in the following concentrations during the incubations: 12.5, 30.4, 59.0, 93.0, 141.6, 191.8, and 234.0 μM Si. The seawater used for the experiments was pumped in from the Bedford Basin, Nova Scotia (at 7°C) and filtered on a 1-μm mesh, a pore size small enough to prevent the passage of planktonic DSi users (e.g., diatoms, radiolarians, etc.) but allowing, in part, the passage of natural sponge food, that is, the smallest bacterioplankton. In addition, we fed the sponges during the entire experiment by adding 35 ml of a DSi-free, concentrated culture (approximately 10^6^ cells/ml) of the haptophyte *Isochrysis galbana* to the 360-liter tank at the beginning of each preconditioning step. We assumed that, after passing repetitively throughout the mechanical water pump, part of the microphytoplankton cells would be lysed, resulting in a mix of particulate and dissolved organic matter available to the sponges. The assayed DSi concentrations were prepared by adding the corresponding volume of a buffered 0.1 M sodium metasilicate solution \[Na~2~SiO~3~ (pH 10)\] to the 360 liters of filtered seawater contained in the preconditioning tank, followed by mixing of the water for 18 hours with a submersible pump to ensure complete molecular diffusion before transferring the sponges to the tank for their corresponding preconditioning period. To determine the rate of DSi utilization by the assayed sponges during the incubations at each DSi concentration step, a 50-ml water sample was collected at the beginning and end of each 24-hour incubation period. Seawater samples, collected using acid-cleaned plastic syringes, were immediately filtered through 0.22-μm pore, polycarbonate syringe filters (Millex-GS Millipore) and stored in the fridge no longer than 2 days until analysis. Samples from the same DSi concentration step were analyzed together into a single analysis using a Technicon AutoAnalyzer 3 (AA3, SEAL Analytical), a service provided by the CERC.OCEAN research group based at the Dalhousie University (Halifax, Nova Scotia, Canada). Analyses were run in triplicate following the standard colorimetric method, with a determination accuracy (as percent error) of \<5%. Samples with a DSi concentration higher than 60 μM were diluted before analysis with artificial seawater prepared with the same salinity as the water samples (35 practical salinity unit). The rate of DSi utilization by a sponge at a given DSi availability was inferred by calculating the difference in DSi concentration between the start and end of an incubation and after correcting by the average concentration change (often negligible) that occurred in the set of control aquaria. At the end of the experiment, we measured the volume (ml) of both the assayed individuals and their rock substratum by water displacement. Sponges were subsequently wet-weighed (g), dried at 60°C to a constant dry weight (g), and combusted at 540°C for 10 hours for ash-free dry weight (AFDW; g). Rates of DSi consumption were normalized by sponge volume (ml) and/or AFDW (g), volume of seawater in the incubating aquaria (liters) after discounting sponge and rock volume, and duration of the incubation (hours). We preferentially expressed data normalized to sponge volume because it facilitates their future applicability to field sponge populations using ROV images without the need of collecting individuals. However, for correct physiological between-species comparison, we used normalization by AFDW. The relationship between normalized DSi consumption rates (in μmol Si ml^−1^ hour^−1^or g^−1^) and DSi availability (μM) was analyzed by nonlinear regression to identify the best-fitting model for the empirical observations. In situ incubations for DSi consumption --------------------------------------- We built five benthic incubation chambers using methylmethacrilate and inox steel, with an incubation volume of either 17.3 or 13.3 liters ([Fig. 1, B to D](#F1){ref-type="fig"}). Chambers incorporated a floor piece of Delrin acetal resin ([Fig. 1B](#F1){ref-type="fig"}), which allowed for the incubation of sponges in isolation from the external environment, thus avoiding interference by nutrient fluxes from sediments that may be resuspended during deployment. The chambers incorporated two external sampling bottles (120 ml) made of steel and internally folded with polytetrafluoroethylene. Through a steel capillary (20 cm long and 0.6 mm wide) that pierced the wall of the chamber, each bottle was designed to collect a water sample (under negative pressure conditions) from inside the incubation chamber while it was opened for 5 min and then closed using the ROV manipulator arms (movies S1 and S2). During the 2017 oceanographic mission on the Canadian Coast Guard Ship *Martha L Black*, four sponges and a control treatment were incubated in situ. Sponges were collected along with the small rock to which they were attached using the manipulator arms of the ROV (movie S1). Each sponge was then placed on the floor piece and covered with the chamber so that the chamber rested into a groove on the floor piece designed to seal the unit from leakages. Once the sponge was inside the chamber and the chamber was properly sealed, one of the sampling bottles was opened to collect water for 5 min and then closed again to avoid water exchange with the surrounding medium (movie S2). As a control, we incubated a rock selected from the sponge grounds but without an attached sponge. After an incubation period of 19 to 28 hours (incubation time varied because of weather and the logistics of the cruise), the ROV returned to the chambers and triggered the second sampling bottle. After this second water collection, the sponge and its attachment substrate were collected to estimate volume and biomass and to normalize DSi consumption rate, as indicated in the above section of "Ex situ incubations for kinetics of DSi consumption." Seawater samples were processed for determination of the initial and final DSi concentrations and deriving consumption rate as described in the above section. Individual BSi production in the natural habitat ------------------------------------------------ We aimed to estimate how much BSi---mass of siliceous skeleton---was produced by the sponges in their natural habitat per unit time to serve as a comparison for the predictions of DSi consumption from the laboratory-based kinetic model. Preliminary field work revealed that *V. pourtalesii* biofouled acoustic mooring arrays deployed by the Ocean Tracking Network (OTN; Dalhousie University) from Halifax to the shelf break on the Scotian Shelf ([Fig. 1, E and F](#F1){ref-type="fig"}). During routine servicing of these arrays, *V. pourtalesii* individuals were recovered from two moorings that had been immersed for 15 and 58 months, and rates of sponge growth and BSi production during those two periods were estimated. During an additional recovery made during a 2018 oceanographic mission, several substrata offered for *V. pourtalesii* settlement about \~1 year before brought out no sponge recruits (SponGES Consortium, unpublished information). It suggested that larval release is likely to occur in July to August. Therefore, the two largest sponges on the mooring immersed for 15 months were estimated to be 14 months old, and the three largest sponges on the 58-month-old mooring were estimated to be 54 months old. After determining their volume, sponges were wet-weighed (g), dried at 60°C to a constant dry weight (g), and combusted at 540°C for 10 hours for AFDW (g). The BSi content was estimated as 95% of the ash weight. However, for comparative purposes, the BSi content of some of the larger sponges was also estimated through the loss of weight before and after desilicification of the sample in 5% hydrofluoric acid. Gene expression and phylogenetic analyses ----------------------------------------- We collected tissue samples from six control individuals collected from Emerald Basin and from six DSi-enriched individuals (\#3, \#4, \#5, \#7, \#9, and \#10) used in the kinetic experiment (tables S2 and S3). Approximately 5 cm^3^ of tissue sample was collected immediately upon termination of the kinetic experiment, when concentrations were 250 μM DSi, and before any further processing of the sponges for morphometric studies. Samples were preserved in RNAlater (Ambion) at 4°C for 24 hours and then stored at −80°C until further processing. Total RNA from each sample was extracted using a TRIzol (Thermo Fisher Scientific, UK) standard extraction followed by a polyA selection of mRNA using a Dynabeads Direct mRNA purification kit (Thermo Fisher Scientific, UK) according to the manufacturer's protocols. These were used to produce an RNA library for next-generation sequencing using the ScriptSeq library prep kit v2 provided by Illumina (CA, USA). Sequencing was performed on a single run of the Illumina NextSeq 500 platform by the Natural History Museum's (London, UK) Sequencing Unit at 2 × 150 bp read length. A total of 202,315,355 reads were sequenced and remained after the removal of adaptor sequence and initial quality screening. We visualized the quality across the sequencing reads using FastQC (Babraham Bioinformatics) and performed additional trimming with Trimmomatic ([@R61]) to remove areas of sequence with low Phred scores and any residual sequences shorter than 36 bp (settings: ILLUMINACLIP:/ ScriptSeq_adapters.fa:2:30:10 LEADING:3 TRAILING:3 HEADCROP:8 SLIDINGWINDOW:4:15 MINLEN:36, where ScriptSeq_adapters.fa contained the sequences of the adaptors used in sequencing). The remaining total of 173,280,196 paired reads (unpaired reads were not retained) were then used to construct a transcriptome using Trinity 2.4.0 ([@R62]) with default options. Raw reads can be accessioned at the Short Read Archive (SRA) under BioProject number PRJNA580361. Annotation and gene expression analyses --------------------------------------- We obtained the annotations for our de novo--assembled transcriptome using "BlastX" command in DIAMOND ([@R63]) against two different databases: RefSeq and Swiss-Prot (last accessed in August to September 2018), retaining only the best hit with an *e*-value threshold of 10^−5^ in both cases. Then, we used Blast2GO ([@R64]) to obtain the GO terms associated with the blast hits obtained against Swiss-Prot for Biological Process, Molecular Function, and Cellular Component, with the GOSlim function. Completeness of the transcriptome was assessed by searching for single-copy orthologs in both eukaryotic and metazoan databases using BUSCO ([@R65]). For the gene expression analysis, we used standard mapping strategies Bowtie2 and RSEM (RNA-Seq by Expectation Maximization) as implemented in Trinity to collect the number of reads aligning with each of our genes in the reference transcriptome and then used the raw count reads against each gene to perform the differential gene expression analysis using edgeR ([@R66]) as implemented in Trinity. We only retained the genes with a corrected *P* value of false discovery rate (FDR) of 0.001 and at least fourfold expression (-P 1e-3 -C 2). The DE genes were annotated using the gene IDs from Swiss-Prot and also the GO terms (see table S2). To visualize the GO categories enriched in our DE genes, we used REVIGO ([@R67]) and plotted circos plots in the R statistical software program. The expression levels of target genes involved in silica production for spicule building in hexactinellids (polymerization and biomineralization: *glassin* and *cathepsins*; ion transporters: *transporters arsB*, *aquaporins 3* and *9*, *solute carriers*, *magnesium transporters NIPA*, *ammonia transporters*, *sodium-potassium calcium exchanger*, and *sodium bile acid cotransporter*) were collected from the normalized expression values using trimmed mean of *M*-value (TMM) matrix obtained with edgeR and plotted using pheatmap in R. Phylogenetic analyses --------------------- In our phylogenetic analysis of aquaporins and arsB transporters in *V. pourtalesii*, we included sequences of all major groups of aquaporins and arsenite-antimonite efflux protein sequences collected from GenBank and our transcriptomic databases for sponges (see accession numbers in [Fig. 5](#F5){ref-type="fig"}, fig. S4, and data file S1). Phylogenetic analyses of the genes coding for the passive and active transporters were conducted separately, with ingroup and outgroup sequences selected on previous knowledge of the protein families---aquaporins ([@R34]) and arsB/Lsi2 transporters ([@R49]). All sequences were aligned with MAFFT v5 ([@R68]), and the phylogenetic trees were built with RAxML 8.1.22 ([@R69]) with Le-Gascuel as the protein model of substitution and GAMMAI correction for rate variation among sites as obtained from PROTTEST ([@R70]) using the Akaike information criterion. Reliability of the phylogenetic trees was estimated using 100 bootstrap replicates. As a congruence test for the previous ML-based phylogenies, a subsequent Bayesian phylogenetic analysis was conducted for both protein families, using MrBayes ([@R71]) v3.2.2 x64 with the model provided by PROTTEST. The Monte Carlo Markov Chain search was run over at least 20,000,000 generations. Trees were sampled every 2500 generations, and the first 25% of trees gathered were discarded as "burn-in." Convergence was checked with Tracer 1.7 ([@R72]). Supplementary Material ====================== ###### aba9322_Movie_S2.avi ###### aba9322_SM.pdf ###### aba9322_Data_file_S1.xlsx ###### aba9322_Movie_S1.avi We thank B. MacDonald for help with logistics during sponge collection and maintenance in experimental conditions, C. Sitjà for help with sponge dry and ash weights, and M. García-Puig for video editing; F. Whoriskey and J. Pratt of the (OTN, Dalhousie University) for the collection of specimens from the OTN moorings; G. Yahel (Ruppin Academic Center) for advice when building the seawater collectors of the incubation chambers. **Funding:** This research was completed mostly by funds from the SponGES H2020 grant (BG-01-2015.2, agreement number 679849-2) to M.M. and A.R. and from Fisheries and Oceans Canada Strategic Program for Ecosystem-Based Research and Advice (SPERA) and International Governance Strategy (IGS) projects awarded to L.B. and E.K. This study also benefitted from funding by a PBS grant (MINECO CTM2015-67221-R) to M.M. This study is in memory of Hans Tore Rapp, who passed away on 7 March 2020, and who was the main coordinator of the H2020 SponGES project that has made this research possible. **Author contributions:** M.M. designed the study. The physiological experiments and nutrient analyses were performed and analyzed by M.M. and M.L.-A. E.K. and L.B. dealt with mapping, collection of organisms, and the logistics of cruise organization and laboratory gearing for in vivo experimentation with deep-sea sponges and nutrient analysis. A.R. and V.K. conducted the transcriptomic analysis, the analysis of differential gene expression, and the phylogenetic analyses, being the molecular data interpreted by M.M. and A.R. M.M. assembled the first draft of manuscript, which was further refined through invaluable contributions by all authors. **Competing interests:** The authors declare that they have no competing interests. **Data and materials availability:** All data and access numbers needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Raw transcript reads can be accessioned at the SRA under BioProject number PRJNA580361. Additional data related to this paper may be requested from the authors. Supplementary material for this article is available at <http://advances.sciencemag.org/cgi/content/full/6/28/eaba9322/DC1> [View/request a protocol for this paper from *Bio-protocol*](https://en.bio-protocol.org/cjrap.aspx?eid=10.1126/sciadv.aba9322).
{ "pile_set_name": "PubMed Central" }
Grape Hyacinths or Muscari are great for rockeries, pots, nooks and crannies, along pathways, drifts, grass plantings or at the front of beds. The flowers open late winter to early spring and look brilliant planted with Miniature Daffodils. The grass like foliage emerges late autumn to winter. Plant your Grape Hyacinths into well drained soil, humus rich is ideal but they will cope in poor soils with good drainage. Water the bulbs in, natural rainfall should take care of the rest, you will only need to water them they dry out during growth. Grape Hyacinth bulbs are best left to naturalise where they will multiply to form nice clumps. Add some general purpose synthetic fertiliser or blood and bone as the flowers are forming and again as they are fading to ensure good growth in the coming year. They are known as Grape Hyacinths because the florets resemble a bunch of grapes, they also have a very light fragrance which is similar to fresh grapes. They are native to Mediterranean Europe and South Western Asia.
{ "pile_set_name": "Pile-CC" }
Gurazada Apparao University The Gurazada Apparao University is a public university located in Vizianagaram, Andhra Pradesh. It was established on 14 February 2019. The university was named after Gurazada Apparao, a noted Indian playwright, dramatist, poet, and writer known for his works in Telugu theatre. History Earlier, the university is the outgrowth initiative of the Post-Graduate centre of Andhra University, which was established in 21 September 2004 with the aim of ensure better education to poor and backward communities in and around Vizianagaram. The Engineering college of the university was earlier known as JNTUK Vizianagaram. This Engineering college was established as a constituent institute of Jawaharlal Nehru Technological University, Kakinada on September 2007. On 14 February 2019, the University was formed by merging the Andhra University PG Centre, Vizianagaram and JNTUK Vizianagaram campus. It was inaugurated by then Chief Minister of Andhra Pradesh Nara Chandrababu Naidu,. Campus The university is spread across an area of 189 acres. It serves the educational needs of Vizianagaram district. Vizianagaram is the main city of the Vizianagaram District of North Eastern Andhra Pradesh in Southern India. Vizianagaram translates to the "city of victory" and is also given the nickname of the "city of education". References Category:Universities in Andhra Pradesh Category:Universities and colleges in Vizianagaram district Category:Educational institutions established in 2019 Category:2019 establishments in India Category:Vizianagaram
{ "pile_set_name": "Wikipedia (en)" }
Q: Natural vs surrogate keys on support tables I have read many articles about the battle between natural versus surrogate primary keys. I agree in the use of surrogate keys to identify records of tables whose contents are created by the user. But in the case of supporting tables what should I use? For example, in a hypothetical table "orderStates". The valuse in this table are not editable (the user can't insert, modify or delete this values). If you use a natural key would have the following data: TABLE ORDERSTATES {ID: "NEW", NAME: "New"} {ID: "MANAGEMENT" NAME: "Management"} {ID: "SHIPPED" NAME: "Shipped"} If I use a surrogate key would have the following data: TABLE ORDERSTATES {ID: 1 CODE: "NEW", NAME: "New"} {ID: 2 CODE: "MANAGEMENT" NAME: "Management"} {ID: 3 CODE: "SHIPPED" NAME: "Shipped"} Now let's take an example: a user enters a new order. In the case in which use natural keys, in the code I can write this: newOrder.StateOrderId = "NEW"; With the surrogate keys instead every time I have an additional step. stateOrderId_NEW = .... I retrieve the id corresponding to the recod code "NEW" newOrder.StateOrderId = stateOrderId_NEW; The same will happen every time I have to move the order in a new status. So, in this case, what are the reason to chose one key type vs the other one? A: The answer is: it depends. In your example of changing the order state inside your code, ask yourself how likely it is that you would create constants for those states (to avoid making typos for instance). If so, both will accomplish the same. In the case that a new order state gets submitted via a form, you would build the drop down (for example) of possible values using either the natural or surrogate key, no difference there. There's a difference when you're doing a query on the order table and wish to print the state for each order. Having a natural key would avoid the need to make another join, which helps (albeit a little). In terms of storage and query performance, the surrogate key is respectively smaller and faster (depending on the table size) in most cases. But having said all that, it just takes careful consideration. Personally I feel that surrogate keys have become something like a dogma; many developers will use them in all their tables and modeling software will automatically add them upon table creation. Therefore you might get mixed reactions about your choice, but there's no hard rule forbidding you to use them; choose wisely :)
{ "pile_set_name": "StackExchange" }
--- author: - '$^{1}$Ryosuke Akashi[^1] and $^{1,2}$Ryotaro Arita' title: 'Density Functional Theory for Plasmon-assisted Superconductivity' --- Introduction ============ Superconductivity constitutes one of the most fascinating fields in condensed matter physics ever since its discovery in the early twentieth century. After the success of its description by the Bardeen-Cooper-Schrieffer theory,[@BCS] particular attention has been paid to the material dependence of the superconducting transition temperature ($T_{\rm c}$): that is, why some materials such as the celebrated cuprate[@Bednorz-Muller] exhibit high $T_{\rm c}$ but others do not? Since superconductivity emerges as a result of subtle interplay and competition of interactions between atoms and electrons having much larger energy scales, $T_{\rm c}$ is extremely sensitive to details of the electronic and crystal structure. Thus, an accurate quantitative treatment is essential to understand the emergence of high values of $T_{\rm c}$. For the conventional phonon-mediated mechanism, quantitative calculations have been performed within the Migdal-Eliashberg (ME) theory[@Migdal-Eliashberg] implemented with the first-principles method based on the Kohn-Sham density functional theory[@Kohn-Sham-eq]: In a variety of systems, phonon properties are well reproduced by the density functional perturbation theory[@Baroni-review] or the total-energy method[@Kunc-Martin-frozen] within the local density approximation[@Ceperley-Alder; @PZ81]. By using the calculated phonon spectrum and electron-phonon coupling as inputs, it has been shown that the ME theory explains the qualitative tendency of $T_{\rm c}$ for various materials[@Savrasov-Savrasov; @Choi-MgB2]. However, the ME formalism is not suitable for full *ab initio* calculations since it is difficult to treat electron-electron interaction nonempirically. When we calculate $T_{\rm c}$ by solving the Eliashberg equation or using related approximate formulae such as the McMillan equation,[@McMillan; @AllenDynes] we vary the value of $\mu^{\ast}$ (Ref. ) representing the effective electron-electron Coulomb interaction which suppresses the Cooper-pair formation, and examine whether the range of the resulting $T_{\rm c}$ covers the experimentally observed value. With such a semi-empirical framework, the material dependence of the electron-electron interaction cannot be understood quantitatively. The recent progress in the density functional theory for superconductors (SCDFT)[@Oliveira; @Kreibich; @GrossI] has changed the situation. There, a non-empirical scheme describing the physics in the ME theory was formulated: Based on the Kohn-Sham orbital, it treats the weak-to-strong electron-phonon coupling, the screened electron-electron interaction within the static approximation, and the retardation effect[@Morel-Anderson] due to the difference in the energy ranges of these interactions. This scheme has been demonstrated to reproduce experimental $T_{\rm c}$s of various conventional phonon-mediated superconductors with deviation less than a few K.[@GrossII; @Floris-MgB2; @Sanna-CaC6; @Bersier-CaBeSi] More recently, it has been employed to examine the validity of the ME theory in fully gapped superconductors with high $T_{\rm c}$ such as layered nitrides[@Akashi-MNCl] and alkali-doped fullerides.[@Akashi-fullerene] Through these applications, the current SCDFT has proved to be an informative method well-suited to investigate the nontrivial effects of the electron-electron interaction behind superconducting phenomena. Although the electron-electron interaction just suppresses the pairing in the ME theory, possibilities of superconductivity induced by the electron-electron interaction have long been also explored. Since the discovery of the cuprates,[@Bednorz-Muller] superconductivity induced by short-range Coulomb interaction has been extensively investigated.[@Scalapino-review2012] On the other hand, there has been many proposals of superconducting mechanisms concerning long-range Coulomb interaction since the seminal work of Kohn and Luttinger.[@Kohn-Luttinger] In particular, there is a class of mechanisms that exploit the dynamical structure of the screened Coulomb interaction represented by the frequency-dependent dielectric function $\varepsilon(\omega)$: e.g., the plasmon[@Radhakrishnan1965; @Frohlich1968; @Takada1978; @Rietschel-Sham1983] and exciton[@Little1967] mechanisms. Interestingly, such mechanisms can cooperate with the conventional phonon mechanism. Since they usually favor $s$-wave pairing, they have a chance to enhance $s$-wave superconductivity together with the phonon mechanism. Taking this possibility into account, these mechanisms are important even where they do not alone induce superconductivity. Therefore, they are expected to involve a broader range of systems than originally expected in the early studies. In fact, for a variety of systems having low-energy electronic excitations, theoretical model calculations addressing such a cooperation have been performed: SrTiO$_{3}$ [@Koonce-Cohen1967; @Takada-SrTiO3] with small plasmon frequencies due to small electron densities, $s$-$d$ transition metals [@Garland-sd] where “demon" acoustic plasmons have been discussed,[@Pines-demon; @Ihm-Cohen1981] metals sandwiched by small-gap semiconductors [@Ginzburg-HTSC; @ABB1973], and layered systems where two-dimensional acoustic plasmons are proposed to become relevant [@Kresin1987; @Bill2002-2003]. Moreover, recent experimental discoveries of high-temperature superconductivity in doped band insulators have stimulated more quantitative analyses on effects of the cooperation [@Yamanaka1998; @Bill2002-2003; @Taguchi2006; @Taniguchi2012; @Ye2012]. Considering the above grounds, the situation calls for an *ab initio* theory that treats the phonon-mediated interaction and the dynamical screened Coulomb interaction together, with which one can study from the superconductors governed by phonons or the dynamical Coulomb interaction to those by their cooperation on equal footing. The aim of our present study is to establish this by extending the applicability of SCDFT. In this paper, we review the recent theoretical extension to include the plasmon-induced dynamical screened Coulomb interaction.[@Akashi-plasmon] In Sec. \[sec:theory\], we present the theoretical formulation and its practical implementation, and discuss how plasmons can enhance superconductivity. Section \[sec:appl-Li\] describes the application to elemental lithium under high pressures, for which the plasmon effect is expected to be substantial because of its relatively dilute electron density. In Sec. \[sec:summary\] we summarize our results and give concluding remarks. Formulation {#sec:theory} =========== General formalism {#subsec:theory-general} ----------------- Let us start from a brief review of SCDFT.[@GrossI] The current SCDFT employs the gap equation $$\begin{aligned} \Delta_{n{\bf k}}\!=\!-\mathcal{Z}_{n\!{\bf k}}\!\Delta_{n\!{\bf k}} \!-\!\frac{1}{2}\!\sum_{n'\!{\bf k'}}\!\mathcal{K}_{n\!{\bf k}\!n'{\bf k}'} \!\frac{\mathrm{tanh}[(\!\beta/2\!)\!E_{n'{\bf k'}}\!]}{E_{n'{\bf k'}}}\!\Delta_{n'\!{\bf k'}} \label{eq:gap-eq}\end{aligned}$$ to obtain $T_{\rm c}$, which is specified as the temperature where the calculated value of gap function $\Delta_{n{\bf k}}$ becomes zero. Here, $n$ and ${\bf k}$ denote the band index and crystal momentum, respectively, $\Delta$ is the gap function, and $\beta$ is the inverse temperature. The energy $E_{n {\bf k}}$ is defined as $E_{n {\bf k}}$=$\sqrt{\xi_{n {\bf k}}^{2}+\Delta_{n {\bf k}}^{2}}$ and $\xi_{n {\bf k}}=\epsilon_{n {\bf k}}-\mu$ is the one-electron energy measured from the chemical potential $\mu$, where $\epsilon_{n {\bf k}}$ is obtained by solving the normal Kohn-Sham equation in density functional theory $ \mathcal{H}_{\rm KS}|\varphi_{n{\bf k}}\rangle=\epsilon_{n{\bf k}} |\varphi_{n{\bf k}}\rangle $ with $\mathcal{H}_{\rm KS}$ and $|\varphi_{n{\bf k}}\rangle$ being the Kohn-Sham Hamiltonian and the Kohn-Sham state, respectively. The functions $\mathcal{Z}$ and $\mathcal{K}$, which are called as the exchange-correlation kernels, describe the effects of all the interactions involved: They are defined as the second functional derivative of the free energy with respect to the anomalous electron density. A formulation of the free energy based on the Kohn-Sham perturbation theory\cite{} enables us practical calculations of the exchange-correlation functionals using the Kohn-Sham eigenvalues and eigenfunctions derived from standard *ab initio* methods. The nondiagonal exchange-correlation kernel $\mathcal{K}$ is composed of two parts $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el}$ representing the electron-phonon and electron-electron interactions, whereas the diagonal kernel $\mathcal{Z}$ consists of one contribution $\mathcal{Z}$$=$$\mathcal{Z}^{\rm ph}$ representing the mass renormalization of the normal-state band structure due to the electron-phonon coupling. The phonon parts, $\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$, properly treats the conventional strong-coupling superconductivity. The electron-electron coutribution $\mathcal{K}^{\rm el}$ is the matrix element of the *static* screened Coulomb interaction $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}(0)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ with $V$ being the bare Coulomb interaction. Currently, the Thomas-Fermi approximation and the random-phase approximation (RPA) have been applied for the static dielectric function $\varepsilon^{-1}(0)$.[@Massidda] With these settings, the two parts of the nondiagonal kernel have different Kohn-Sham energy dependence: $\mathcal{K}^{\rm ph}$ has large values only for the states within the phonon energy scale, whereas $\mathcal{K}^{\rm el}$ decays slowly with the electronic energy scale. With this Kohn-Sham-state dependence, the retardation effect[@Morel-Anderson] is quantitatively treated. Thus, within the framework of the density functional theory, the SCDFT accurately treats the physics of Migdal-Eliashberg theory (based on the Green’s function). ![(Color online) Diagram corresponding to the electron nondiagonal kernel, $\mathcal{K}^{\rm el}$. The solid line with arrows running in the opposite direction denotes the electronic anomalous propagator [@GrossI]. The blue wavy line denotes the screened electronic Coulomb interaction, which is a product of the inverse dielectric function $\varepsilon^{-1}$ and the bare Coulomb interaction $V$.[]{data-label="fig:diagram"}](diagrams_ed_130926.jpg) The current setting $\mathcal{K}^{\rm el}$ $=$ $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|$ $\varepsilon^{-1}(0)V$ $|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ corresponds to the anomalous exchange contribution from the screened Coulomb interaction represented in Fig. \[fig:diagram\] with the $\omega$ dependence of $\varepsilon$ omitted. To incorporate effects of the plasmon on the interaction, we retain its frequency dependence. The diagram thus yields a following form $$\begin{aligned} \hspace{-30pt}&& \mathcal{K}^{\rm el, dyn}_{n{\bf k},n'{\bf k}} \!=\! \lim_{\{\Delta_{n{\bf k}}\}\rightarrow 0} \frac{1}{{\rm tanh}[(\beta /2 ) E_{n{\bf k}}]} \frac{1}{{\rm tanh}[(\beta /2) E_{n'{\bf k}'}]} \nonumber \\ \hspace{-10pt}&& \hspace{10pt}\times \frac{1}{\beta^{2}} \sum_{\tilde{\omega}_{1}\tilde{\omega}_{2}} F_{n{\bf k}}({\rm i}\tilde{\omega}_{1}) F_{n'{\bf k}'}({\rm i}\tilde{\omega}_{2}) W_{n{\bf k}n'{\bf k}'}[{\rm i}(\tilde{\omega}_{1}\!\!-\!\!\tilde{\omega}_{2})] , \label{eq:kernel-dyn}\end{aligned}$$ where $F_{n{\bf k}}({\rm i}\tilde{\omega})$ $=$ $\frac{1}{{\rm i}\tilde{\omega}\!+\!E_{n{\bf k}}} \!-\! \frac{1}{{\rm i}\tilde{\omega}\!-\!E_{n{\bf k}}} $ and $\tilde{\omega}_{1}$ and $\tilde{\omega}_{2}$ denote the Fermionic Matsubara frequency. Function $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$$\equiv$$\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}({\rm i}\omega)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ is the screened Coulomb interaction. We then apply the RPA[@RPA] to the $\omega$-dependent dielectric function, which is a standard approximation to describe the plasmon under crystal field. Formally, the present RPA kernel can be also derived from the RPA free energy defined by Eq. (13) in Ref. : The set of terms of order $O(FF^{\dagger})$ (i.e., the set of the diagrams having only one anomalous bubble taken from Fig. 2 in Ref. ) corresponds to the present kernel. The Coulomb interaction $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu)$ is practically calculated using a certain set of basis functions. Let us here summarize the plane-wave representation, which has been employed in our studies: $$\begin{aligned} &&\hspace{-30pt}W_{n{\bf k}n'{\bf k}'}({\rm i}\nu) \nonumber \\ && = \frac{4\pi}{\Omega}\! \sum_{{\bf G}\!{\bf G}'}\! \frac{ \!\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}\!)\tilde{\varepsilon}^{-1}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)\!\{\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}'\!)\!\}^* } { |{\bf k}-{\bf k}'+{\bf G}||{\bf k}-{\bf k}'+{\bf G}'| }\!, \label{eq:K-el-RPA}\end{aligned}$$ with $\tilde{\varepsilon}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)$ being the symmetrized dielectric matrix,[@Hybertsen-Louie] defined by $$\begin{aligned} \tilde{\varepsilon}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu) \!\!\!\!&=&\!\!\!\!\!\! \delta_{{\bf G}{\bf G}'} \nonumber \\ &&\!\!\!-4\pi\frac{1}{|{\bf K}\!+\!{\bf G}|}\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)\frac{1}{|{\bf K}\!+\!{\bf G}'|} .\end{aligned}$$ The independent-particle polarization $\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)$ denotes $$\begin{aligned} \chi^{0}_{{\bf G}{\bf G}'}({\bf K};{\rm i}\nu) &\!\!\!\!=&\!\!\!\! \frac{2}{\Omega} \sum_{{\bf k}}\sum_{\substack{n:{\rm unocc}\\n':{\rm occ}}} [\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G})]^{\ast}\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G}') \nonumber \\ && \hspace{-35pt}\times [\frac{1}{{\rm i}\nu \!-\! \epsilon_{n {\bf k}+{\bf K}} \!+\! \epsilon_{n' {\bf k}}} - \frac{1}{{\rm i}\nu \!+\! \epsilon_{n {\bf k}+{\bf K}} \!-\! \epsilon_{n' {\bf k}}}] , \label{eq:chi-def}\end{aligned}$$ where the band indices $n$ and $n'$ run through the unoccupied bands and occupied bands for each **k**, respectively. The matrix $\rho^{n'{\bf k}'}_{n{\bf k}}({\bf G})$ is defined by $$\begin{aligned} \rho^{n'{\bf k}'}_{n{\bf k}}({\bf G}) &=& \int_{\Omega} d{\bf r} \varphi^{\ast}_{n'{\bf k}'}({\bf r}) e^{{\rm i}({\bf k}'-{\bf k}+{\bf G})\cdot{\bf r}} \varphi_{n{\bf k}}({\bf r}). \label{eq:rho}\end{aligned}$$ So far, we have ignored the intraband (Drude) contribution to $\tilde{\varepsilon}$ for ${\bf k}-{\bf k}'=0$: The kernel including this contribution diverges as $({\bf k}-{\bf k}')^{-2}$, whereas the total contribution by the small ${\bf k}-{\bf k}'$ to $T_{\rm c}$ should scale as $({\bf k}-{\bf k}')^{1}$ because of the ${\bf k}'$ integration in Eq. (\[eq:gap-eq\]). ![(Color online) (a) Energy dependence of nondiagonal kernels entering the gap equation. Phonon-induced attraction, static Coulomb repulsion, and the plasmon-induced high-energy Coulomb repulsion are indicated in red, green, and blue, respectively. (b) Approximate solution of the gap equation solved with the phonon and static Coulomb parts. (c) Energy dependence of the kernels in a case where the phonon part is negligibly small and the plasmon part is dominant.[]{data-label="fig:interaction"}](interactions_130920.jpg) The physical meaning of the present dynamical correction to the previous static kernel is as follows. In real systems, screening by charge fluctuations is ineffective for the interaction with large energy exchanges \[i.e., $\varepsilon(\omega) \xrightarrow{\omega \rightarrow \infty} 1$\], whereas it becomes significant as the energy exchange becomes small compared with typical energies of charge excitations. However, the conventional static approximation ignores this energy dependence of the screening by extrapolating the static value of the interaction to the high energy, and underestimates the screened Coulomb repulsion with large energy exchanges. The present extension corrects this underestimation, and gives additional repulsive contribution to the Coulomb matrix elements between the Cooper pairs having much different energies. Interestingly, this additional contribution can raise $T_{\rm c}$. Let us discuss this point in terms of the interaction kernel entering the energy-averaged gap equation $$\begin{aligned} \Delta(\xi) = -\frac{1}{2}N(0) \int \!\! d\xi' \! \mathcal{K}(\xi\!,\xi')\frac{{\rm tanh}[(\beta/2)\xi']}{\xi'}\Delta(\xi') , \label{eq:gap-eq-ave}\end{aligned}$$ where we define the averaged nondiagonal kernel as $\mathcal{K}(\xi,\xi')=\frac{1}{N(0)^{2}}\sum_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n{\bf k}})\delta(\xi'-\xi_{n'{\bf k}'})K_{n{\bf k}n'{\bf k}'}$ with $N(0)$ being electronic density of states at the Fermi level and omit the diagonal kernel for simplicity. This equation qualitatively describes coherent Cooper pairs represented by $\Delta(\xi)$ scattered by the pairing interactions. Suppose $\mathcal{K}=\mathcal{K}^{\rm ph}+\mathcal{K}^{\rm el}$, $N(0)\mathcal{K}^{\rm ph}(\xi,\xi')=-\lambda$ within the Debye frequency $\omega_{\rm ph}$ and $N(0)\mathcal{K}^{\rm el}(\xi,\xi')=\mu$ within a certain electronic energy range such as $E_{\rm F}$ (considering red and green parts in panel (a) of Fig. \[fig:interaction\]). Solving this equation by assuming $\Delta(\xi)$ to be nonzero and constant only within $\omega_{\rm ph}$, we obtain the BCS-type $T_{\rm c}$ formula $T_{\rm c}\propto \omega_{\rm ph}$$\times$$ {\rm exp}[-1/(\lambda-\mu)]$ for $\mu-\lambda<0$. However, if we allow $\Delta(\xi)$ to have nonzero constant values for $|\xi|>\omega_{\rm ph}$, we instead obtain $T_{\rm c}\propto \omega_{\rm ph}$$\times$${\rm exp}[-1/(\lambda-\mu^{\ast})]$ with $\mu^{\ast}=\mu/(1+\mu{\rm ln}[E_{\rm F}/\omega_{\rm ph}])<\mu$, and then, the resulting values of $\Delta(\xi)$ have opposite signs for $|\xi|<\omega_{\rm ph}$ and $|\xi|>\omega_{\rm ph}$ \[panel (b) in Fig. \[fig:interaction\]\]. Here, even if the total low-energy interaction $\mu-\lambda$ is repulsive, superconducting state realizes if $\mu^{\ast}-\lambda<0$. This weakening of the effective Coulomb repulsion is the celebrated retardation effect,[@Morel-Anderson] and its origin is the negative values of the high-energy gap function: Since the scattering by repulsion between Cooper pairs having $\Delta$ with opposite signs is equivalent to the scattering by attraction between those with same signs, there is a gain of the condensation energy.[@Kondo-PTP1963] Next, let us add the plasmon contribution \[blue part in panel (a)\], which enhances the repulsion by $\Delta \mu$ for $\xi$ with an energy scale of plasmon frequency $\omega_{\rm pl}$. Then, more condensation energy can be gained by enhancing the high-energy negative gap function, which increases $T_{\rm c}$. As an extreme situation, one can also consider the case where the phonon-induced attraction is negligible and the plasmon-induced repulsion is dominant \[panel (c)\]. Obviously, a superconducting solution exists even in this case because the discussion about the above $T_{\rm c}$ formula is also valid with the transformation $\lambda$$\rightarrow$$\Delta \mu$, $\mu$$\rightarrow$$\mu+\Delta \mu$ and $\omega_{\rm ph}$$\rightarrow$$\omega_{\rm pl}$. These discussions illustrate that the plasmon contribution can increase $T_{\rm c}$ by enhancing the high-energy repulsion. To the authors’ knowledge, the plasmon mechanism of the above-mentioned type to enhance $T_{\rm c}$ has been originally studied by Takada[@Takada1978] based on the Green’s function formalism for two- and three-dimensional homogeneous electron gas. Using the gap equation derived by himself, he has also performed calculations of $T_{\rm c}$ considering both the phonons and plasmons for doped SrTiO$_{3}$ (Ref. ) and metal-intercalated graphites.[@Takada-graphite1982; @Takada-graphite2009] Our present formalism, which treats the local field effect of inhomogeneous electron distribution behind the phonon and plasmon, is a DFT-based counterpart of his theory.[@comment-counterpart] Multipole plasmon approximation {#subsec:plasmon-pole} ------------------------------- Next we present a formulation to calculate $T_{\rm c}$ using the extended kernel. Evaluation of Eq. (\[eq:kernel-dyn\]) requires to perform the double discrete Matsubara summations for electronic energy scale, which is impractically demanding. We then analytically carry out the summations by approximating $W_{n{\bf k}n'{\bf k'}}$ as a simple function. For this purpose, we employ a multipole plasmon approximation $$\begin{aligned} \tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\tilde{\nu}_{m}) \!\!\!\!&=&\!\!\!\! W_{n{\bf k}n'{\bf k}'}(0) \nonumber \\ &&+ \sum^{N_{\rm p}}_{i} a_{i;n{\bf k}n'{\bf k}'} g_{i;n{\bf k}n'{\bf k}'}(\tilde{\nu}_{m}) , \label{eq:W-tilde}\end{aligned}$$ with $g_{i;n{\bf k}n'{\bf k}'}$ being $$\begin{aligned} g_{i;n{\bf k}n'{\bf k}'}(x) = \frac{2}{\omega_{i;n{\bf k}n'{\bf k}'}} -\frac{2\omega_{i;n{\bf k}n'{\bf k}'}}{x^{2}\!+\!\omega^{2}_{i;n{\bf k}n'{\bf k}'}} .\end{aligned}$$ Here, $\tilde{\nu}_{m}$ denotes the Bosonic Matsubara frequency. In contrast with the case of uniform electron gas, inhomogeneous systems can have a variety of plasmon modes, and our aim is to treat these modes in a unified manner. Substituting Eq. (\[eq:W-tilde\]) in Eq. (\[eq:kernel-dyn\]), we finally obtain $\mathcal{K}^{\rm el,dyn}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ with $\mathcal{K}^{\rm el,stat}_{n{\bf k}n'{\bf k}'}$$=$$W_{n{\bf k}n'{\bf k}'}(0)$ and $$\begin{aligned} \hspace{-10pt} \Delta\mathcal{K}^{\rm el}_{n{\bf k},n'{\bf k}} &\!\!\!\!\!\!\!=&\!\!\!\!\!\! \sum_{i}^{N_{\rm p}}\!2a_{i;n{\bf k}n'{\bf k}'} \!\left[ \frac{1} {\omega_{i;n{\bf k}n'{\bf k}'}} \right. \nonumber \\ && \hspace{-50pt} \left. + \frac{ I\!(\xi_{n{\bf k}}\!,\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!) \!\!-\!\! I\!(\xi_{n{\bf k}}\!,-\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!) }{{\rm tanh}[(\beta/2) \xi_{n{\bf k}}]{\rm tanh}[(\beta/2) \xi_{n'{\bf k}'}]} \right] , \label{eq:Delta-kernel}\end{aligned}$$ where the function $I$ is defined by Eq. (55) in Ref. . In order to calculate Eq. (\[eq:Delta-kernel\]), we determine the plasmon coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ and the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ by the following procedure: (i) Calculate the screened Coulomb interaction for the [*real*]{} frequency grid $W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$, where {$\nu_{j}$} ($j=1, 2, . . .N_{\omega}$) specifies the frequency grid on which the numerical calculation is performed, and $\eta$ is a small positive parameter, (ii) determine the plasmon frequencies $\{\omega_{i;n{\bf k}n'{\bf k}'}\}$ by the position of the peaks up to the $N_{\rm p}$-th largest in ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$, (iii) calculate the screened Coulomb interaction for the [*imaginary*]{} frequency grid $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$, and (iv) using the calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$, determine the plasmon coupling coefficients $\{a_{i;n{\bf k}n'{\bf k}'}\}$ via the least squares fitting by $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$. For the fitting, the variance to be minimized is defined as $$\begin{aligned} S_{n{\bf k}n'{\bf k}'} \!\!\!\!&=&\!\!\!\! \sum^{N_{\omega }}_{j} \delta \omega_{j}\biggl[ W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j}) -W_{n{\bf k}n'{\bf k}'}(0) \nonumber \\ && - \sum^{N_{\rm p}}_{i} a_{i;n{\bf k}n'{\bf k}'} g_{i;n{\bf k}n'{\bf k}'}(\nu_{j}) \biggr]^{2} ,\end{aligned}$$ and we have introduced a weight $\delta \omega_{j}$ satisfying $\sum^{N_{\omega }}_{j}\delta \omega_{j}$$=$$1$. With all the plasmon frequencies given, the extrema condition $\frac{\partial S}{\partial a_{i}}=0$ ($i=1$, . . . , $N_{\rm p}$) reads $$\begin{aligned} \begin{pmatrix} a_{1}\\ a_{2}\\ \vdots \end{pmatrix} \!\!\!\!&=&\!\!\!\! \begin{pmatrix} V^{gg}_{11} & V^{gg}_{12} & \cdots \\ V^{gg}_{21} & V^{gg}_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}^{-1} \begin{pmatrix} V^{Wg}_{1} \\ V^{Wg}_{2} \\ \vdots \end{pmatrix} . \label{eq:fit-coeff}\end{aligned}$$ Here, $V^{Wg}$ and $V^{gg}$ are defined by $$\begin{aligned} V^{Wg}_{i} &\!\!\!\!\!=&\!\!\!\!\! \sum_{j=1}^{N_{\omega}} \delta\omega_{j}[W_{j}-W(0)] g_{i}(\nu_{j}) ,\\ V^{gg}_{ij} &\!\!\!\!\!=&\!\!\!\!\! \sum_{k=1}^{N_{\omega}} \delta\omega_{k} g_{i}(\nu_{k}) g_{j}(\nu_{k}) .\end{aligned}$$ For arbitrary frequency grids, we define the weight as $$\begin{aligned} \delta\omega_{j}\propto \left\{ \begin{array}{cl} 0 & (j=1, N_{w}) \\ (\nu_{j+1}\!-\!\nu_{j-1})p_{j} & (j\neq 1, N_{w}) \\ \end{array} \right. .\end{aligned}$$ The factor $p_{j}$ is the weight for the variance function introduced for generality, and we set $p_{j}=1$ in Secs. \[sec:theory\] and \[sec:appl-Li\]. When a negative plasmon coupling appears, we fix the corresponding coupling to zero, recalculate Eq. (\[eq:fit-coeff\]), and repeat this procedure until all the coupling coefficients becomes nonnegative so that the positive definiteness of the loss function is guaranteed. ![(Color online) Screened Coulomb interaction $W_{n{\bf k}n'{\bf k}'}$ and the corresponding approximate function $\tilde{W}_{n{\bf k}n'{\bf k}'}$ for fcc lithium under 14GPa calculated along the real frequency axis \[(a), (c)\], and the imaginary frequency axis \[(b), (d)\]. The band indices $n$ and $n'$ specify the partially occupied band. ${\bf k}$ and ${\bf k}'$ are $(2\pi/a)(1/7,1/7,1/7)$ and $(0,0,0)$ for (a)–(b), whereas $(2\pi/a)(2/7,2/7,6/7)$ and $(0,0,0)$ for (c)–(d).[]{data-label="fig:fit"}](Li_fcc_14GPa_Wnknk_k7_fit_w_ed_130926_2.jpg) For the determination of plasmon frequencies, the calculated spectrum of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\eta)$ is examined for each {$n,{\bf k},n',{\bf k}'$}. We have implemented a simple algorithm as follows: First, the peaks are specified as the point where the gradient of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$ turns from negative to positive; next, the specified peaks are sampled in order by their weighted values $p_{j}{\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$. By increasing $N_{\rm p}$, we can expect all the relevant plasmon modes are properly considered. We show in Fig. \[fig:fit\] the results of the fitting for fcc Li under 14GPa as typical cases where the fitting is straightforward \[panel (a) and (b)\] and difficult \[(c) and (d)\]. The peaks used for the fitting are indicated by arrows. For the former, an accurate fitting function was obtained with $N_{\rm p}$$=$$2$, where the derived fitting function and its analytic continuation $\tilde{W}_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ indicated by thick blue lines reproduce the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ quite well, respectively. For the latter, on the other hand, good agreement of $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ was not achieved with $N_{\rm p}$$\leq$7, where $a_{i;n{\bf k}n'{\bf k}'}$ for the peaks indicated by the smaller arrows were zero. This was because one of the relevant plasmon modes indicated by the larger arrows was the eighth largest with respect to the peak height. The convergence of $T_{\rm c}$ with respect to $N_{\rm p}$ can be slow due to such a feature, though it becomes serious only for {$n, {\bf k}, n', {\bf k}'$} where the dynamical structure is blurred by strong plasmon damping \[see the vertical axes in panels (a) and (c)\]. We here also note possible systematic errors in the present algorithm. First, multiple plasmon peaks in $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ may mutually overlap due to their peak broadening. Then, some plasmon modes are hidden by large broad peaks and cannot be specified even if we increase $N_{\rm p}$. We have assumed that these hidden modes are negligible because of their small spectral weight and strong damping. Next, the variance does not exactly converge to zero since the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ shows a weak cusplike structure at ${\rm i}\omega=0$ \[see panel (d) in Fig. \[fig:fit\]\]. This structure probably originates from the finite lifetime of the plasmon modes. Its effect is not included by the plasmon-pole approximation, and will be examined in future studies. \[tab:Tc\] [lccccc]{} & Al &\ & &14GPa & 20GPa & 25GPa& 30GPa\ $\lambda$ &0.417 &0.522 &0.623 &0.722 &0.812\ $\lambda^{a}$ & & 0.49 &0.66 & &0.83\ $\omega_{\rm ln}$ \[K\] & 314 &317&316&308&304\ $r_{s}$ &2.03 &2.71 &2.64 &2.59 &2.55\ $\Omega_{\rm p}$\[eV\] &16.2& 8.23& 8.44 &8.51 &8.58\ $T_{\rm c}^{\rm ph}$ \[K\] &5.9 &10.0 &15.2 &19.0 &23.3\ $T_{\rm c}^{\rm stat}$ \[K\] &0.8 &0.7 &1.8 & 3.2 &5.0\ $T_{\rm c}^{N_{\rm p}=1}$ \[K\] &1.4 &2.2 &4.1 &6.5 &9.1\ $T_{\rm c}^{N_{\rm p}=2}$ \[K\] &1.4 &2.2 &4.4 &6.8 &9.1\ $T_{\rm c}^{\rm expt.}$ \[K\] & 1.20$^{b}$ & $<$4 &\ \ \ \ ![(Color online) Our calculated $T_{\rm c}$ (solid squares and circles) for aluminum and fcc lithium under high pressures compared with the experimentally observed values. The open symbols represent the experiments: Ref.  (open inverted triangle), Ref.  (open squares), Ref.  (open circles), Ref.  (open regular triangles), and Ref.  (open diamonds). []{data-label="fig:Tc-expt"}](Al_Li_fcc_pressure_Tc_compare_expts_ed_130930.jpg) Application to lithium under pressures {#sec:appl-Li} ====================================== The above formalism, which is based on the plasmon-pole approximation, is expected to be valid for the nearly uniform electron gas. We here present the recent application to an elemental-metal superconductor Li. Lithium has been known to exhibit superconductivity with $T_{\rm c}$$\gtrsim$10 K under high pressure.[@Shimizu2002; @Struzhkin2002; @Deemyad2003; @Lin-Dunn] Early *ab initio* calculations[@Christensen-Novikov; @Tse; @Kusakabe2005; @Kasinathan; @Jishi] including that based on the SCDFT[@Profeta-pressure] reproduced the experimentally observed pressure dependence of $T_{\rm c}$ quantitatively. However, a later sophisticated calculation[@Bazhirov-pressure] using the Wannier interpolation technique[@Giustino-Wannier-elph] has shown that the numerically converged electron-phonon coupling coefficient is far smaller than the previously reported values. On the other hand, the plasmon effect is expected to be substantial because the density of conducting electrons $n$, which determines a typical plasmon frequency by $\propto\sqrt{n}$, is relatively small in Li due to the large radius of the ion and the small number of valence electrons. Therefore, it is interesting to see if the newly included plasmon contribution fills the gap between the theory and experiment. It is also important to examine whether the present *ab initio* method works successfully for conventional superconductors whose $T_{\rm c}$s have already been well reproduced by the conventional SCDFT. For that reason, we also applied the present method to aluminum. ![image](Al_fccLi_14GPa_Kernel_ph_el_K1_ed_130926_2.jpg) Calculation with small $N_{p}$ {#subsec:small-Np} ------------------------------ In Ref. , we performed calculations for fcc Li under pressures of 14, 20, 25, and 30GPa. All our calculations were carried out within the local-density approximation [@Ceperley-Alder; @PZ81] using [*ab initio*]{} plane-wave pseudopotential calculation codes [Quantum Espresso]{} [@Espresso; @Troullier-Martins] (see Ref.  for further details). The phonon contributions to the SCDFT exchange-correlation kernels ($\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$) were calculated using the energy-averaged approximation [@GrossII], whereas the electron contributions ($\mathcal{K}^{\rm el,stat}$ and $\Delta\mathcal{K}^{\rm el}$) were calculated by Eq. (13) in Ref.  and Eq. (\[eq:Delta-kernel\]) to evaluate the plasmon effect. The SCDFT gap equation was solved with a random sampling scheme given in Ref. , with which the sampling error in the calculated $T_{\rm c}$ was not more than a few percent. In addition to the typical plasmon, an extra plasmon due to a band-structure effect has been discussed for Li[@Karlsson-Aryasetiawan; @Silkin2007] and Al[@Hoo-Hopfield; @Sturm-Oliveira1989]. We therefore carried out the calculation for $N_{\rm p}$$=$$1$ and $2$. In Table \[tab:Tc\], we summarize our calculated $T_{\rm c}$ values with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$ ($T_{\rm c}^{\rm ph}$), $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$ ($T_{\rm c}^{\rm stat}$), and $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ ($T_{\rm c}^{N_{\rm p}=1}$ and $T_{\rm c}^{N_{\rm p}=2}$). The estimated electron-phonon coupling coefficient $\lambda$, the logarithmic average of phonon frequencies $\omega_{\rm ln}$, the density parameter $r_{s}$, and typical plasma frequency $\Omega_{\rm p}$ are also given. Instead of using the Wannier-interpolation technique, we carried out the Fermi surface integration for the input Eliashberg functions[@Migdal-Eliashberg] with broad smearing functions, [@Akashi-plasmon] and we obtained $\lambda$ consistent with the latest calculation [@Bazhirov-pressure], which is smaller than the earlier estimates [@Tse; @Kusakabe2005; @Profeta-pressure; @Kasinathan; @Christensen-Novikov; @Jishi]. The material and pressure dependence of the theoretical $T_{\rm c}$ follows that of $\lambda$. With $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$, $T_{\rm c}$ is estimated to be of order of 10K. While it is significantly suppressed by including $\mathcal{K}^{\rm el,stat}$, it is again increased by introducing $\Delta\mathcal{K}^{\rm el}$. We here do not see significant $N_{\rm p}$ dependence, which is further examined in Sec. \[subsec:large-Np\]. The calculated values of $T_{\rm c}$ are compared with the experimental values in Fig. \[fig:Tc-expt\]. With the static approximation (red square), the general trend of the experimentally observed $T_{\rm c}$ is well reproduced: Aluminum exhibits the lowest $T_{\rm c}$, and $T_{\rm c}$ in Li increases as the pressure becomes higher. However, the calculated $T_{\rm c}$ for Li is significantly lower than the experimental one, which demonstrates that the conventional phonon theory is quantitatively insufficient to understand the origin of the high $T_{\rm c}$ in Li under high pressures. From the previous *ab initio* calculations, this insufficiency has not been well recognized because either too strong electron-phonon coupling or too weak electron-electron Coulomb interaction was used. With the plasmon contribution (blue circle), the resulting $T_{\rm c}$ systematically increases compared with the static-level one and it becomes quantitatively consistent with the experiment. For Al, in contrast, the accuracy is acceptable with both $T^{\rm stat}_{\rm c}$ and $T^{N_{\rm p}=2}_{\rm c}$, where the increase of $T_{\rm c}$ by $\Delta\mathcal{K}^{\rm el}$ is relatively small. These results indicate the followings: First, the plasmon contribution is essential for the high $T_{\rm c}$ in fcc Li under pressure, and second, our scheme gives accurate estimates of $T_{\rm c}$ regardless of whether their dynamical effects are strong or weak. ![(Color online) (a) Decomposition of the nondiagonal exchange-correlation kernel $\mathcal{K}_{n{\bf k}n'{\bf k}'}$ at $T$$=$$0.01$K calculated for fcc lithium under pressure of 14GPa, averaged by equal-energy surfaces for $n'{\bf k}'$. (b) The corresponding gap function calculated with (darker) and without (lighter) $\Delta\mathcal{K}^{\rm el}$.[]{data-label="fig:kernel-gap"}](Li_fcc_14GPa_Kernel_ph_el_K1_gap_ed_130926_2.jpg) We discuss the origin of the enhancement of $T_{\rm c}$ by considering the dynamical effect in terms of partially energy-averaged nondiagonal kernels $\mathcal{K}_{n{\bf k}}(\xi)$$\equiv$$\frac{1}{N(\xi)}\sum_{n'{\bf k}'}\mathcal{K}_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n'{\bf k}'})$. With $n{\bf k}$ chosen as a certain point near the Fermi energy, we plotted the averaged kernel for fcc Li under pressure of 14GPa and Al with $N_{\rm p}$$=$$2$ in Fig. \[fig:kernel\]. The total kernel is decomposed into $\mathcal{K}^{\rm ph}$ (solid red line), $\mathcal{K}^{\rm el,stat}$ (dotted green line), and $\Delta\mathcal{K}^{\rm el}$ (dashed blue line). Generally, the total kernel becomes slightly negative within the energy scale of the phonons due to $\mathcal{K}^{\rm ph}$, whereas it becomes positive out of this energy scale mainly because of $\mathcal{K}^{\rm el,stat}$. The $\Delta\mathcal{K}^{\rm el}$ value is positive definite, but nearly zero for a low energy scale. As discussed in Sec. \[sec:theory\], the high-energy enhancement of repulsion increases $T_{\rm c}$ through the retardation effect. Remarkably, $\Delta\mathcal{K}^{\rm el}$ sets in from an energy far smaller than the typical plasmon frequency (see Table \[tab:Tc\]), and its absolute value is of the same order of $\mathcal{K}^{\rm el,stat}$. These features can be also seen in the case of homogeneous electron gas studied by Takada [@Takada1978]. On the difference between Li and Al \[(a) and (b)\], we see that the contribution of $\Delta\mathcal{K}^{\rm el}$ in Al is noticeably smaller than that in Li. Also, the energy scale of the structure of $\Delta\mathcal{K}^{\rm el}$ \[inset of (b)\], which correlates with $\Omega_{\rm p}$ (see Table \[tab:Tc\]), is small (large) for Li (Al). These differences explain why the effect of $\Delta\mathcal{K}^{\rm el}$ is more significant in Li. The enhanced retardation effect by the plasmon is seen more clearly from the gap functions plotted together with the non-diagonal kernel in Fig. \[fig:kernel-gap\]. Indeed, we observe substantial enhancement of the negative gap value in the high-energy region, where the additional repulsion due to $\Delta\mathcal{K}^{\rm el}$ is strong. This clearly demonstrates that the plasmon mechanism indeed enhances $T_{\rm c}$, as is described in Sec. \[sec:theory\]. We did not find a nonzero solution for the gap equation Eq. (\[eq:gap-eq\]) with only the electron-electron contributions ($\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$) down to $T=0.01$ kelvin, but did with the electron-phonon and the static electron-electron contribution ($\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$). Hence, while the driving force of the superconducting transition in Li is the phonon effect, the plasmon effect is essential to realize high $T_{\rm c}$. Finally, we also examined the effect of energy dependence of electronic density of states (DOS) on $\mathcal{Z}^{\rm ph}$. Since the form for $\mathcal{Z}^{\rm ph}$ used above \[Eq. (24) in Ref. \] only treats the constant component of the density of states, we also employed a form generalized for the nonconstant density of states \[Eqs. (40) in Ref. \]. The calculated $T_{\rm c}$ changes by approximately 2% with the nonconstant component, indicating that the constant-DOS approximation for the phonon contributions is valid for the present systems. ![(Color online) $N_{\rm p}$ dependence of the calculated gap function $\Delta_{n{\bf k}}$ at the Fermi level at $T$=0.01 K for (a) fcc lithium under pressure of 25GPa and (b) aluminum.[]{data-label="fig:gap-conv"}](Al_Li_fcc_25GPa_gap_conv_wrt_peak_nointerpol_ed_130928.jpg){width="7cm"} \[tab:Tc-2\] --------------------------------- --------- --------- --------- --------- ---------- Al 14GPa 20GPa 25GPa 30GPa $T_{\rm c}^{N_{\rm p}=1}$ \[K\] 1.4,1.5 2.2,2.8 4.1,5.2 6.5,7.4 9.1,11.1 $T_{\rm c}^{N_{\rm p}=2}$ \[K\] 1.4,1.6 2.2,3.1 4.4,5.5 6.8,8.0 9.1,10.7 $T_{\rm c}^{N_{\rm p}=5}$ \[K\] 1.6 3.8 6.5 9.2 12.0 $T_{\rm c}^{N_{\rm p}=8}$ \[K\] 1.6 3.8 6.5 9.2 12.0 --------------------------------- --------- --------- --------- --------- ---------- : Calculated $T_{\rm c}$ with different $N_{\rm p}$ using the procedure described in the text. For $N_{\rm p}$$=$1 and 2, the calculated values in Table \[tab:Tc\] are given together for comparison (left values). $N_{\rm p}$ dependence of $T_{\rm c}$ {#subsec:large-Np} ------------------------------------- Here we investigate the convergence of $T_{\rm c}$ with respect to the number of plasmon peaks $N_{\rm p}$. To address this problem, on top of the procedure described in Secs. \[sec:theory\] and \[subsec:small-Np\], we employed a slightly different algorithm. The difference is as follows. First, in the previous procedure, the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ and coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ for a set of sampling points were calculated from linear interpolation using the *ab initio* data on the equal grid, where the interpolation was independently carried out for each $i$-th largest branch. Since such an algorithm becomes unstable for damped peaks, we here did not carry out that, but rather determined $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$ simply by the *ab initio* values on the neighboring grid point. Second, the weight for the variance and the ordering of the peak $p_{j}$ (see Sec. \[subsec:plasmon-pole\]) was set to unity in the previous procedure, but we here adopted $p_{j}= \nu_{j}^{-(1/3)}$: In an analytic $T_{\rm c}$ formula for three-dimensional electron gas derived by Takada \[Eq. (2.28) in Ref. \], the coefficient $\langle F \rangle$ in the exponent depends on the typical plasmon frequency by $\Omega_{\rm p}^{-(1/3)}$, so that we determined $p_{j}$ accordingly. We have indeed found that this setting of $p_{j}$ accelerates the convergence of the calculated gap function with respect to $N_{\rm p}$, as demonstrated by Fig. \[fig:gap-conv\].[@comment-accelerate] Carrying out the above procedure,[@comment-recalc] we calculated $T_{\rm c}$ for Al and Li under the pressures. The calculated result for $N_{\rm p}$$=$1, 2, 5 and 8 is summarized in Table \[tab:Tc-2\] together with that of Sec. \[subsec:small-Np\]. For $N_{\rm p}$$=$1 and 2, the previous and present procedures give slightly different values of $T_{\rm c}$, which originates mainly from the difference in the interpolation of $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$. Within the present results, the caluculated $T_{\rm c}$ for Al shows little $N_{\rm p}$ dependence, whereas $N_{\rm p}$ has to be larger than 5 for Li to achieve the convergence within 0.1K. This indicates that the damped dynamical structure of the Coulomb interaction ignored with $N_{\rm p}$$=$1 and 2 also has a nonnegligible effect. We note that the general numerical trend observed in the results in Sec. \[subsec:small-Np\] is also valid for the calculated values with $N_{\rm p}\geq 5$. Summary and Conclusion {#sec:summary} ====================== We reviewed the recent progress by the authors in the SCDFT to address non-phonon superconducting mechanisms.[@Akashi-plasmon] An exchange-correlation kernel entering the SCDFT gap equation has been formulated within the dynamical RPA so that the plasmons in solids are considered. Through the retardation effect, plasmons can induce superconductivity, which has been studied for more than 35 years as the plasmon-induced pairing mechanism. A practical method to calculate $T_{\rm c}$ considering the plasmon effect have been implemented and applied to fcc Li. We have shown that the plasmon effect considerably raises $T_{\rm c}$ by cooperating with the conventional phonon-mediated pairing interaction, which is essential to understand the high $T_{\rm c}$ in Li under high pressures. The recent application suggests a general possibility that plasmons have a substantial effect on $T_{\rm c}$, even in cases where it does not alone induce superconducting transition. It is then interesting to apply the present formalism to “other high-temperature superconductors"[@Pickett-review-other] such as layered nitrides, fullerides, and the bismuth perovskite. Effects of the electron-electron and electron-phonon interactions in these systems have recently been examined from various viewpoints, particularly with *ab initio* calculations.[@Meregalli-Savrasov-BKBO; @Heid-Bohnen2005; @Yin-Kotliar-PRX; @Antropov-Gunnarsson-C60; @Janssen-Cohen-C60; @Akashi-MNCl; @Akashi-fullerene; @Nomura-C60-cRPA] Since they have a nodeless superconducting gap, plasmons may play a crucial role to realize their high $T_{\rm c}$.[@Bill2002-2003] More generally, there can be other situations: (i) the phonon effect does not dominate over the static Coulomb repulsion, but the plasmon effect does (i.e., a superconducting solution is not found with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$, but is found with $\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$), and (ii) either of the two effects does not independently, but their cooperation does (i.e., a superconducting solution is found with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$). Searching for superconducting systems of such kinds is another interesting future subject, for which our scheme provides a powerful tool based on the density functional theory. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank Kazuma Nakamura and Yoshiro Nohara for providing subroutines for calculating the RPA dielectric functions. This work was supported by Funding Program for World-Leading Innovative R & D on Science and Technology (FIRST Program) on “Quantum Science on Strong Correlation,” JST-PRESTO, Grants-in-Aid for Scientic Research (No. 23340095), and the Next Generation Super Computing Project and Nanoscience Program from MEXT, Japan. [999]{} J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **108**, 1175 (1957). J. G. Bednorz and K. A. Müller, Z. Phys. B **64**, 189 (1986). A. B. Migdal, Sov. Phys. JETP **7**, 996 (1958); G. M. Eliashberg, Sov. Phys. JETP **11**, 696 (1960); D. J. Scalapino, in [*Superconductivity*]{} edited by R. D. Parks, (Marcel Dekker, New York, 1969) VOLUME 1; J. R. Schrieffer,[*Theory of superconductivity; Revised Printing*]{}, (Westview Press, Colorado, 1971); P. B. Allen and B. Mitrović, in *Solid State Physics*, edited by H. Ehrenreich, F. Seitz, and D. Turnbull (Academic, New York, 1982), Vol. 37, p. 1. W. Kohn and L. J. Sham, Phys. Rev. **140**, A1133 (1965). S. Baroni, S. deGironcoli, A. Dal Corso, and P. Giannozzi, Rev. Mod. Phys. **73**, 515(2001). K. Kunc and R. M. Martin, in *Ab initio Calculation of Phonon Spectra*, edited by J. T. Devreese, V. E. van Doren, and P. E. van Camp (Plenum, New York, 1983), p. 65. D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. **45**, 566 (1980). J. P. Perdew and A. Zunger, Phys. Rev. B **23**, 5048 (1981). S. Y. Savrasov and D. Y. Savrasov, Phys. Rev. B **54**, 16487 (1996). H. J. Choi, D. Roundy, H. Sun, M. L. Cohen, and S. G. Louie, Nature (London) **418**, 758 (2002); Phys. Rev. B **66**, 020513(R) (2002). W. L. McMillan, Phys. Rev. [**167**]{}, 331 (1968). P. B. Allen and R. C. Dynes, Phys. Rev. B [**12**]{}, 905 (1975). P. Morel and P. W. Anderson, Phys. Rev. **125**, 1263 (1962); N. N. Bogoliubov, V. V. Tolmachev, and D. V. Shirkov, [*A New Method in the Theory of Superconductivity*]{} (1958) (translated from Russian: Consultants Bureau, Inc., New York, 1959). L. N. Oliveira, E. K. U. Gross, and W. Kohn, Phys. Rev. Lett. **60**, 2430 (1988). T. Kreibich and E. K. U. Gross, Phys. Rev. Lett. **86**, 2984 (2001). M. Lüders, M. A. L. Marques, N. N. Lathiotakis, A. Floris, G. Profeta, L. Fast, A. Continenza, S. Massidda, and E. K. U. Gross, Phys. Rev. B **72**, 024545 (2005). M. A. L. Marques, M. Lüders, N. N. Lathiotakis, G. Profeta, A. Floris, L. Fast, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **72**, 024546 (2005). A. Floris, G. Profeta, N. N. Lathiotakis, M. Lüders, M. A. L. Marques, C. Franchini, E. K. U. Gross, A. Continenza, and S. Massidda, Phys. Rev. Lett. **94**, 037004 (2005). A. Sanna, G. Profeta, A. Floris, A. Marini, E. K. U. Gross, and S. Massidda, Phys. Rev. B **75**, 020511(R) (2007). C. Bersier, A. Floris, A. Sanna, G. Profeta, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **79**, 104503 (2009). R. Akashi, K. Nakamura, R. Arita, and M. Imada, Phys. Rev. B **86**, 054513 (2012). R. Akashi and R. Arita, Phys. Rev. B **88**, 054510 (2013). D. J. Scalapino, Rev. Mod. Phys. **84**, 1383 (2012). W. Kohn and J. M. Luttinger, Phys. Rev. Lett. **15**, 524 (1965). V. Radhakrishnan, Phys. Lett. **16**, 247 (1965). H. Fröhlich, J. Phys. C: Solid State Phys. **1**, 544 (1968). Y. Takada, J. Phys. Soc. Jpn. **45**, 786 (1978). H. Rietschel and L. J. Sham, Phys. Rev. B **28**, 5100 (1983). W. A. Little, Phys. Rev. **134**, A1416 (1967). C. S. Koonce, M. L. Cohen, J. F. Schooley, W. R. Hosler, and E. R. Pfeiffer, Phys. Rev. **163**, 380 (1967). Y. Takada, J. Phys. Soc. Jpn. **49**, 1267 (1980). J. W. Garland, Jr., Phys. Rev. Lett. **11**, 111 (1963). D. Pines, Can. J. Phys. **34**, 1379 (1956). J. Ihm, M. L. Cohen, and S. F. Tuan, Phys. Rev. B **23**, 3258 (1981). V. L. Ginzburg, Sov. Phys. Usp. **13**, 335 (1970). D. Allender, J. Bray, and J. Bardeen, Phys. Rev. B **7**, 1020 (1973). V. Z. Kresin, Phys. Rev. B **35**, 8716 (1987). A. Bill, H. Morawitz, and V. Z. Kresin, Phys. Rev. B **66**, 100501(R) (2002); Phys. Rev. B **68**, 144519 (2003). S. Yamanaka, K. Hotehama, and H. Kawaji, Nature (London) **392**, 580 (1998). Y. Taguchi, A. Kitora, and Y. Iwasa, Phys. Rev. Lett. **97**, 107001 (2006). K. Taniguchi, A. Matsumoto, H. Shimotani, and H. Takagi, Appl. Phys. Lett. **101**, 042603 (2012). J. T. Ye, Y. J. Zhang, R. Akashi, M. S. Bahramy, R. Arita, and Y. Iwasa, Science **338**, 1193 (2012). R. Akashi and R. Arita, Phys. Rev. Lett. **111**, 057006 (2013). S. Massidda, F. Bernardini, C. Bersier, A. Continenza, P. Cudazzo, A. Floris, H. Glawe, M. Monni, S. Pittalis, G. Profeta, A. Sanna, S. Sharma, and E. K. U. Gross, Supercond. Sci. Technol. **22**, 034006 (2009). D. Pines, *Elementary Excitations in Solids* (Benjamin, New York, 1963). S. Kurth, M. Marques, M. Lüders, and E. K. U. Gross, Phys. Rev. Lett. **83**, 2628 (1999). M. S. Hybertsen and S. G. Louie, Phys. Rev. B **35**, 5585 (1987); M. S. Hybertsen and S. G. Louie, Phys. Rev. B **35**, 5602 (1987). J. Kondo, Prog. Theor. Phys. **29**, 1 (1963). Y. Takada, J. Phys. Soc. Jpn, **51**, 63 (1982). Y. Takada, J. Phys. Soc. Jpn, **78**, 013703 (2009). His theory also treats the coupling between phonons and plasmons, which is not considered in our method. K. Shimizu, H. Ishikawa, D. Takao, T. Yagi, and K. Amaya, Nature (London) **419**, 597 (2002). V.V. Struzhkin, M. I. Eremets, W. Gan, H. K. Mao, and R. J. Hemley, Science **298**, 1213 (2002). S. Deemyad and J. S. Schilling, Phys. Rev. Lett. **91**, 167001 (2003). T. H. Lin and K. J. Dunn, Phys. Rev. B **33**, 807 (1986). J. S. Tse, Y. Ma, and H. M. Tütüncü, J. Phys. Condens. Matter **17**, S911 (2005); Y. Yao, J. Tse, K. Tanaka, F. Marsiglio, Y. Ma, Phys. Rev. B **79**, 054524 (2009). S. U. Maheswari, H. Nagara, K. Kusakabe, and N. Suzuki, J. Phys. Soc. Jpn. **74**, 3227 (2005). D. Kasinathan, J. Kuneš, A. Lazicki, H. Rosner, C. S. Yoo, R. T. Scalettar, and W. E. Pickett, Phys. Rev. Lett. **96**, 047004 (2006); D. Kasinathan, K. Koepernik, J. Kuneš, H. Rosner, and W. E. Pickett, Physica C **460-462**, 133 (2007). N. E. Christensen and D. L. Novikov, Phys. Rev. B **73**, 224508 (2006). R. A. Jishi, M. Benkraouda, and J. Bragin, J. Low Temp. Phys. **147**, 549 (2007). G. Profeta, C. Franchini, N. N. Lathiotakis, A. Floris, A. Sanna, M. A. L. Marques, M. Lüders, S. Massidda, E. K. U. Gross, and A. Continenza, Phys. Rev. Lett. **96**, 047003 (2006). T. Bazhirov, J. Noffsinger, and M. L. Cohen, Phys. Rev. B **82**, 184509 (2010). F. Giustino, M. L. Cohen, and S. G. Louie, Phys. Rev. B **76**, 165108 (2007). P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. Fabris, G. Fratesi, S. de Gironcoli, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, J. Phys.: Condens. Matter **21**, 395502 (2009); http://www.quantum-espresso.org/. N. Troullier and J. L. Martins, Phys. Rev. B **43**, 1993 (1991). V. M. Silkin, A. Rodriguez-Prieto, A. Bergara, E. V. Chulkov, and P. M. Echenique, Phys. Rev. B **75**, 172102 (2007). K. Karlsson and F. Aryasetiawan, Phys. Rev. B **52**, 4823 (1995). E. N. Foo and J. J. Hopfield, Phys. Rev. **173**, 635 (1968). K. Sturm and L. E. Oliveira, Phys. Rev. B **40**, 3672 (1989). N. W. Ashcroft and N. D. Mermin, *Solid State Physics* (Thomson Learning, Singapore, 1976). R. Akashi and R. Arita, Phys. Rev. B **88**, 014514 (2013). The two settings of $p_{j}$ give slightly different values of the variance for $N_{\rm p}\rightarrow \infty$ due to the cusplike structure discussed in Sec. \[subsec:plasmon-pole\], though the difference is invisibly small here. Also, we used Kohn-Sham energy eigenvalues on an auxiliary $21^{3}$ k-point grid for the ${\bf k}$ integration in Eq. (\[eq:chi-def\]) when the both $n$ and $n$’ corresponds to partially-occupied bands, with which some plasmon peaks became sharp. W. E. Pickett, Physica B **296**, 112 (2001); Physica C **468**, 126 (2008). V. Meregalli and S. Y. Savrasov, Phys. Rev. B **57**, 14453 (1998). R. Heid and K. P. Bohnen, Phys. Rev. B **72**, 134527 (2005). Z. P. Yin, A. Kutepov, and G. Kotliar, Phys. Rev. X **3**, 021011 (2013). V. P. Antropov, O. Gunnarsson, and A. I. Liechtenstein, Phys. Rev. B **48**, 7651 (1993). J. Laflamme Janssen, M. Côté, S. G. Louie, and M. L. Cohen, Phys. Rev. B **81**, 073106 (2010). Y. Nomura, K. Nakamura, and R. Arita, Phys. Rev. B **85**, 155452 (2012). [^1]: E-mail address: akashi@solis.t.u-tokyo.ac.jp
{ "pile_set_name": "ArXiv" }
Cardiac hypertrophy secondary to ACTH treatment in children. The usefulness of ACTH in the treatment of childhood epilepsy is assessed by improvement in the EEG and in the clinical condition. However, pronounced side effects, even serious ones, must be encountered. The most common complications are Cushing syndrome, infections, and arterial hypertension. We report on seven patients with infantile myoclonic seizures, who exhibited myocardial hypertrophy with increased left ventricular function during ACTH treatment. These changes were detected and followed by serial echocardiographic investigations. Within a period of 5 months after the termination of ACTH therapy the abnormal echocardiographic findings disappeared. We believe that the cardiac hypertrophy is ACTH-induced. Based on the various biological effects of ACTH different explanations are proposed: oedema or deposition of glycogen in the myocardial tissue, hyperinsulinism, arterial hypertension and increased inotropic stimulus. Because of our observations, we suggest careful monitoring of children treated with ACTH by performing serial echocardiographic investigations.
{ "pile_set_name": "PubMed Abstracts" }
The invention is related to an electromagnetically actuatable fuel injection valve of the type used for internal combustion engines. A fuel injection valve, and a method for producing the fuel injection valve, are already known, but this valve is not suitable for use in low-pressure fuel injection systems, because, as a result of heating, when it is used in a motor vehicle there is an undesirable formation of vapor bubbles and insufficient preparation of the fuel to be injected. In this valve, the armature stroke is adjusted by the interposition of spacer discs of various thicknesses. This operating procedure, first, makes it difficult to automate manufacture; also, it is expensive and causes excessively large deviations in the quantities of fuel ejected at the various fuel injection valves.
{ "pile_set_name": "USPTO Backgrounds" }
Illmaculate - Do Not Disturb In The Matrix, Lawrence Fishburne offers Keanu Reeves a choice between the blue pill, which will allow him to return to a life of comfortable illusions, and the red pill, which will set him on the path toward learning the truth. Illmaculate is clearly a “red pill” kind of guy. On new single Do Not Disturb, making its world premiere in the Booth, the Portland rhymesayer slaughters a few of the sacred cows in which much of the American public uncritically places its faith—from electoral democracy, to psychiatry, to the rosy view of history taught in U.S. schools. Worst Nightmare collaborator Chase Moore handles production, backing the artist’s hard-hitting bars with spacey synths and slow-rolling percussion. For Do Not Disturb and much more, cop Illmaculate’s Clay Pigeons LP when it hits record stores and online retailers Tuesday, March 11.
{ "pile_set_name": "Pile-CC" }
/* * Copyright 1995-2018 The OpenSSL Project Authors. All Rights Reserved. * * Licensed under the OpenSSL license (the "License"). You may not use * this file except in compliance with the License. You can obtain a copy * in the file LICENSE in the source distribution or at * https://www.openssl.org/source/license.html */ #include <stdio.h> #include <errno.h> #include "bio_lcl.h" #include "internal/cryptlib.h" /* * BIO_put and BIO_get both add to the digest, BIO_gets returns the digest */ static int nullf_write(BIO *h, const char *buf, int num); static int nullf_read(BIO *h, char *buf, int size); static int nullf_puts(BIO *h, const char *str); static int nullf_gets(BIO *h, char *str, int size); static long nullf_ctrl(BIO *h, int cmd, long arg1, void *arg2); static long nullf_callback_ctrl(BIO *h, int cmd, BIO_info_cb *fp); static const BIO_METHOD methods_nullf = { BIO_TYPE_NULL_FILTER, "NULL filter", /* TODO: Convert to new style write function */ bwrite_conv, nullf_write, /* TODO: Convert to new style read function */ bread_conv, nullf_read, nullf_puts, nullf_gets, nullf_ctrl, NULL, NULL, nullf_callback_ctrl, }; const BIO_METHOD *BIO_f_null(void) { return &methods_nullf; } static int nullf_read(BIO *b, char *out, int outl) { int ret = 0; if (out == NULL) return 0; if (b->next_bio == NULL) return 0; ret = BIO_read(b->next_bio, out, outl); BIO_clear_retry_flags(b); BIO_copy_next_retry(b); return ret; } static int nullf_write(BIO *b, const char *in, int inl) { int ret = 0; if ((in == NULL) || (inl <= 0)) return 0; if (b->next_bio == NULL) return 0; ret = BIO_write(b->next_bio, in, inl); BIO_clear_retry_flags(b); BIO_copy_next_retry(b); return ret; } static long nullf_ctrl(BIO *b, int cmd, long num, void *ptr) { long ret; if (b->next_bio == NULL) return 0; switch (cmd) { case BIO_C_DO_STATE_MACHINE: BIO_clear_retry_flags(b); ret = BIO_ctrl(b->next_bio, cmd, num, ptr); BIO_copy_next_retry(b); break; case BIO_CTRL_DUP: ret = 0L; break; default: ret = BIO_ctrl(b->next_bio, cmd, num, ptr); } return ret; } static long nullf_callback_ctrl(BIO *b, int cmd, BIO_info_cb *fp) { long ret = 1; if (b->next_bio == NULL) return 0; switch (cmd) { default: ret = BIO_callback_ctrl(b->next_bio, cmd, fp); break; } return ret; } static int nullf_gets(BIO *bp, char *buf, int size) { if (bp->next_bio == NULL) return 0; return BIO_gets(bp->next_bio, buf, size); } static int nullf_puts(BIO *bp, const char *str) { if (bp->next_bio == NULL) return 0; return BIO_puts(bp->next_bio, str); }
{ "pile_set_name": "Github" }
Q: What's the semantically accurate position for the ampersand in C++ references It's pretty common knowledge that the semantically accurate way to declare pointers is int *x; instead of int* x; This is because C sees *x as an int, not x as an int pointer. This can be easily demonstrated by int* a, b; where a is an int pointer, while b is an int. There are at least 5 duplicate questions on stackoverflow.com that discuss this issue for pointers. But what about references? A: Bjarne Stroustrup says: A typical C programmer writes int *p; and explains it *p is what is the int emphasizing syntax, and may point to the C (and C++) declaration grammar to argue for the correctness of the style. Indeed, the * binds to the name p in the grammar. A typical C++ programmer writes int* p; and explains it p is a pointer to an int emphasizing type. Indeed the type of p is int*. I clearly prefer that emphasis and see it as important for using the more advanced parts of C++ well. When declaring a pointer variable or argument, you may place the asterisk (or ampersand) adjacent to either the type or to the variable name. The most important thing is to do this consistently within a single file. // These are fine, space preceding. char *c; const int &i; // These are fine, space following. char* c; const int& i; A: While researching for this question, I already found the answer: The & needs to be written just like the *. The demonstration code is similar to the pointer demonstration code: int main() { int a = 0; int b = 1; int& ar = a, br = b; br = 2; return b; } This returns 1, which means that ar is an int reference, while br is just an integer. A: Thanks to "template typedefs", you can declare multiple references in an (arguably) nicer way: template<typename T> using ref = T&; int a, b; ref<int> ar = a, br = b;
{ "pile_set_name": "StackExchange" }
Q: Tomcat & Eclipse integration I'm developing on a Ubuntu 8.04 machine using Eclipse Ganymede. I installed Tomcat 5.5 using sudo apt-get install tomcat5.5 tomcat5.5-admin and using an Ant script I deploy my WAR file by copying it to $CATALINA_HOME/webapps. I then created an Eclipse project and I have it output compiled source in a similar but separate directory structure under $PROJECT_ROOT/target/. I still deploy the WAR file by right clicking on the build.xml and choosing my deploy-war task. As Tomcat is running as a deamon, automatically started up on booting, I'm not instructing it when to start or exit. My problems with this setup are: Using this approach I do not get any output to the Eclipse console, as Tomcat is running under the tomcat55 user and I have a different login and no access to Stdout of tomcat55. The logging which occurs is also directed to Stdout at the moment, which I find pretty nice during development. But it's not nice when I can't see it. :-) I don't have any servers under the Server tab and no Run configurations. This makes it impossible for me to use the Debug mode of Eclipse, which otherwise is quite convenient. What do you think I should do to integrate them and in turn make my development environment much better? A: I'd say forget the pre-packaged Tomcat. Grab the apache-tomcat-x.y.z.zip from the site, unzip it somewhere in your $HOME and add a Server to your eclipse workspace, pointing to your local installation of tomcat. Of course you need the j2ee/wtp Eclipse bundle. Works fine on Windows, can't see a reason for it not working on Linux. Edit: You may have to fiddle with server ports if you have two tomcat installs. A: Add Tomcat to the list of Eclipse servers and run your web-app on the server. If you need more details click here.
{ "pile_set_name": "StackExchange" }
Google is completely redesigning AdWords - uptown http://searchengineland.com/adwords-redesign-first-look-246074 ====== eggy In Dart and Angular 2. I have to take a look at Dart again. I thought it was going to be left to wither and die, but with Flutter and now this, I have to go back and take another look. The tooling was fun, and they are developing or have developed a 'strong mode' for stronger typing. ------ rylest14 Love the new interface - going to make Adwords much more user friendly!
{ "pile_set_name": "HackerNews" }
Meaning name Kaisa Kai - Compare with masculine forms of Kai. Hawaiian unisex name meaning "sea."Kaikala - Hawaiian name meaning "the sea and the sun."Kaila - Altered form of English Kayley, meaning "slender."Kailash - Hindi unisex name derived from the name of a sacred mountain in the Himalayas, from the word kailasa, meaning "crystal." The Tibetan name for the mountain is Gang Rinpoche, meaning "precious jewel of snows."Kailee - Variant spelling of English Kayley, meaning "slender."Kailey - Variant spelling of English Kayley, meaning "slender."Kailyn - Variant spelling of English Kaylyn, meaning "girl."Kaimana - Hawaiian unisex name meaning "diamond" or "sea filled with Mana."Kaiolohia - Hawaiian name meaning "calm sea."Kaitlin - Anglicized form of Irish Gaelic Caitlín, meaning "pure."
{ "pile_set_name": "Pile-CC" }
When Landauer argued in 1961 that any physical realisation of erasure of information has a fundamental thermodynamic work cost he irrevocably linked thermodynamics and information theory[@b1][@b2][@b3][@b4][@b5][@b6][@b7][@b8][@b9]. A practical consequence of this insight is that all computers must dissipate a minimal amount of heat in each irreversible computing step, a threshold that is becoming a concern with future computer chips entering atomic scales. The treatment of general *quantum* information processing tasks within the wider framework of quantum thermodynamics has only recently begun[@b13]. Quantum mechanics differs from classical mechanics in at least three central aspects: the special nature of measurement, the possibility of a quantum system to be in a superposition and the existence of quantum correlations. The thermodynamic energy needed to perform a (selective) measurement has been investigated[@b10] and the total work for a closed thermodynamic measurement cycle explored[@b11]. The catalytic role of quantum superposition states when used in thermal operations has been uncovered[@b12] and it has been shown that work can be drawn from quantum correlations[@b13][@b14] in a thermodynamic setting, see [Fig. 1](#f1){ref-type="fig"}. In particular, del Rio *et al.*[@b14] showed that contrary to Landauer's principle, it is possible to *extract* work while performing erasure of a system's state when the system is correlated to a memory. This can occur if and only if the initial correlations imply a negative conditional entropy, a uniquely quantum feature. The thermodynamic process does however now require operation on degrees of freedom external to the system, i.e. the memory's. Results ======= Projections and the optimal work value of removing coherences ------------------------------------------------------------- Our motivation is here to shed light on the implications of performing a measurement on a quantum state that has coherences. We will consider this task in the thermodynamic setting of Landauer's erasure, involving a heat bath at fixed temperature *T* and operation on *N* → ∞ uncorrelated and identically prepared copies of the system (i.i.d. limit). This is of interest in the context of the quantum Jarzynski equality, for example, and will also be central for experiments testing quantum thermodynamic predictions in the future. To tackle this question we define the information-theoretic "projection" for a given initial quantum state *ρ* and a complete set of mutually orthogonal projectors . Such state transformation can be seen as analogous to the state transfer of erasure, , to a blank state . Physically, this projection can be interpreted as the result of an unread, or unselective[@b15], measurement of an observable that has eigenvector projectors . In an unselective measurement the individual measurement outcomes are not recorded and only the statistics of outcomes is known. In the literature the implementation of unselective measurements is often not specified, although it is typically thought of as measuring individual outcomes, e.g. with a Stern-Gerlach experiment, see [Fig. 2a](#f2){ref-type="fig"}, followed by mixing. The crux is that the information-theoretic projection can be implemented in many physical ways. The associated thermodynamic heat and work will differ depending on *how* the projection was done and we will refer to the various realisations as "thermodynamic projection processes". One possibility is decohering[@b16] the state in the so-called pointer basis, , a thermodynamic process where an environment removes coherences in an uncontrolled manner resulting in no associated work. In general it is possible to implement the state transfer in a finely controlled fashion achieving optimal thermodynamic heat and work values. Of particular importance in thermodynamics is the projection of the system's initial state *ρ* onto the set of energy eigenstates of the system's Hamiltonian with *E*~*k*~ the energy eigenvalues. Here the state's off-diagonals with respect to the energy eigenbasis are removed - a state transformation that is frequently employed in quantum thermodynamic derivations and referred to as "dephasing" or "measuring the energy". Our key observation is that there exists a thermodynamic projection process realising this transformation and allowing to draw from the quantum system a non-trivial *optimal average work* of Here *T* is the temperature of the heat bath with which the system is allowed to interact, see illustration [Fig. 1](#f1){ref-type="fig"}, *k*~*B*~ is the Boltzmann constant and *S* is the von Neumann entropy. Crucially, this work is strictly positive for quantum states with coherences. Extending the key observation to general projections one finds that optimal thermodynamic projection processes can be implemented that allow to draw an average work of where an additional internal energy change term appears. Physical interpretation and assumptions made to derive the optimal work ----------------------------------------------------------------------- The optimal work values stated in Eqs. [(1](#eq12){ref-type="disp-formula"}) and ([2](#eq14){ref-type="disp-formula"}) are valid for processes applied to classical and quantum states alike. While for a classical ensemble the entropy change, , will be zero this is not so in the general quantum situation, where initial non-diagonal quantum states result in a strictly positive entropy change[@b17]. We note that while the optimal work values are in principle attainable, practical implementations may be suboptimal resulting in a reduced work gain or a higher work cost. The physical meaning of can be grasped by considering a lower bound[@b18] on it, , see Supplement. Here *d* is the dimension of the system and denotes the Hilbert-Schmidt norm. The first factor quantifies the distance of the initial state from the fully mixed state, while the second factor, , quantifies the angle between the diagonal basis of *ρ* and the projection basis . These terms correspond to incoherent and coherent mixing contributions. The entropy change is non-trivially bounded only if the initial state is not an incoherent mixture with respect to that basis. The entropy bound is the largest for pure initial states whose basis is mutually unbiased with respect to . In this case the optimal entropy change is . One may wonder where the work has gone to. There are two equivalent approaches to the accounting of work. In the present analysis the focus is on the work that the system exchanges, as done in statistical physics[@b5][@b19][@b20][@b21][@b22]. In this approach it is often not explicitly mentioned where the work goes to, but the only place work can go to are the externally controlled energy sources. Similarly, the heat, i.e. the energy change minus the work, is established implicitly. For example, in the experimental realisation of classical Landauer erasure with a colloidal silica bead trapped in an optical tweezer[@b21], the dissipated heat of erasure was calculated by knowing the applied tilting forces and integrating over the bead's dynamics. The second approach is to collect work in a separate work storage system[@b23], as illustrated by the weight in [Fig. 1](#f1){ref-type="fig"} and detailed in the Supplement. Both the implicit and the explicit treatment of work are equivalent in the sense that the results obtained in one approach can be translated into the other. The thermodynamic assumptions made to prove Eq. [(2)](#eq14){ref-type="disp-formula"} are congruent with current literature[@b9][@b23][@b24][@b25]; specifically they are: (T0) an isolated system is a system that only exchanges work and not heat; (T1) the validity of the *first law* relating the internal energy change, Δ*U*, of the system during a process to its average heat absorbed and work drawn, ; (T2) the validity of the *second law* relating the system's entropy change to its average absorbed heat, , when interacting with a bath at temperature *T*, with equality attainable by an optimal process; (T3) the thermodynamic entropy to be equal to the von Neumann entropy in equilibrium as well as out-of-equilibrium, . In addition we make the following standard quantum mechanics assumptions: (Q0) an isolated system evolves unitarily; (Q1) control of a quantum system includes its coherences. Details of the proof are in the Methods Summary. We note that in the single-shot setting whole families of second laws apply[@b7][@b8] that differ from (T2) stated above. However, in the limit of infinitely many independent and identically prepared copies of the system these collapse to the standard second law, (T2), on the basis of which Eq. [(2)](#eq14){ref-type="disp-formula"} is derived. From the information-theory point of view the projections considered here constitute just one example of the larger class of trace-preserving completely positive (TPCP) maps characterising quantum dynamics. Of course, all TPCP maps can be interpreted thermodynamically with the assumptions stated above, resulting in an optimal average work given by a free energy difference. Erasure is another such map whose study forged the link between information theory and thermodynamics. The benefit of discussing "projections" here lies in the insight that this focus provides: it uncovers that coherences offer the potential to draw work making it a genuine and testable quantum thermodynamic feature. This work is non-trivial even when the thermodynamic process is operated on the system alone, not involving any side-information[@b14] stored in other degrees of freedom. Qubit example for drawing optimal work -------------------------------------- To gain a detailed understanding of thermodynamic projection processes that give the optimal work stated in Eq. [(1)](#eq12){ref-type="disp-formula"} we now detail one such process for the example of a spin-1/2 particle (qubit), see illustration in [Fig. 2b,c](#f2){ref-type="fig"}. This process consists of a unitary evolution, a quasi-static evolution and a quench[@b25], and it is optimal for any finite dimensional quantum system (proof in the Methods Summary). An experimentalist, Emmy, prepares the spin in a state ( w.l.o.g.) exposed to an external magnetic field which she controls. The Hamiltonian associated with the system is where the energy difference between the aligned ground state, , and anti-aligned excited state, , is given by with the spin's magnetic moment. Importantly, in general the spin state's basis, , are superpositions with respect to the energy eigenbasis, and with . For the optimal implementation of the projection Emmy now proceeds with the following three steps. Firstly, she isolates the spin from the bath and modifies external magnetic fields to induce a unitary rotation, , of the spin into the energy basis. In nuclear magnetic resonance (NMR)[@b26] and pulsed electron spin resonance (ESR) experiments[@b27] such rotations are routinely realised by radio-frequency and microwave pulses respectively, as evidenced by Rabi oscillations. The power, duration and phase of such a pulse would be chosen to generate the spin-rotation along the green circle until the desired unitary *V* is achieved. In the same step Emmy adjusts the strength of the external B-field such that the spin state is Boltzmann-distributed at temperature *T* with respect to the energy gap of the Hamiltonian at the end of the step, *H*^(1)^. In NMR or ESR the B-field magnitude is tuned quickly on the *T*~1~ timescale to achieve the desired energy gap. In the second step, Emmy wants to implement a quasi-static evolution of the spin that is now thermal. She brings the spin in contact with the heat bath at temperature *T* and quasi-statically adjusts the magnitude of the external B-field allowing the spin state to thermalise at all times. The final B-field, , is chosen such that the final thermal state becomes *η*^*H*^. In ESR this step can be realised by changing the external B-field slowly on the *T*~1~ timescale so that the spin continuously equilibrates with its environment. Finally, Emmy isolates the spin from the environment and quickly changes the B-field to its original magnitude while the state remains *η*^*H*^. During Step 1 and 3 the system was isolated and the average work drawn is thus just the average energy change. During Step 2 the average work is the equilibrium free energy difference between the final and initial thermal states at temperature *T*, see Supplement for details. In NMR/ESR the work contributions drawn from the spin system are done on the external B-field and the microwave mode. This could be detected by measuring the stimulated emission of photons in the microwave mode or observing current changes induced by the spins dynamics[@b26][@b27]. The overall thermodynamic process has now brought the spin from a quantum state with coherences, *ρ*, into a state without coherences, *η*^*H*^, while keeping the average energy of the spin constant. The net work drawn during the three steps adds up to showing the attainability of the optimum stated in Eq. [(1)](#eq12){ref-type="disp-formula"} for the spin-1/2 example. We note that Eq. [(1)](#eq12){ref-type="disp-formula"} is also the maximal work that can be extracted from a *qubit* state *ρ* under *any* transformation of the system that conserves its average energy, , i.e. for qubits *η*^*H*^ is the optimal final state under this condition. We emphasise that this optimal implementation involves a finely tuned and controlled operation that relies on knowledge of the initial state *ρ*. This is akin to the situation considered in[@b14] where knowledge of the initial global state of system and memory is required for optimal erasure with side-information. It is important to distinguish this situation from that of Maxwell demon's who has access to knowledge of the individual micro-states that make up the ensemble state , and who uses it to beat the second law[@b28]. In the scenario considered here there is no knowledge of the individual micro-states and the process does not violate the second law, on the contrary, it is derived from it. Comparison with single-shot work -------------------------------- The preceding discussion concerned the *average* work that can be drawn when operating on an ensemble of *N* → ∞ independent spins. This scenario contrasts with the single shot situation considered in a number of recent publications[@b7][@b14][@b29][@b30]. In particular, two major frameworks[@b29][@b30] have recently been put forward to identify optimal *single-shot* work extraction and work cost of formation in the quantum setting. These frameworks rely on a resource theory approach[@b6] and make use of min- and max-relative entropies that originate from one-shot information theory. The optimal work extraction schemes of these frameworks require non-diagonal states to be decohered first to become diagonal in the energy basis. This decoherence step is assumed to not have an associated single-shot work. However, the present analysis of energy basis projections showed that thermodynamic projection processes can yield positive average work, see Eq. [(1)](#eq12){ref-type="disp-formula"}. Therefore one may expect a positive work for removing coherences from a state *ρ* in the single-shot setting, too. Since our focus is the *N* → ∞ limit we will not aim to construct the single-shot case. Nevertheless, to establish a notion of consistency between single-shot results[@b29][@b30] and the average analysis presented here we now separate the projection into a diagonal part that can be analysed in the single-shot framework and a non-diagonal part that can be analysed in the average framework. One possible decomposition of is the split in three steps each starting and ending with Hamiltonian *H*: . Here *ρ*~1~ is the rotated state defined above and is the thermal state for the Hamiltonian *H* at temperature *T*. We can now use a single-shot analysis[@b30] for Steps *b* and *c* that involve only states diagonal in the energy basis, giving a single-shot work contribution of , see Supplement. Here *D*~min~ and *D*~max~ are the min- and max-relative quantum entropies, respectively. Taking the limit of *N* → ∞ copies for Steps *b* and *c* and adding the average work contribution for the initial non-diagonal rotation *a*, , one indeed recovers the optimal average work as stated in Eq. [(1)](#eq12){ref-type="disp-formula"}. After making public our results very recently a paper appeared[@b31] that derives the work that can be extracted when removing coherences in a single-shot setting. These results are in agreement with Eq. [(1)](#eq12){ref-type="disp-formula"} and reinforce the above conclusion that coherences are a fundamental feature distinguishing quantum from classical thermodynamics. Comparison with quantum work fluctuation relations -------------------------------------------------- The key observation was that thermodynamic projection processes can have a non-trivial work and heat. Another instance where this has interesting repercussions is the quantum Jarzynski equality[@b4][@b5]. This is a generalisation of the prominent classical fluctuation relation valid for general non-equilibrium processes, which has been used to measure the equilibrium free energy surface inside bio-molecules by performing non-equilibrium pulling experiments[@b19]. The quantum version has recently been tested for the first time in a nuclear magnetic resonance experiment[@b26]. The quantum Jarzynski relation, , links the fluctuating work, *W*, drawn from a system in individual runs of the same non-equilibrium process, with the free energy difference, Δ*F*, of the thermal states of the final and initial Hamiltonian, see Supplement. In its derivation a system initially in a thermal state *ρ*~0~ with respect to Hamiltonian *H*^(0)^ at temperature *T* is first measured in the energy basis of *H*^(0)^. The Hamiltonian is then varied in time ending in *H*^(*τ*)^ generating a unitary evolution, *V*, of the system, see [Fig. 3a](#f3){ref-type="fig"}. A second measurement, in the energy basis of *H*^(*τ*)^, is then performed to establish the final fluctuating energy. For each run the difference of the two measured energies has been associated with the fluctuating work[@b5], Δ*E* = −*W*. The experiment is repeated, each time producing a fluctuating work value. On average the work extracted from the system during the quantum non-equilibrium process turns out to be where is the ensemble's state after the unitary evolution, and similarly the average exponentiated work is calculated. The above identification was made assuming that the system undergoes a unitary process with no heat dissipation. However, the need to acquire knowledge of the system's final energies requires the second measurement. The ensemble state is thus further altered from *ρ*~*τ*~ to *η*~*τ*~, the state *ρ*~*τ*~ with any coherences in the energy basis of *H*^(*τ*)^ removed. This step is not unitary - during the projection the system may absorb heat, , indicated in [Fig. 3b](#f3){ref-type="fig"}, whose value depends on *how* the process is conducted. Thus, while the energy difference for the projection is zero, , for states *ρ*~*τ*~ with coherences the entropy difference is not trivial, . This implies that in an experimental implementation of the Jarzynski relation the work done by the system on average can be more than previously thought, . We conclude that the suitability of identifying , and hence the validity of the quantum Jarzynski *work* relation, depends on the details of the physical process that implements the second measurement. This conclusion is not at odds with previous experiments[@b26] which showed nature's agreement with , involving the average of the exponentiated measured fluctuating energy. Work from coherences of correlated quantum systems -------------------------------------------------- It is insightful to extend the thermodynamic analysis of projections to correlated systems. An experimenter may have access not only to the system *S* but also the auxiliary systems *A* with which *S* is correlated[@b14]. She can then perform a global operation, , that implements a projection locally on the system *S*, i.e. , while leaving the reduced state of the auxiliary system unchanged, i.e. . By doing so the experimenter can optimally draw the overall work , where is the entropy change for the state of system + auxiliary and is still the energy change of the system *alone*. This quantity can be re-written as the sum of two terms: , the extractable work when operating on the system alone given in Eq. [(2)](#eq14){ref-type="disp-formula"}, and , a positive term quantifying the quantum correlations between *S* and *A*, see Supplement. The latter contribution was previously identified in an inspiring paper by Zurek[@b13]. It depends on the choice of projectors and is related to, but broader than, quantum discord[@b32] which is optimised over all possible projectors. This means that even states of system and auxiliary that can be considered classically correlated (i.e. no discord) provide an advantage for drawing work contrasting with the erasure process where this only occurs for highly entangled states[@b14]. The gap between these two sets of correlated states is an intriguing fact and calls for further exploration of the link between thermodynamics and information theory in the quantum regime. Discussion of implications ========================== To conclude, erasure is not the only irreversible information processing task -- in the quantum regime a second fundamental process exists that mirrors Landauer's erasure. In contrast to the minimum heat limit of erasure, thermodynamic projection processes have a maximum work limit. While the former is non-zero for the erasure of classical *and* quantum bits, optimal thermodynamic projection processes have a non-zero work *only* when applied to quantum states with coherences. The optimal average work stated in Eqs. [(1](#eq12){ref-type="disp-formula"}) and ([2](#eq14){ref-type="disp-formula"}) constitutes an experimentally accessible quantum thermodynamic prediction. Future experiments testing this optimal work may be pursued with current setups, for instance with NMR/ESR techniques[@b26][@b27] or single atoms[@b33], and promise to be accessible with other platforms entering the quantum regime, such as single electron boxes[@b22]. Experiments will be limited by practical constraints, such as achieving a quasistatic process and obtaining the maximum work for pure states which may require, for instance, very large B-fields. The derivation of the optimal work value is mathematically straightforward, just like that of Landauer's principle. The result's significance is that it opens new avenues of thought and provides key input for the construction of a future quantum thermodynamic framework. For example, the developed approach opens the door to investigate the connection between microscopic statistical physics and macroscopic thermodynamics in the quantum regime. While it is straightforward to identify the thermodynamic work of quantum processes involving macroscopic ensembles, what is needed is a microscopic concept of work that when averaged, gives the correct macroscopic work. The microscopic work concept should be valid for general (open) quantum processes and quantum states (including coherences), and only require access to properties of the system. While single-shot approaches have discarded coherences[@b29][@b30], fluctuating work approaches cannot be applied directly to a system undergoing open quantum evolution[@b20]. The observation is also important from the experimental perspective as testing quantum thermodynamic predictions will involve measurement -- a projection process. We have argued that measurements, such as those required in establishing the Jarzynski equality, are not necessarily thermodynamically neutral. Indeed, they can be implemented in different physical ways and in general play an active role in thermodynamics, contributing a non-zero average heat and work. This new perspective gives physical meaning to the change of entropy in the debated quantum measurement process - it provides a capacity to draw work. Specifically, work can be drawn when *coherences* of a state are removed during an unselective measurement. Finally, it is apparent that optimal thermodynamic projection processes require use of knowledge of the initial state *ρ*, i.e. its basis and eigenvalues. One may be inclined to exclude use of such knowledge, particularly when considering projections in the context of measurement which is often associated with the acquisition of knowledge. Such restriction would necessarily affect the set of assumptions (T0-T3, Q0-Q1) in the quantum regime. These could be changed, for example, by dropping the possibility of saturating the second law inequality (cf. T2) or choosing a new quantum non-equilibrium entropy that only considers the state's diagonal entries (cf. T3). The latter would mean a departure from standard quantum information theory where entropies are basis-independent. Thus whichever approach one takes - not making or making a restriction - quantum coherences will contribute a new dimension to thermodynamics. They either lead to non-classical work extraction or they alter the link between information theory and thermodynamics in the quantum regime. The line drawn here between the assumptions (T0-T3, Q0-Q1) and results (Eqs. [(1](#eq12){ref-type="disp-formula"}) and ([2](#eq14){ref-type="disp-formula"})) establishes a frame for this possibility to be investigated. Methods Summary =============== Further underlying research materials can be accessed in the [supplementary information](#S1){ref-type="supplementary-material"} that accompanies this article. Proof of Eq. [(2)](#eq14){ref-type="disp-formula"} -------------------------------------------------- Using the first law (T1) the average work drawn in a thermodynamic projection process is simply , where is the average energy change for that process. Relating the average heat absorbed by the system during the process to its entropy change one then obtains (T2). Here is the difference of von Neumann entropies of the system's state before and after the projection (T3). The average work drawn is thus , where the entropy change is non-negative and the energy change can be either positive or negative. The stated *optimal* work, , is achieved when the inequality is saturated by an optimal process (T2) the implementation of which may require knowledge of the initial state and control of coherences (Q1). In the special case of a projection onto the energy eigenbasis the internal energy change is zero, , and one obtains Eq. [(1)](#eq12){ref-type="disp-formula"}. Optimality of three-step process for finite-dimensional systems --------------------------------------------------------------- It is straightforward to generalise the proof of optimality from the two-dimensional spin-1/2 example to thermodynamic projection processes in dimension *d*. Again the projectors map onto the energy eigenspaces of the Hamiltonian, , where , , are the energy eigenvalues. A general initial state can be written as where are probabilities, , are rank-1 projectors on the corresponding eigenvectors , and . A unitary operation, *V*, is now chosen such that it brings the initial configuration (*ρ*, *H*) into the new diagonal and thermal configuration where and . The new energy eigenvalues, , are adjusted such that the probabilities *a*~*k*~ are thermally distributed with respect to *H*^(1)^ for the bath temperature *T*. Adjusting the Hamiltonian eigenvalues while letting the state thermalise at all times now results in a isothermal quasi-static operation from to . Here the new energy eigenvalues, , are chosen to be thermal (at *T*) for the state's probabilities which are given by . Finally, a quench brings the thermal configuration quickly into the non-equilibrium state . The average work for this overall process is where and because the first and third steps are unitary (Q0 + T0). The quasistatic step's work is[@b25][@b29] where is the thermal equilibrium free energy for Hamiltonian *H*^(1)^, and similarly, . Summing up and using , one obtains concluding the optimality proof of the process sequence. Additional Information ====================== **How to cite this article**: Kammerlander, P. and Anders, J. Coherence and measurement in quantum thermodynamics. *Sci. Rep.* **6**, 22174; doi: 10.1038/srep22174 (2016). Supplementary Material {#S1} ====================== ###### Supplementary Information We thank T. Deesuwan, M. Wolf and R. Renner, G. Morley, R. Uzdin and D. Reeb for insightful discussions and J. Gemmer, R. Renner, S. Horsley and T. Philbin for critical reading of the manuscript. P.K. acknowledges support from the Swiss National Science Foundation (through the National Centre of Competence in Research 'Quantum Science and Technology') and the European Research Council (grant 258932). J.A. is supported by the Royal Society and EPSRC (EP/M009165/1). J.A. thanks the Isaac Newton Institute in Cambridge where part of this work was conceived for the stimulating environment and kind hospitality. This work was supported by the European COST network MP1209. **Author Contributions** J.A. provided the main idea and developed the central argument. P.K. developed the single-shot analysis. Both authors wrote the manuscript and [supplementary information](#S1){ref-type="supplementary-material"}. ![Thermodynamic setting.\ A system, depicted as a spin, interacts with a heat bath at temperature *T*, with which it exchanges *heat*, and with controlled energy sources, illustrated as coil and weight, with which it exchanges *work*. Work drawn from the system can be collected in a work storage system (weight) for future use.](srep22174-f1){#f1} ![Two physical realisations of a projection process.\ (**a**) *N* identically prepared spin 1/2 particles in state pass a Stern-Gerlach magnet and a screen after which they emerge in either the spin-up or the spin-down beam. Recombining the two beams mixes the spins to the final state for *N* → ∞. Illustration of the spin example discussed in main text, showing the state evolution in (**b**) and the B-field evolution in (**c**). The poles in the Blochsphere (**b**) are the energy eigenstates and that are aligned and anti-aligned with an externally applied B-field (indicated in blue in (**c**)), which initially is (black point in (**c**)). In the first step the Blochvector (black arrow in (**b**)) of Emmy's initial state *ρ* is rotated on the green-dashed circle to (green arrow in (**b**)). The unitary rotation *V* required for this step can be realised by applying a microwave pulse creating an additional B-field (indicated in orange in (**c**)) in the direction orthogonal to the plane of the green circle. At the end of the first step the pulse is turned off and the external B-field is adjusted to (green point in (**c**)). The second step shortens to (red arrow in (**b**)), the Blochvector of *η* (superscripts *H* have been omitted). The external B-field (blue in (**c**)) decreases slowly to (red point at *t*~2~ in (**c**)). In the last step the B-field quickly returns to its initial value, (red point at *t*~3~ in (**c**)), while the state remains *η*. The angle between the Blochvectors of *ρ* and *η* is indicated by *θ*.](srep22174-f2){#f2} ![Dynamical steps in a quantum fluctuation experiment.\ (**a**) The quantum Jarzynski relation is described as characterising the non-equilibrium work of processes that start in a thermal state *ρ*~0~ and evolve unitarily (*V*), driven by a changing Hamiltonian, reaching the final state *ρ*~*τ*~ at time *τ*. This unitary process has no heat contribution. (**b**) Illustration of three steps that are assumed in mathematical derivations of the quantum Jarzynski relation[@b4][@b5]: initial energy measurement of *H*^(0)^ indicated by *M*~0~, unitary evolution, and final energy measurement of *H*^(*τ*)^ indicated by . The ensemble state evolves here from *ρ*~0~ to *ρ*~*τ*~ and then to *η*~*τ*~, the state *ρ*~*τ*~ with its coherences removed. The observed average energy difference encompasses both, the unitary process and the second projection process, and can in general contain a heat contribution , in contrast to (**a**).](srep22174-f3){#f3}
{ "pile_set_name": "PubMed Central" }
Each Sheridan user is assigned a network account and allocated 300 MB of disk space. Your network drive (sometimes known as the "G" drive) is automatically set-up for you. Although 300 MB may be insufficient to store all of your files, as these drives are backed up every night, it’s a good idea to store your most important files on your network drive ("G drive").
{ "pile_set_name": "Pile-CC" }
Ingegärd Töpel Ingegärd Margareta Töpel (13 May 1906 – 11 July 1988) was a Swedish diver. She competed in the 10 m platform event at the 1928 Summer Olympics, alongside her elder sister Hjördis. References Category:1906 births Category:1988 deaths Category:Olympic divers of Sweden Category:Divers at the 1928 Summer Olympics Category:Swedish female divers
{ "pile_set_name": "Wikipedia (en)" }
Turmeric Extract: Potential Use as a Prebiotic and Anti-Inflammatory Compound? Prebiotics are regarded as the non-digestible food constituents that are selectively consumed by health-promoting bacteria (probiotics). In fact, a number of active metabolites is released due to intensive interaction between prebiotics and probiotics in the gut which exert local and systemic beneficial effects including regulation of intestinal disorders and modulation of host immunity. Turmeric is one of the most important medicinal herbaceous that is derived from Curcuma longa rhizome. Curcumin is a well-recognized component of turmeric which contributes to the prevention of multiple inflammatory diseases. Despite curcumin as a well-known compound, few researches have focused on the turmeric extract (TE) and its potential as prebiotic and anti-inflammatory compound. The aim of this study was to evaluate the prebiotic potential and some functional-structural properties of TE. The Fourier-transform-infrared spectroscopy (FTIR) spectrum of TE showed identical peaks that belonged to β configuration in pyranose and glycosidic bonds. High performance liquid chromatography (HPLC) analysis revealed the presence of potent phenolic and flavonoid anti-oxidants and curcuminoids, and some functional monosaccharides. TE demonstrated excellent resistance to artificial human gastric and intestine juice compared to the standard prebiotic (inulin) (p ≤ 0.05). Interestingly, our time course experiment showed that TE not only is digested by probiotics including Lactobacillus rhamnosus GG (LGG) and Bifidobacterium animalis BB12, but also supports the growth of these bacteria even after 72 h (p ≤ 0.05). To our knowledge, this is the first report evaluating prebiotic potential of TE and exploring its suppressive effects on LPS induced IL-8 production in HT29-19A cell line.
{ "pile_set_name": "PubMed Abstracts" }
Food insecurity affects school children's academic performance, weight gain, and social skills. Food insecurity has been associated with diverse developmental consequences for U.S. children primarily from cross-sectional studies. We used longitudinal data to investigate how food insecurity over time related to changes in reading and mathematics test performance, weight and BMI, and social skills in children. Data were from the Early Childhood Longitudinal Study-Kindergarten Cohort, a prospective sample of approximately 21,000 nationally representative children entering kindergarten in 1998 and followed through 3rd grade. Food insecurity was measured by parent interview using a modification of the USDA module in which households were classified as food insecure if they reported > or =1 affirmative response in the past year. Households were grouped into 4 categories based on the temporal occurrence of food insecurity in kindergarten and 3rd grade. Children's academic performance, height, and weight were assessed directly. Children's social skills were reported by teachers. Analyses examined the effects of modified food insecurity on changes in child outcomes using lagged, dynamic, and difference (i.e., fixed-effects) models and controlling for child and household contextual variables. In lagged models, food insecurity was predictive of poor developmental trajectories in children before controlling for other variables. Food insecurity thus serves as an important marker for identifying children who fare worse in terms of subsequent development. In all models with controls, food insecurity was associated with outcomes, and associations differed by gender. This study provides the strongest empirical evidence to date that food insecurity is linked to specific developmental consequences for children, and that these consequences may be both nutritional and nonnutritional.
{ "pile_set_name": "PubMed Abstracts" }
Q: "Linear isomorphism" in definition of vector bundle I'm reading out of Broecker & Jaenich's differential topology text, and in the definition of vector bundle I'm having trouble understanding what they're talking about. This trouble is worsened by the fact that other sources define the structure similarly, and also don't address my problem. The definition is as follows: "A ($n$-dimensional real topological) vector bundle is a trouble $(E, \pi X)$, where $\pi : E \to X$ is a continuous surjective map, every $E_x = \pi^{-1}(x)$ has the structure of an $n$-dimensional real vector space such that: Axiom of local triviality. Every point of $X$ has a neighborhood $U$, for which there exists a homeomorphism $$f : \pi^{-1} (U) \to U \times \mathbb{R}^{n}$$ such that for every $x \in U$ $$f_{x} |E_{x} \to \{ x \} \times \mathbb{R}^{n}$$ is a vector space isomorphism." [Bold added for emphasis] My trouble here is that we don't seem to have imposed any sort of vector space structure on $E_{x}$. Though the authors don't address exactly what kind of structures we've imposed on $E$ and $X$, other sources suggest only a topological space is needed, though others go on to imagine that $X$ is a topological or smooth manifold. None of these make reference to the assumption that any of the sets in question should have any kind of vector space structure. So I have no clue how I'm supposed to parse out what it means to have an isomorphism of vector spaces without a vector space. Is it supposed to be with reference to a chart on $X$? Someone please explain this. Thanks. A: Part of the structure of a vector bundle $(E,\pi,X)$ is, as your definition says, a vector space structure on the set $E_x$ for each $x\in X$. That is, to have a vector bundle, you have to specify a vector space structure on each $E_x$. So the maps $f_x$ are supposed to be vector space isomorphisms with respect to this specified vector space structure on $E_x$ (and the obvious vector space structure on $\{x\}\times \mathbb{R}^n$). (For a topological vector bundle, as in the definition you quoted, you also need topologies on $E$ and $X$, in order to say that $\pi$ is continuous and $f$ is a homeomorphism. For smooth vector bundles, you require $E$ and $X$ to be smooth manifolds and require all the maps to also be smooth.)
{ "pile_set_name": "StackExchange" }
During the lifetime of a patient, it may be necessary to perform a joint replacement procedure on the patient as a result of, for example, disease or trauma. The joint replacement procedure may involve the use of a prosthesis that is implanted into one or more of the patient's bones. In the case of a knee replacement procedure, a tibial tray is implanted into the patient's tibia. A bearing is then secured to the tibial tray. The condyle surfaces of a replacement femoral component bear against the tibial bearing. One type of knee prosthesis is a fixed-bearing knee prosthesis. As its name suggests, the bearing of a fixed-bearing knee prosthesis does not move relative to the tibial tray. Fixed-bearing designs are commonly used when the condition of the patient's soft tissue (i.e., knee ligaments) does not allow for the use of a knee prosthesis having a mobile bearing. In contrast, in a mobile-bearing type of knee prosthesis, the bearing can move relative to the tibial tray. Mobile-bearing knee prostheses include so-called “rotating platform” knee prostheses, wherein the bearing can rotate about a longitudinal axis on the tibial tray. Tibial trays are commonly made of a biocompatible metal, such as a cobalt chrome alloy or a titanium alloy. For both fixed and mobile-bearing knee prostheses, the tibial trays may be designed to be cemented into place on the patient's tibia or alternatively may be designed for cementless fixation. Cemented fixation relies on mechanical bonds between the tibial tray and the cement as well as between the cement and the bone. Cementless implants generally have surface features that are conducive to bone ingrowth into the implant component and rely to a substantial part on this bony ingrowth for secondary fixation; primary fixation is achieved through the mechanical fit of the implant and the prepared bone. Tibial components of both fixed and mobile-bearing and cemented and cementless knee arthroplasty systems are commonly modular components, comprising a tibial tray and a polymeric bearing carried by the tibial tray. The tibial trays commonly include features extending distally, such as pegs or stems. These extensions penetrate below the surface of the tibial plateau and stabilize the tibial tray component against movement. In cementless tibial implants, the outer surfaces of these extensions are typically porous to allow for bone ingrowth. For example, in the Zimmer Trabecular Metal Monoblock tibial trays, pegs with flat distal surfaces and hexagonal axial surfaces are formed completely of a porous metal. In such trays, bone ingrowth is likely to occur along all surfaces of the pegs, including the distal surfaces. Femoral components of such knee prosthesis systems are also designed for either cemented or cementless fixation. For cemented fixation, the femoral component typically includes recesses or cement pockets. For cementless fixation, the femoral component is designed for primary fixation through a press-fit, and includes porous bone-engaging surfaces suitable for bone ingrowth. Both designs may include pegs designed to extend into prepared holes in the femur for stabilization of the implant. On occasion, the primary knee prosthesis fails. Failure can result from many causes, including wear, aseptic loosening, osteolysis, ligamentous instability, arthrofibrosis and patellofemoral complications. When the failure is debilitating, revision surgery may be necessary. In a revision, the primary knee prosthesis (or parts of it) is removed and replaced with components of a revision prosthetic system. When the tibial or femoral implant includes extensions (such as pegs or stems) that extend into the natural bone, a revision surgery usually requires a large resection of the bone in order to dislodge the extensions from the bone. This large resection not only complicates the surgery, it also requires removal of more of the patient's natural bone than is desirable. This removal of additional bone may further compromise the bone, increase the risk of onset of bone pathologies or abnormalities, or reduce the available healthy bone for fixation of the revision implant. Moreover, the large resection usually means that a larger orthopaedic implant is necessary to fill the space and restore the joint component to its expected geometry. This difficulty in dislodging the primary implant components from the bones is worsened by the fact that bone also grows into the extensions. Severing these connections may be problematic since not all of these areas are easily accessible without resecting large amounts of bone. In implants such as the Zimmer Trabecular Metal Monoblock tibia tray, some surfaces of the porous metal portion of the tibial tray may remain exposed above the tibial plateau after implantation. These exposed porous metal surfaces may be rough and may irritate the patient's soft tissue as the patient engages in normal day-to-day activities. Similar issues may be presented in other types of joint prostheses.
{ "pile_set_name": "USPTO Backgrounds" }
Effect of time and sex on tissue selenium concentrations in chicks fed practical diets supplemented with sodium selenite or calcium selenite. An experiment was conducted with 384 1-d-old male and female broiler-chicks. The basal corn-soybean meal diet (.07 ppm Se DM basis) was supplemented with 0, .1, .2, or .3 ppm added Se as either sodium selenite (Na2SeO3) or calcium selenite (CaSeO3), and fed for 1, 3, or 5 wk. There was no effect of Se source or level on feed intake or gain, but males consumed more (P less than .01) feed than females. There was no effect (P greater than .10) of sex or Se source on plasma, liver, or kidney Se concentration. The Se concentration of all tissues increased (P less than .01) with time and increasing dietary Se concentration. Based on multiple regression slope ratios of liver, kidney, and plasma Se concentrations, Se from CaSeO3 was as available (103%) as Se from Na2SeO3.
{ "pile_set_name": "PubMed Abstracts" }
2013 Victorino Cunha Cup The Victorino Cunha Cup is an annual Angolan basketball tournament held in honour of former Angolan basketball coach Victorino Cunha. The 5th edition (2013), ran from October 22 to 24, and was contested by the top four teams of the 2013 BAI Basket, and played in a round robin system. Recreativo do Libolo ended the tournament undefeated to win its first title. Schedule Round 1 Round 2 Round 3 Final standings Awards See also 2013 BAI Basket 2013 Angola Basketball Cup 2013 Angola Basketball Super Cup References Category:Victorino Cunha Cup seasons Victorino
{ "pile_set_name": "Wikipedia (en)" }
$(document).ready(function() { // $("#id_permissions_role").sSelect(); $("#newchannel-name").blur(function() { $("#name-spinner").spin('small'); var zreg_name = $("#newchannel-name").val(); $.get("new_channel/autofill.json?f=&name=" + encodeURIComponent(zreg_name),function(data) { $("#newchannel-nickname").val(data); zFormError("#newchannel-name-feedback",data.error); $("#name-spinner").spin(false); }); }); $("#newchannel-nickname").blur(function() { $("#nick-spinner").spin('small'); var zreg_nick = $("#newchannel-nickname").val(); $.get("new_channel/checkaddr.json?f=&nick=" + encodeURIComponent(zreg_nick),function(data) { $("#newchannel-nickname").val(data); zFormError("#newchannel-nickname-feedback",data.error); $("#nick-spinner").spin(false); }); }); });
{ "pile_set_name": "Github" }
Aorangi Forest Park Aorangi Forest Park is a protected area in the Wellington Region of New Zealand administered by the Department of Conservation (DOC). It had been called the Haurangi Forest Park but DOC changed to reflect the Māori name of the range protected by the park. There are six backcountry huts and a recreational hunting area in the park. There are deer, goats and pigs (in low numbers) in the park. See also Forest Parks of New Zealand Protected areas of New Zealand Conservation in New Zealand Tramping in New Zealand References External links Aorangi Forest Park at the Department of Conservation Aorangi Forest Park at Google Maps Category:Forest parks of New Zealand Category:Protected areas of the Wellington Region Category:Protected areas established in 1978
{ "pile_set_name": "Wikipedia (en)" }
In news stories about the student debt crisis, we hear about American young adults delaying the typical milestones of adulthood due to their student loans. They (well, we) postpone marriage, childbearing, and purchasing first homes. But what if you’re interested in a holier, more altruistic path? Men and women who want to join Catholic religious life must be debt-free before they even think about making their vows, and that’s a challenge for people who don’t realize their calling until after they’ve taken on student debt in the mid-five figures. [More] A group of Benedictine nuns in Texas are shocked that Walmart considers them a threat and ordered a “threat assessment” from their crack security team. The nuns had filed a shareholder resolution that was critical to Walmart. “The Benedictine Sisters of Boerne, Texas have written a letter to Lee Scott, Wal-Mart’s chief executive, to say they were “deeply disappointed, appalled and shocked.”
{ "pile_set_name": "Pile-CC" }
The world’s first extinguishing water facility in port The extinguishing water facility in the oil terminal in Malmö is completed. “It is a purely environmental investment, from today no contaminated extinguishing water or surface water will end up in the sea if a fire was to occur, which thankfully has never happened”, says Jens Haugsöen, oil terminal manager at CMP in Malmö. The work started in October last year on the world’s first extinguishing water facility in an oil terminal. If a fire was to occur, the contaminated extinguishing water is now dealt with in the best way possible in terms of the environment. The businesses in the oil terminal - Nordic Storage, Statoil, Vopak, OK/Q8, STS, Preem, Norcarb Engineered Carbons AB, Univar and Wibax – have paid 67%, and CMP is bearing the rest of the total of almost SEK 5 million which the project has cost to complete. Furthermore, the bulk of the operations have requirements from the Environment and Health Administration, and these have now been met”. The extinguishing water system is an addition to CMP’s ordinary surface water system. In the event of a fire, the surface water is redirected to the extinguishing water system. An approximately 800 meter long glass-fibre reinforced pipe – known as a GAP pipeline – has been laid in the ground to transport the extinguishing water and an approximately 22 metre pool has been made for temporary storage of the water. In principle, an extinguishing water plant doesn’t require any staff, one person starts the pump in the buffer pool when the level is sufficiently high and the plant then functions automatically. “Large diesel-powered motors pump the water”, explains Jens Haugsöen. The water is then conveyed from the pool onward via a pump to a 9,900 cubic metre SAFIR tank. Stefan Kristenssons Åkeri AB carried out the excavation work and built the buffer tank where the pumping stations direct all the surface water. Depåservice AB laid the underground GAP pipeline and also installed motorised valves and pump in conjunction with Malmströms El AB.
{ "pile_set_name": "Pile-CC" }
Radio frequency identification (RFID) technology provides an alternative to bar code reader technology for distinguishing and recording items for purchase. RFID may result in labor savings to retailers, since it may obsolete conventional methods of identifying items. One proposed method of processing items with RFID labels is to read the RFID labels in batch. For example, the processing method would include reading RFID labels on items while the items remain in a shopping cart, palette, or packaging. Technical limitations make this method of processing impractical. Numerous materials, including metals and liquids, can shield radio frequency (RF) energy. RFID labels can be damaged. Finally, RFID labels may be defective due to low yield rates. Therefore, it would be desirable to provide a system and method of determining unprocessed items.
{ "pile_set_name": "USPTO Backgrounds" }
Scott Report The Scott Report (the Report of the Inquiry into the Export of Defence Equipment and Dual-Use Goods to Iraq and Related Prosecutions) was a judicial inquiry commissioned in 1992 after reports of arms sales to Iraq in the 1980s by British companies surfaced. The report was conducted by Sir Richard Scott, then a Lord Justice of Appeal. It was published in 1996. Much of the report was secret. Background In the late 1980s, Matrix Churchill, a British (Coventry) aerospace quality machine tools manufacturer that had been bought by the Iraqi government, was exporting machines used in weapons manufacture to Iraq. According to the International Atomic Energy Authority, the products later found in Iraq were among the highest quality of their kind in the world. They were 'dual use' machines that could be used to manufacture weapons parts. Such exports are subject to government control, and Matrix Churchill had the appropriate government permissions, following a 1988 relaxation of export controls. Crucially, however, this relaxation had not been announced to Parliament – indeed, when asked in Parliament whether controls had been relaxed, the then-Secretary of State for Trade and Industry replied incorrectly that they had not. Matrix Churchill was contacted by HM Customs and Excise, under suspicion of exporting arms components to Iraq without permission. It had this permission but this was denied by the government, in line with the most recently announced policy on the matter. Matrix Churchill's directors were therefore prosecuted in 1991 by Customs and Excise for breaching export controls. The trial did not go well for the government – public interest immunity certificates obtained by the government to suppress some critical evidence (supposedly on grounds of national security) were quickly overturned by the trial judge, forcing the documents to be handed over to the defence. The trial eventually collapsed when former minister Alan Clark admitted he had been 'economical with the actualité in answer to parliamentary questions regarding what he knew about export licenses to Iraq. Report The Scott Report represents possibly the most exhaustive study produced to that date of the individual responsibility of ministers to Parliament. Scott comments on the difficulty of extracting from departments the required documents (some 130,000 of them in all) and notes how Customs and Excise could not find out what Ministry of Defence export policy was, and how intelligence reports were not passed on to those who needed to know. The Economist commented that "Sir Richard exposed an excessively secretive government machine, riddled with incompetence, slippery with the truth and willing to mislead Parliament". The report characterised the nature of the government as: Scott identified three main areas of democratic concern. First, the Import, Export and Customs Powers (Defence) Act 1939 was emergency legislation passed at the outbreak of the Second World War. It allowed the government to issue regulations which were not subject to resolutions in Parliament, for the duration of the emergency, which would make it a criminal offence to export particular goods to particular countries. While the Act should have been lapsed in 1945, it remained in force, and had been modified in 1990 so as to become part of the Import and Export Control Act 1990. The second area was the failure of ministerial accountability; the principle that "for every action of a servant of the crown a minister is answerable to Parliament". The third area was that of public-interest immunity certificates, which had been issued during the Matrix Churchill trial. As a result of these certificates, innocent men were in danger of being sent to prison, because the government would not allow the defence counsel to see the documents that would exonerate their clients. While some of these contained potentially sensitive intelligence material, many were simply internal communications: the certificates were intended to protect the ministers and civil servants who had written the communications, rather than the public interest. Scott states: Publication The publication of the report was seen by many as the nadir of the 1990s Conservative governments of the UK. Prior to the report's publication, those ministers who were criticised were given the opportunity to comment and request revisions. The 1,806-page report was published, along with a press pack which included a few relatively positive extracts from the report presented as if representative of the entire report, at 3:30pm. Given a then largely pro-government press, this proved effective at stalling an extensive analysis in the media. The report had to be debated in Parliament. Ministers criticised in the report were given advanced access to the report and briefed extensively on how to defend themselves against the report's criticisms. In contrast, according to senior Labour MP Robin Cook, the opposition were given just two hours to read the million-plus words, during which scrutiny they were supervised and prevented from making copies of the report. Finally, the Prime Minister, John Major, stated that a vote against the Government would be in effect a vote of no confidence, ensuring that Conservative MPs would not vote against, while a vote for was a vote exonerating the Government of any wrongdoing. Robin Cook worked with a team of researchers to scrutinise the report, and delivered "what was regarded as a bravura performance". Nonetheless, the Government won the vote 320–319. References Commentary by David Butler Q&A: The Scott Report, BBC News Robin Cook's obituary, BBC News. Category:1992 in the United Kingdom Category:1996 in the United Kingdom Category:Public inquiries in the United Kingdom Category:Judicial inquiries
{ "pile_set_name": "Wikipedia (en)" }
We're Smith & Williamson's charity The Bristol office of accountancy, investment management and tax group, Smith & Williamson, has announced BRACE as its charity of the year for 2016-17. Around 200 staff from Smith & Williamson’s Bristol office voted for their charity of choice and BRACE came out on top. Funds raised by the firm over the next 12 months will help to maintain BRACE’s cutting edge Brain Bank in the Learning and Research Centre at Southmead Hospital. Mike Lea, Managing Partner at Smith & Williamson’s Bristol office, recently visited the SW Dementia Brain Bank (SWDBB) where he met BRACE’s Chief Executive Mark Poarch and Brain Bank Manager Dr Laura Palmer for a tour of the facility and to find out more about the ground-breaking work that is being done. Mike Lea said, “Dementia touches the lives of so many people and BRACE is doing an incredible job in funding research to better understand this devastating condition. We are delighted to announce BRACE as the main beneficiary of our fundraising activities over the next 12 months. We look forward to hearing more about their work and doing as much as we can to help.” Mark Poarch, Chief Executive of BRACE, said, “We were delighted to hear of Smith & Williamson’s decision to support dementia research with BRACE. The backing of a major professional firm can have such an impact on our fundraising and help us to build urgently-needed resources for new science here in the South West.” He continued, “It was great to show Mike the world-leading research facilities represented by the Brain Bank. Funding research remains the best hope we have to find real treatments for this cruel condition. The Smith & Williamson team are setting about their fundraising task with great enthusiasm and I know they will make a big difference this year.”
{ "pile_set_name": "Pile-CC" }
Q: What is the basis for anointing of physical objects and who practices this? It's been a tradition of my family and friends to anoint a home with oil when a family moves in. I understand the significance of praying and dedicating the space as God's, used for His purposes and will - and my interpretation of anointing has mostly been along the lines of prayer in this manor. However is there a biblical reason or basis for the practice of anointing a physical object? Were does this originate and what Christian traditions practice it? A: Anointing an object certainly appears in the Bible, and is part of ancient Jewish practice: Then the Lord said to Moses, ‘Take the following fine spices: 500 shekels of liquid myrrh, half as much (that is, 250 shekels) of fragrant cinnamon, 250 shekels of fragrant calamus, 500 shekels of cassia – all according to the sanctuary shekel – and a hin of olive oil. Make these into a sacred anointing oil, a fragrant blend, the work of a perfumer. It will be the sacred anointing oil. Then use it to anoint the tent of meeting, the ark of the covenant law, the table and all its articles, the lampstand and its accessories, the altar of incense, the altar of burnt offering and all its utensils, and the basin with its stand. You shall consecrate them so they will be most holy, and whatever touches them will be holy. Exodus 30:22–29 [NIV] It's also part of the Catholic liturgy for consecrating an altar Reference. The oil of chrism which is used is a fragrant oil rather like that described in Exodus. However this use of oil is part of the consecration, the setting-apart, of the object to God's use. The objects would not be put to any other use whatsoever. Whether a house is set apart in quite the same way may be open to question, and most house-blessings (such as the one I was involved with recently) would involve sprinkling1 holy water. 1 Sprinkling is a technical term, which refers to splashing water around with a brush or some sort of shaker. It's not usually a delicate operation!
{ "pile_set_name": "StackExchange" }
package example.model; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; @Entity public class Customer1390 { @Id @GeneratedValue(strategy = GenerationType.AUTO) private long id; private String firstName; private String lastName; protected Customer1390() {} public Customer1390(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } @Override public String toString() { return String.format("Customer1390[id=%d, firstName='%s', lastName='%s']", id, firstName, lastName); } }
{ "pile_set_name": "Github" }
Background {#Sec1} ========== Congenital cataracts are diagnosed within the first year of life. These cataracts are one of the leading causes of blindness in children and are estimated to occur with a prevalence of 3--6 per 10,000 live births \[[@CR1]\]. Congenital cataracts may appear either in isolation or in association with other ocular or systemic anomalies. Up to 25 % of congenital cataracts are thought to be caused by genetic defects \[[@CR2]\]. The genetic landscape of mutations causing congenital cataract is extremely diverse; more than 40 genes and additional loci have been associated with nonsyndromic cataract \[[@CR2]--[@CR6]\]. *GCNT2* (glucosaminyl (N-acetyl) transferase 2, I-branching enzyme) was first identified in 2001 as the gene encoding for the glycosyltransferase responsible for the human blood group I antigen. Recessive mutations in *GCNT2* result in an adult i blood group phenotype, which is also associated with congenital cataracts in some cases \[[@CR7]\]. Alternative splicing of the *GCNT2* gene produces three transcripts (A, B, and C). The three transcripts share a common second and third coding exon with a unique first exon for each isoform; differing expression profiles were identified for the transcripts with only the *GCNT2B* isoform expressed in lens epithelial cells and only the *GCNT2C* isoform expressed in reticulocytes \[[@CR8]\]. To date, seven missense mutations, one nonsense mutation, and two large deletions have been reported; mutations in exon 1C, affecting only the *GCNT2C* isoform, cause the adult i blood group without cataracts while mutations/deletions affecting exons 2 and 3, shared by all isoforms, result in the adult i blood group along with congenital cataract \[[@CR7]--[@CR11]\]. Case presentation {#Sec2} ================= Patient 1(individual II:1) is an 18-month old Pakistani female affected with bilateral dense central congenital cataract (Fig. [1a,](#Fig1){ref-type="fig"} Table [1](#Tab1){ref-type="table"}) which were visually significant and required extraction at 2 months of age, mild asymmetry of the palpebral fissures, and left nasolacrimal duct obstruction; her development is normal and growth parameters are generally normal with the exception of borderline microcephaly (length 83.8 cm, 75--90th centile; weight 10.2 kg, 25--50th centile; and head circumference 44 cm (3rd centile)). Physical exam at 4 months of age identified hypotelorism (familial) and mildly widely-spaced nipples. Her younger brother, age 6 months, was similarly affected with visually significant bilateral dense central congenital cataracts requiring extraction around 2 months of age; his length (67.5 cm, 25--50th centile), weight (6.8 cm, 5--10th centile), and head circumference (42.5 cm, 10--25th centile) are all within the normal range (Table [1](#Tab1){ref-type="table"}). Family history shows unaffected second-cousin parents with additional endogamous mating within the family. A double second-cousin to the proband is affected with bilateral non-syndromic anophthalmia/ microphthalmia with no additional details available.Fig. 1Patient photographs and pedigree. **a** Photograph of Patient 1's eyes at 2 months of age showing bilateral cataract. **b** Pedigree showing both affected siblings with a homozygous deletion of 6p24.3 while the unaffected parents are heterozygous carriers. WT: wild type; black arrow indicates probandTable 1Phenotype and genotype information of the affected patientsPatientLens phenotypeOther featuresDevelopmentDeletionGenes involvedPatient 1Bilateral dense central congenital cataracts; extraction at \~2 months of ageBorderline microcephaly (3rd centile), mild asymmetry of the palpebral fissures, left nasolacrimal duct obstruction, hypotelorism, somewhat widely-spaced nipplesWNL97.9 kb homozygous deletion of 6p24.3The first coding exons of *GCNT2A* and *GCNT2B,* two 5'noncoding exons of *GCNT2A*, and a part of the region upstream of *TFAP2A*Patient 2Bilateral dense central congenital cataracts; extraction at \~2 months of ageNoneWNL97.9 kb homozygous deletion of 6p24.3The first coding exons of *GCNT2A* and *GCNT2B,* two 5'noncoding exons of *GCNT2A*, and a part of the region upstream of *TFAP2A* Materials and methods {#Sec3} ===================== Whole exome sequencing was performed by Macrogen (previously Axeq) and analyzed as previously described \[[@CR12]\]; briefly, exome data from the proband was analyzed using the SNP & Variation Suite (SVS; Golden Helix, Bozeman, MT, USA) to identify/exclude mutations in the coding and splicing regions of 40 known nonsyndromic cataract genes and 7 additional crystallins \[[@CR3]--[@CR6]\]; synonymous variants and variants with a frequency of \>1 % in the general population ([http://exac.broadinstitute.org](http://exac.broadinstitute.org/), <http://evs.gs.washington.edu/EVS/>, <http://www.1000genomes.org/>) were considered to be benign variants. Copy number variation analysis was completed by screening exome sequencing data using the Copy Number Inference From Exome Reads (CoNIFER) v0.2.2 software package as previously outlined \[[@CR13]\]; regions of interest were further verified by independent quantitative PCR reactions using DNA samples from the proband and other available familial samples with SYBR Green PCR Master Mix (Applied Biosystems/Life Technologies, Carlsbad, CA, USA). qPCR reactions utilized three region-specific probes (Additional file [1](#MOESM1){ref-type="media"}: Table S1) and were performed as follows: primers located within regions of interest were designed using Primer3Plus software (<http://sourceforge.net/projects/primer3/>) using qPCR settings. Each reaction was comprised of five nanograms of DNA in a total reaction volume of 12uL. Each primer set was run three times in triplicate using patient, parental or control DNA on a Bio-Rad CFX Connect Real-Time PCR machine (Bio-Rad, Hercules, CA, USA). A primer set for the housekeeping gene *RPPH1* (ribonuclease P RNA component H1) was used to normalize all data. A probe located in *NDP* (Norrie disease (pseudoglioma)), located on the X-chromosome, was used as a copy-loss control. All experiments included a no-template control and an unaffected human DNA sample with presumably normal copy number at each region for comparison. Copy number changes were calculated using the 2^-ΔΔCt^ method as previously described \[[@CR14]\]. Following qPCR confirmation, the size and exact breakpoints of the deletion were determined using a series of regular PCR reactions that utilized primers located on both ends of the region (as defined by CoNIFER and qPCR analysis) and standard conditions (Additional file [1](#MOESM1){ref-type="media"}: Table S1). Since the patients were apparently homozygous for the deletion, no amplification product indicated that the primer(s) are located inside of the deleted region while the presence of a PCR product indicated primers outside of the deletion. Once sequences bordering the deleted region on the centromeric and telomeric sides were determined, the corresponding primers were used to amplify a 1.5 kb region across the breakpoints. The resultant product was cloned into pCRII-TOPO® (Life Technologies, Carlsbad, CA, USA) vector using the manufacturer's protocols and sequenced bidirectionally with M13 forward and reverse primers using Big Dye Terminator v3 chemistry and an ABI 3730XL sequencer (Applied Biosystems/Life Technologies, Carlsbad, CA, USA); the obtained sequences were compared with the corresponding reference sequence using BLAST (<http://blast.ncbi.nlm.nih.gov/Blast.cgi>). Results and discussion {#Sec4} ====================== Review of the whole exome sequencing (WES) data from Patient 1 did not identify any potentially pathogenic variants (with only two synonymous variants) in known nonsyndromic cataract genes. The WES data was then analyzed for copy number variation which revealed a potential 208-kb deletion (6p24.3 chr6: 10,412,788-10,621,660) affecting *TFAP2A* and *GCNT2*. The deletion was verified using qPCR probes located in the first coding exon of *GCNT2* isoform A and the first exon of *TFAP2A*; the qPCR confirmed deletion of the *GCNT2* sequence in both unaffected parents (haploid, heterozygous) and affected children (complete loss, homozygous) while diploid copy of the *TFAP2A* sequence was identified in all family members. Further analysis of the region by a series of regular PCR reactions using affected DNA identified the centromeric breakpoint between chr6:10472330--10472606 (set 7; diploid) and chr6:10474759--10474901 (set 8; complete loss) and the telomeric breakpoint between chr6:10570580--10570905 (set 13, complete loss) and chr6:10571951--10572257 (set 14, diploid). Primer sets designed to span the deleted region produced a \~1.5 kb product from the DNA of the affected patients. Sequencing of this product identified the exact deletion breakpoint sites: their analysis revealed the presence of Alu repeats and specifically a 12-bp identical sequence at both sides of the deleted region of 97.974-kb (hg19, chr6: 10,473,864--10,571,838) (Fig. [2](#Fig2){ref-type="fig"}). The homozygous deletion encompassed four exons of *GCNT2* (the first two noncoding and one coding exons of isoform A and the first coding exon of isoform B) and extended 47.471-kb upstream of the most 5' exon of the *GCNT2* gene (Fig. [2](#Fig2){ref-type="fig"}). The distance from the telomeric end of the deletion to the nearest protein-coding gene, *TFAP2A* (transcription factor AP-2 alpha), is 54.300-kb. The distance from the centromeric end of the deletion to the fist exon of *GCNT2* isoform C is 13.922-kb. Although *GCNT2C* and *TFAP2A* were not included in the deletion, effects on their expression through possible interference with regulatory elements cannot be ruled out.Fig. 2Schematic presentation of the chromosome 6p24.3−24.2 region and the identified deletion. The UCSC Genome Browser ([http://genome.ucsc.edu](http://genome.ucsc.edu/)) view of the deleted region indicating the positions of genes is included; the deletion identified in the affected family is shown as a rectangular red box; the DNA sequence across the breakpoint for the deleted allele is shown at the bottom of the drawing with regions corresponding to the telomeric and centromeric flanks of the deletion indicated by dashed lines and a 12-nt repeat highlighted in red font Genomic deletions of *GCNT2* have been previously reported in two families with blood group i and congenital cataracts but both deletions included exons 2 and 3 which are shared by all isoforms \[[@CR7], [@CR11]\]. Borck and colleagues noted that the *GCNT2* locus is rich with Alu elements and therefore is likely a hotspot for deletions or duplications to occur \[[@CR11]\]. The *GCNT2* gene has three differentially expressed transcripts, with *GCNT2B* being the only isoform associated with lens function and *GCNT2C* being the only isoform expressed in red blood cells \[[@CR8]\]. The GCNT2 protein modifies the i antigen, a linear sphingoglycolipid present on the cell surface of most human cells as well as on glycoproteins in body fluids, into the active branched I antigen; the i/I antigens are thought to play a role in the regulation of cell growth and differentiation in the developing lens \[[@CR8], [@CR9]\]. The deletion described in this case report differs from previously reported deletions and mutations since it only affects the *GCNT2A* and *GCNT2B* isoforms and leaves the *GCNT2C* isoform intact. Previous studies demonstrated that only the *GCNT2B* isoform is expressed in lens epithelial cells and patients with mutations which specifically affect the C isoform demonstrate the adult i phenotype without congenital cataracts \[[@CR8]\]. Thus, the presence of cataract in the affected patients reported here with a clear disruption of *GCNT2A* and *B* isoforms only is consistent with the isoform-specific roles identified for this gene. Additionally, in the case reported here we were able to identify the exact sequences at the breakpoints and clearly implicate Alu-mediated non-homologous end-joining as a mechanism for this rearrangement. This mechanism has been previously reported by our and other groups \[[@CR15], [@CR16]\]). The deletion reported here also extends into the genomic region upstream of *GCNT2* and *TFAP2A* which are positioned in a head-to-head orientation. *TFAP2A* is approximately 100-kb distal to *GCNT2* and the deletion removes approximately 47-kb of genomic sequence between the two genes. It is possible that this deletion could affect *TFAP2A* function through removal/rearrangement of regulatory elements, as has been shown for other genes \[[@CR17]--[@CR19]\]. *TFAP2A* is a retinoic acid responsive transcription factor which is required for normal development of the lens and optic cup as well as for parts of the craniofacial region. Heterozygous mutations in *TFAP2A* cause Branchio-Ocular-Facial-Syndrome (BOFS) characterized by craniofacial phenotypes (distinct facial features, microcephaly, and cleft lip/palate), skin defects in the cervical region or regions around the ear, ocular defects (microphthalmia, coloboma, strabismus, cataract, or ptosis), lacrimal duct obstruction, and hearing loss \[[@CR20]--[@CR22]\]. Missense mutations account for the majority of *TFAP2A* variants, however whole gene deletions have also been reported. To date, no deletions affecting the upstream region of *TFAP2A,* but not the coding region itself, have been reported. Careful physical examination of the patients did not identify sufficient features to warrant a diagnosis of BOFS in the siblings, but Patient 1 did show borderline microcephaly, mild asymmetry of the palpebral fissures, left nasolacrimal duct obstruction, and somewhat widely spaced nipples. While the shared cataract phenotype observed in the affected siblings is consistent with the *GCNT2* deficiency alone, an effect of this deletion on the function of *TFAP2A* and the observed phenotypes cannot be completely ruled out. Interestingly, a double second-cousin to the proband has been reported to be affected with bilateral anophthalmia/ microphthalmia (A/M), an ocular condition that is more consistent with the *TFAP2A* spectrum. It is possible that the familial deletion expanded to include *TFAP2A* in this patient; alternatively, the A/M diagnosis may have an independent genetic etiology. Unfortunately, no other familial samples were available for further study. Conclusions {#Sec5} =========== We identified a \~98-kb homozygous deletion involving several exons of *GCNT2* and the region upstream of *TFAP2A* in two children affected with congenital cataracts from a consanguineous family of Pakistani decent. This cataract-causing deletion removes the first coding exons of *GCNT2* isoforms *A* and *B* but leaves the *GCNT2C* sequence intact, providing further support for the isoform-specific roles of this gene; this is the first disruption of *GCNT2* reported which does not affect isoform *C*. While the patients do not fit a diagnosis of BOFS, one sibling demonstrates mild overlap with the phenotypic spectrum, and therefore an effect of this deletion on the function of *TFAP2A* cannot be ruled out. Additional file {#Sec6} =============== Additional file 1: Table S1.Summary of PCR/qPCR reactions and copy number status in the affected family. (DOCX 21 kb) A/M : Anophthalmia/Microphthalmia BOFS : Branchio-ocular-facial-syndrome GCNT2 : Glucosaminyl (N-acetyl) transferase 2, I-branching enzyme TFAP2A : Transcription factor AP-2 alpha WES : Whole exome sequencing The authors also gratefully acknowledge the patients and their family for their participation in this research study. This work was supported by the National Institutes of Health awards R01EY015518 (EVS) and funds provided by the Children's Hospital of Wisconsin (EVS), along with 1UL1RR031973 from the Clinical and Translational Science Award (CTSA) program and the National Eye Institute of the National Institutes of Health under Award Number P30EY001931. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Funding {#FPar1} ======= This work was supported by the National Institutes of Health awards R01EY015518 (EVS) and funds provided by the Children's Hospital of Wisconsin (EVS), along with 1UL1RR031973 from the Clinical and Translational Science Award (CTSA) program and the National Eye Institute of the National Institutes of Health under Award Number P30EY001931. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Availability of data and materials {#FPar2} ================================== Data from this study that do not pertain to identifiable patient information are freely available and provided as supplemental material and/or can be obtained by contacting the corresponding author. Authors' contributions {#FPar3} ====================== EVS conceived and designed the study. DC performed ophthalmological evaluation and referred the patient to the study. LMR enrolled the family. HH and EW carried out the genetic studies. HH, EW, EVS, and LMR analyzed and interpreted the data. HH, EW, LMR, and EVS drafted the manuscript. All authors read and approved the final manuscript. Competing interests {#FPar4} =================== The authors declare that they have no competing interests. Consent to publish {#FPar5} ================== Written consent for publication was obtained from the parents of the patients. A copy of the consent is available for review by the Editor of this journal. Ethics approval and consent to participate {#FPar6} ========================================== This human study was approved by the Institutional Review Board of the Children's Hospital of Wisconsin and carried out in accordance with the Declaration of Helsinki. Written and informed consent for molecular studies was obtained from the parents of the patients. A copy of the consent is available for review by the Editor of this journal.
{ "pile_set_name": "PubMed Central" }
Q: FiwareLab Cygnus always get error to install According to this manual, I've try to install Cygnus on my Centos instance of FiwareLab but always I get the following error "Permission denied". I have used the default user "centos" but I don't get any result. Anyone can help me? Thanks a lot!! Here's a supplementary screen shot: A: The error you were facing was because the user has no permission to perform that operation on the specified directory. The solution, in this case, is use sudo: sudo cat > /etc/yum.repos.d/fiware.repo <<EOL (continue the lines) I hope it had helped you.
{ "pile_set_name": "StackExchange" }
Clackmannanshire and Dunblane (Scottish Parliament constituency) Clackmannanshire and Dunblane is a constituency of the Scottish Parliament (Holyrood). It elects one Member of the Scottish Parliament (MSP) by the plurality (first past the post) method of election. It is also one of nine constituencies in the Mid Scotland and Fife electoral region, which elects seven additional members, in addition to the nine constituency MSPs, to produce a form of proportional representation for the region as a whole. Created in 2011, the constituency covers much of the area previously in the abolished Ochil. Electoral region The other eight constituencies of the Mid Scotland and Fife region are Cowdenbeath, Dunfermline, Kirkcaldy, Mid Fife and Glenrothes, North East Fife, Perthshire North, Perthshire South and Kinross-shire and Stirling. The region covers all of the Clackmannanshire council area, all of the Fife council area, all of the Perth and Kinross council area and all of the Stirling council area. Constituency boundaries and council areas The Ochil constituency was created at the same time as the Scottish Parliament, in 1999, with the name and boundaries of a pre-existing Westminster (House of Commons) constituency. In 2005, however, Scottish Westminster constituencies were mostly replaced with new constituencies. The Ochil Westminster constituency, was divided between the Ochil and South Perthshire Westminster constituency and the Stirling Westminster constituency. The constituency covers all of the Clackmannanshire council area, while the rest of the Stirling council area is covered by the Stirling constituency. From the 2011 Scottish Parliament election, Ochil was largely replaced by an expanded constituency of Clackmannanshire and Dunblane. The electoral wards used in the creation of Clackmannshire and Dunblane are: Clackmannanshire West Clackmannanshire North Clackmannanshire Central Clackmannanshire South Clackmannanshire East Dunblane Bridge of Allan Member of the Scottish Parliament Election results 2010s Footnotes Category:Scottish Parliament constituencies and regions from 2011 Category:Politics of Stirling (council area) Category:Politics of Clackmannanshire Category:Constituencies of the Scottish Parliament Category:Constituencies established in 2011 Category:2011 establishments in Scotland
{ "pile_set_name": "Wikipedia (en)" }
Guest Blogger: A Christian Astronomer Reflects on the Total Solar Eclipse (This article was written to be published this coming Monday, August 21, but we decided to post it a few days early due to the tremendous interest in the upcoming total solar eclipse.) My name is Andy Puckett, and I’m a professional astronomer. When I look at the world around me, I tend to see the big picture. The Sun “rises and sets” because the Earth rotates. Seasons change due to the tilt of the Earth’s axis. The position and phase of the Moon are based on the predictable motions of the Moon and the Earth. And all of these are based on the physical laws of the universe: motion, gravitation, acceleration. I am also a Christian, so I see God’s hand in all of this. I know that He doesn’t move the moons and planets capriciously. I see the order and predictability of their motions. And I believe that God wrote the underlying laws of motion, and that he also gave me the curiosity to try to understand them. Today (August 21st), many of you may get to see a total eclipse of the Sun. That’s when the Moon gets directly between the Earth and the Sun, and you find yourself in the darkest part of the Moon’s shadow. The Moon is 400 times smaller than the Sun but also 400 times closer, which is the happy “coincidence” that makes this amazing event possible. But the Moon’s orbital plane doesn’t line up perfectly with the Earth’s, which is what prevents solar eclipses from being regular monthly events. It’s very rare that a total solar eclipse passes within driving distance of your house, and even rarer for one to pass directly over where you live. If you do happen to be within the 70-mile wide “path of totality” today, you’re in for a treat! For up to 2 minutes 40 seconds, it will become as dark as night; the wind will get cooler and change direction; the solar corona will pop into view; and everyone around you will know that they’ve experienced something extraordinary. Total Solar Eclipse I’m a scientist, and there’s great science to be done during an eclipse, but that’s not my plan for today. I’ve been looking forward to this eclipse for 20 years, so I’m going to just take it all in. And I’m going to make sure my family gets to experience it safely, including my brother-in-law Shannon and all of our kids. I hope to help them see the big picture, and God’s hand in all of it. A note from Shannon about this week’s article: Andy Puckett is my brother-in-law and the Assistant Professor of Astrophysics at Columbus State University in Columbus, GA. Andy is also a practicing Catholic and is perhaps more excited than anyone else I know about the much-anticipated Total Solar Eclipse, set to dazzle us this Monday. For this article, Orin and I asked Andy to do an eclipse-related followup to Orin’s joy-themed article from a few weeks ago, entitled “Ongoing Creation”. In that article, Orin asked the question: “What is it that you are doing these days, using the creative gifts given you, at the service of God and the Church?” In our view, through the witness of his Catholic faith and the joyful enthusiasm with which he shares his knowledge of our physical universe, Andy is daily answering God’s call to glorify God with his life. We thank Andy for taking the time to write this for us.
{ "pile_set_name": "Pile-CC" }
Tag Archives: bilderberg We are creating videos for the proper interpretation of the geopolitical and geo-economic events in the hope that when more people understand their true significance, they will be able to supplant the motives of those who are taking them for fools! Continue reading The Covert Hybrid WW3 Video Series→ In the early part of our uncivilization, there were only three estates, each with distinct political rights: Priests or spiritual lords Nobility or temporal lords Commoners or slaves Soon, the fourth estate emerged from free thinkers, philosophers, poets and playwrights. Wired technologies facilitated a new vocation called journalism. Just like its predecessor, journalism has its own limits, one of which is the unidirectional flow of information. Today, a more decentralized exercise of sharing information and views is helping shape public opinion, i.e. blogging. While the Fourth Estate has gone bigger and better, the First and Second Estate have decided to join forces in order to have full control of the components of the Fourth Estate. This resulted to a profound subjugation of the Third Estate. Collectively, this elite force dominating the planet right now is called the Deep State. Thanks to Rick2012 for the article below… The Dollar and the Deep State If we consider the Fed’s policies (tapering, etc.) solely within the narrow confines of the corporatocracy or a strictly financial context, we are in effect touching the foot of the elephant and declaring the creature to be short and roundish. I have been studying the Deep State for 40 years, before it had gained the nifty name “deep state.” What others describe as the Deep State I term the National Security State which enables the American Empire, a vast structure that incorporates hard and soft power–military, diplomatic, intelligence, finance, commercial, energy, media, higher education–in a system of global domination and influence. Back in 2007 I drew a simplified chart of the Imperial structure, what I called the Elite Maintaining and Extending Global Dominance (EMEGD): At a very superficial level, some pundits have sought a Master Control in the Trilateral Commission or similar elite gatherings. Such groups are certainly one cell within the Empire, but each is no more important than other parts, just as killer T-cells are just one of dozens of cell types in the immune system. One key feature of the Deep State is that it makes decisions behind closed doors and the surface government simply ratifies or approves the decisions. A second key feature is that the Deep State decision-makers have access to an entire world of secret intelligence. Here is an example from the late 1960s, when the mere existence of the National Security Agency (NSA) was a state secret. Though the Soviet Union made every effort to hide its failures in space, it was an ill-kept secret that a number of their manned flights failed in space and the astronauts died. The NSA had tapped the main undersea cables, and may have already had other collection capabilities in place, for the U.S. intercepted a tearful phone call from Soviet Leader Brezhnev to the doomed astronauts, a call made once it had become clear there was no hope of their capsule returning to Earth. There is another, more shadowy, more indefinable government that is not explained in Civics 101 or observable to tourists at the White House or the Capitol. The subsurface part of the iceberg I shall call the Deep State, which operates according to its own compass heading regardless of who is formally in power.The term “Deep State” was coined in Turkey and is said to be a system composed of high-level elements within the intelligence services, military, security, judiciary and organized crime. I use the term to mean a hybrid association of elements of government and parts of top-level finance and industry that is effectively able to govern the United States without reference to the consent of the governed as expressed through the formal political process. I would say that only senior military or intelligence officers have any realistic grasp of the true scope, power and complexity of the Deep State and its Empire.Those with no grasp of military matters cannot possibly understand the Deep State. If you don’t have any real sense of the scope of the National Security State, you are in effect touching the foot of the elephant and declaring the creature is perhaps two feet tall. The Deep State arose in World War II, as the mechanisms of electoral governance had failed to prepare the nation for global war. The goal of winning the war relegated the conventional electoral government to rubber-stamping Deep State decisions and policies. After the war, the need to stabilize (if not “win”) the Cold War actually extended the Deep State. Now, the global war on terror (GWOT) is the justification. One way to understand the Deep State is to trace the vectors of dependency. The Deep State needs the nation to survive, but the nation does not need the Deep State to survive (despite the groupthink within the Deep State that “we are the only thing keeping this thing together.”) The nation would survive without the Federal Reserve, but the Federal Reserve would not survive without the Deep State. The Fed is not the Deep State; it is merely a tool of the Deep State. This brings us to the U.S. dollar and the Deep State. The Deep State doesn’t really care about the signal noise of the economy–mortgage rates, minimum wages, unemployment, etc., any more that it cares about the political circus (“step right up to the Clinton sideshow, folks”) or the bickering over regulations by various camps. What the Deep State cares about are the U.S. dollar, water, energy, minerals and access to those commodities (alliances, sea lanes, etc.). As I have mentioned before, consider the trade enabled by the reserve currency (the dollar): we print/create money out of thin air and exchange this for oil, commodities, electronics, etc. If this isn’t the greatest trade on Earth–exchanging paper for real stuff– what is?While I am sympathetic to the strictly financial arguments that predict hyper-inflation and the destruction of the U.S. dollar, they are in effect touching the toe of the elephant. The financial argument is this: we can print money but we can’t print more oil, coal, ground water, etc., and so eventually the claims on real wealth (i.e. dollars) will so far exceed the real wealth that the claims on wealth will collapse. So far as this goes, it makes perfect sense. But let’s approach this from the geopolitical-strategic perspective of the Deep State: why would the Deep State allow policies that would bring about the destruction of its key global asset, the U.S. dollar? There is simply no way the Deep State is going to support policies that would fatally weaken the dollar, or passively watch a subsidiary of the Deep State (the Fed) damage the Deep State itself. The strictly financial arguments for hyper-inflation and the destruction of the U.S. dollar implicitly assume a system that operates like a line of dominoes: if the Fed prints money, that will inevitably start the dominoes falling, with the final domino being the reserve currency. Setting aside the complexity of Triffin’s Paradox and other key dynamics within the reserve currency, we can safely predict that the Deep State will do whatever is necessary to maintain the dollar’s reserve status and purchasing power. In my view, the euro currency is a regional experiment in the “bancor” model,where a supra-national currency supposedly eliminates Triffin’s Paradox. It has failed, partly because supra-national currencies don’t resolve Triffin’s dilemma, they simply obfuscate it with sovereign credit imbalances that eventually moot the currency’s ability to function as intended. Many people assume the corporatocracy rules the nation, but the corporatocracy is simply another tool of the Deep State. Many pundits declare that the Powers That Be want a weaker dollar to boost exports, but this sort of strictly financial concern is only of passing interest to the Deep State. The corporatocracy (banking/financialization, etc.) has captured the machinery of regulation and governance, but these are surface effects of the electoral government that rubber-stamps policies set by the Deep State. The corporatocracy is a useful global tool of the Deep State, but its lobbying of the visible government is mostly signal noise to the Deep State. The only sectors that matter are the defense, energy, agriculture and international financial sectors that supply the Imperial Project and project power. What would best serve the Deep State is a dollar that increases in purchasing power and extends the Deep State’s power. It is widely assumed that the Fed creating a few trillion dollars has created a massive surplus of dollars that will guarantee a slide in the dollar’s purchasing power and its demise as the reserve currency. Those who believe the Fed’s expansion of its balance sheet will weaken the dollar are forgetting that from the point of view of the outside world, the Fed’s actions are not so much expanding the supply of dollars as offsetting the contraction caused by deleveraging. I would argue that the dollar will soon be scarce, and the simple but profound laws of supply and demand will push the dollar’s value not just higher but much higher. The problem going forward for exporting nations will be the scarcity of dollars. If we consider the Fed’s policies (tapering, etc.) solely within the narrow confines of the corporatocracy or a strictly financial context, we are in effect touching the foot of the elephant and declaring the creature to be short and roundish. The elephant is the Deep State and its Imperial Project. Please support us by downloading our Towards Healthcare Emancipation – Second Edition, a fully illustrated eBook about how you can implement a low cost but extensive and decisively effective healthcare system in the comfort of your own home. The proceeds from this book will be used to fund our next project, Towards Energy Emancipation. The aim is to make the subject of free energy more understandable for the layman so that anybody could replicate and install his own power plant and be completely living off-grid. If you haven’t done so, please like our FB page to encourage others to learn more about our work. I have suffered numerous murder attempts. The people who tried to kill me include Henry Kissinger, George Bush Sr., David de Rothschild in Geneva. And the people who are now at the source of the problems in the United States include Frank Carlucci, James Baker, Paul Wolfenson, George Soros, Zbigniew Brzezinski, Timothy Geitner; these people are murderers and criminals, you need to arrest them. If they resist arrest, they are murderers, you must shot them. You must not let these people be free. This is important, they’re planning to start World War III, and murder billions of people. David de Rothschild in Geneva, don’t think that you’re safe in your castle there, you’re not. We know where you are, we know who you are. And you have this serial child rapist pope, Pope Malevolent XVI. Well, he’s going to be dead soon. And good riddance, Satan is waiting for you. There are a lot of these scumbags now, who are on the run, and we’ve got to push, we’ve got to remove them from power. They are preventing humanity from progressing. They’ve held back technical progress by at least a hundred years, if not, more. They are trying to murder four billion people. They are trying to start World War 3 right now in Iran and in Syria. And everybody say, “Gee, I don’t know what to do.” C’mon, it’s just a few old men. Why don’t you Americans put them in jail? And I’m letting you know now, there’s a threat against Japan. There’s a ship drilling nuclear bombs into the seabed off the shore of Cheba, it’s called Shikyu Maru. And if Japan is attacked again with a nuclear tsunami, then I’ve heard that there’s going to be retaliation. They’re going to sink a large rock formation in the Canary island which will cause a 300-foot tsunami to hit the East Coast of the United States including New York, Washington DC, and Miami. You must prevent such tragedy. You must arrest these criminals. We know who they are. And I have another thing; I want to tell the Jewish people: you are a god fearing good people ruled by Satan worshiping gangsters. Okay? The Israeli flag should have been the menorah. Instead, you have a satanic symbol. Okay? That is a satanic symbol on your flag — that is not a Jewish symbol. Why isn’t the menorah on the Israeli flag? I tell why, because the Rothschild family, who created Israel, worship Lucifer. They worship Satan. They don’t worship Yahweh. They’re not real Jews. Their real name is Bauer. They’re using you. And they’re making fool of yourselves by going along with this insane plan. Hurry up and arrest those criminals who have taken leaderships of your society. This is not a joke. There’s enough evidence out there for everybody to know it’s true. It’s all over there. Anybody who has done any research in the well documented thoroughly proven facts will know that a criminal gang has control of your financial system for the past three hundred years, and they are planning genocide. They must be stop. We will stop them. This is a declaration of war. Do you hear me, David the Rothschild? Okay? You’re not safe anymore. We’re tired of your murderous games. I’ve heard that Evelyn de Rothschild, a family leader, is now quietly hiding in his castle in England. Well, Evelyn, tell your family to stand down. Tell them enough is enough. No genocide. Your family is in danger unless you stop this madness. Time is running out. I don’t like to have an angry face just as we approach Christmas. This should be a time now, we have the possibility to have the richest possible boom in the history of civilization. We have the technology to turn the desert green. We can refill the ocean with fish. We can have world peace. And all that’s blocking us is a few dozen old men. Please everybody arrest them. What is wrong with you people? We can have world peace. We can have all these good things. We’re just being block by a tiny group of old men who worship Satan. That’s a fact. Please, we must save the planet. Thank you. Final thing to say: this is not getting the bad guys. It never has been. But they are trying to get us. And after certain point, after so many murder attempts, after so much harassment, telling everybody I knew that I have gone insane, I was taking drugs. Going around and killing people I’ve known. You know, they murdered more people than anyone else in history. They’re responsible for World War 1. They are responsible for World War 2. They’re responsible for the holocaust. They killed Kennedy. They killed Martin Luther King. They killed John Lennon. They killed Michael Jackson. They’ve murdered leaders all over the world. They are the worst type of gangsters. They’ve turned the United States into a banana republic. And they are trying to turn Europe into fascist dictatorship. And all they have as their weapon is an illusion; fake money that’s not backed by anything real. We’re cutting off their money, and we’re going to put them in jail. They have to surrender. We can have world peace. We can have prosperity. We can have increased longevity. We can rid ourselves of this nightmare. It’s just up to us. Everybody who’s listening to this, do what you can. Remember, we outnumber them, a billion to one or more. Certainly, even in the area of the most infested, you outnumber them a hundred to one. Arrest the criminals you know, we’re closest to you, we can save this planet. Thank you. ___ Kim Jong Il was murdered last Saturday in a major power struggle that’s taking place here in Asia. The Rothschild family is trying to replace him with Kim Jong Un, who’s a playboy educated in Switzerland, who they hope would follow their orders, in exchange with beautiful woman and fancy cars and other toys. But this is now a chance for the people in the Korean Peninsula to become independent once again. They have been artificially divided in order to put under the control of the forces in Europe. There is no need for the Korean people to be divided. United, they will one of the strongest countries on Earth. There’s a chance for peace in the Korean peninsula, and of course, that would led to a boom never seen before in this region of the Earth. At the same time, the network of North Korean agents in Japan pretending to be Japanese is being dismantled. People are being arrested. Japan is going to be free from the control of foreign forces pretending to be Japanese. And this is a chance for the Japanese and Korean people who are cousins if not brothers to have friendly and prosperous relations. Remember, the Rothschilds need war to control us. They need war to put us in debt. There’s no need for humans to fight each other. We can have world peace. This is a chance. A power struggle has begun. We must not give them the chance to once again put us under their control. Humanity can now free itself. The battle has begun. We must fight them on every front until there’s world peace, and there’s an end to poverty, and an end to environmental destructions. We can accomplish that within a matter of months, once we get these murderous Satan worshiping cabal. It is started. This message is primarily directed to the WhiteHats and other entities working for the light. The days ahead will be heating up, but as Alfred Lambremont Webre had said, “Just enjoy the show”. For those who have been asking, how they could join the WDS, you don’t need to. Organize your own group and effect the arrest. The evidence is overwhelming against these people, and all legal actions are useless if these cabalists are still out there. This week’s update from Fulford, which came after his foiled assassination, is a very strong signal that they can make good of their promises to kill or arrest any member of the Cabal within their sphere of influence. It is time for the WhiteHats to do likewise. Note: If you’re new to this site, you can read all Ben Fulford’s update here. And if you want to fully understand what’s going on behind the scene, try hovering on the “Global Issues” link of the main menu above. Thanks.
{ "pile_set_name": "Pile-CC" }
Q: C++ Issue with median function using arrays I am having trouble getting this function to work. I am trying to write a median function that takes a user entered array and size, validates that it is correct and then sorts it and displays the median and sorted array. I have tried several different things and no matter what I try I am unable to get this program to work. Any help would be appreciated. Thank you very much. #include <iostream> #include <iomanip> using namespace std; double median(int n[], int size); int main(int argc, char** argv) { cout << "Calculate The Median of an Array" << endl; cout << "---------------------------------" << endl; int size, n; cout << "Array Size (Maximum is Ten)? "; cin >> size; if (size > 10 || size < 0) { cout << "Invalid size. Please Re-enter." << endl; }; cout << "Array Contents? "; cin >> n; if ([n] != size) { cout << "Invalid Array. Please Re-enter. " << endl; }; median(n, size); return 0; }; double median(int n[], int size) { // Allocate an array of the same size and sort it. double* dpSorted = new double[size]; for (int i = 0; i < size; ++i) { dpSorted[i] = n[i]; }; for (int i = size - 1; i > 0; --i) { for (int j = 0; j < i; ++j) { if (dpSorted[j] > dpSorted[j+1]) { double dTemp = dpSorted[j]; dpSorted[j] = dpSorted[j+1]; dpSorted[j+1] = dTemp; }; }; }; // Middle or average of middle values in the sorted array. int median = 0; if ((size % 2) == 0) { median = (dpSorted[size/2] + dpSorted[(size/2) - 1])/2.0; } else { median = dpSorted[size/2]; }; cout << "Median of the array " << dpSorted << "is " << median << endl; }; I am getting the following errors and I cant figure out how to fix them. 34 16 C:\Users\ryanw\Desktop\C++\Labs\Lab 6\main.cpp [Error] invalid conversion from 'int' to 'int*' [-fpermissive] and 18 8 C:\Users\ryanw\Desktop\C++\Labs\Lab 6\main.cpp [Note] initializing argument 1 of 'double median(int*, int)' A: I'm inlining the explanations as comments in the code. Here is the array version. I'm keeping it as close to OP's source as possible. #include <iostream> #include <iomanip> using namespace std; double median(int n[], int size); int main() { // the parameters asre not being used. You can safely leave them out. cout << "Calculate The Median of an Array" << endl; cout << "---------------------------------" << endl; unsigned int size; // unsigned disallows negative numbers. Lest testing required int n[10]; // n is an array of 10 elements cout << "Array Size (Maximum is Ten)? "; cin >> size; while (size > 10) { // repeat until user provides a valid size cout << "Invalid size. Please Re-enter." << endl; cin >> size; }; // the above loop will be an infinite loop if the user types in a value // that cannot be converted into an integer. cout << "Array Contents? "; for (unsigned int index = 0; index < size; index++) { cin >> n[index]; } // don't need to test the size. This is ensured by the for loop. Mostly // for now we are ignoring the simple problem: "what if the user inputs // a value that is not an integer? median(n, size); return 0; }// don't need ; after function. double median(int n[], int size) { // Don't need an array of doubles. Ignoring it. // assuming the logic here is correct. If it isn't, that's a different // topic and another question. for (int i = size - 1; i > 0; --i) { for (int j = 0; j < i; ++j) { if (n[j] > n[j+1]) { int dTemp = n[j]; n[j] = n[j+1]; n[j+1] = dTemp; }; }; }; // Middle or average of middle values in the sorted array. double result = 0; //reusing an identifier is a dangerous business. Avoid it. if ((size % 2) == 0) { result = (n[size/2] + n[(size/2) - 1])/2.0; } else { result = n[size/2]; }; cout << "Median of the array is " << result << endl; // it's harder to print out an array than I think it should be. // Leaving it out for now // Other than main, a function with a return type must ALWAYS return. return result; } And with std::vector and other library wizardry: #include <iostream> #include <iomanip> #include <vector> #include <algorithm> // using namespace std; generally should avoid this // moving median up here so forward declaration isn't needed double median(std::vector<int> &n) { //don't need size. Vector knows how big it is std::sort(n.begin(), n.end()); // use built-in sort function double result = 0; auto size = n.size(); if ((size % 2) == 0) { result = (n[size/2] + n[(size/2) - 1])/2.0; } else { result = n[size/2]; }; // since this function calculates and returns the result, it shouldn't // also print. A function should only do one thing. It make them easier // to debug and more re-usable return result; } int main() { // can chain couts into one // endl is more than just a line feed and very expensive. Only use it // when you need the message to get out immediately std::cout << "Calculate The Median of an Array\n" << "---------------------------------\n" << "Array Size (Maximum is Ten)? " << std::endl; unsigned int size; std::cin >> size; while (size > 10) { // here we use endle because we want he user to see the message right away std::cout << "Invalid size. Please Re-enter:" << std::endl; std::cin >> size; }; std::vector<int> n(size); std::cout << "Array Contents? " << std::endl; for (int & val: n) // for all elements in n { std::cin >> val; } std::cout << "Median of the array is " << median(n) << std::endl; return 0; }
{ "pile_set_name": "StackExchange" }
Q: regex regarding symbols in urls I want to replace consecutive symbols just one such as; this is a dog??? to this is a dog? I'm using str = re.sub("([^\s\w])(\s*\1)+", "\\1",str) however I notice that this might replace symbols in urls that might happen in my text. like http://example.com/this--is-a-page.html Can someone give me some advice how to alter my regex? A: So you want to unleash the power of regular expressions on an irregular language like HTML. First of all, search SO for "parse HTML with regex" to find out why that might not be such a good idea. Then consider the following: You want to replace duplicate symbols in (probably user-entered) text. You don't want to replace them inside a URL. How can you tell what a URL is? They don't always start with http – let's say ars.userfriendly.org might be a URL that is followed by a longer path that contains duplicate symbols. Furthermore, you'll find lots of duplicate symbols that you definitely don't want to replace (think of nested parentheses (like this)), some of them maybe inside a <script> on the page you're working on (||, && etc. come to mind. So you might come up with something like (?<!\b(?:ftp|http|mailto)\S+)([^\\|&/=()"'\w\s])(?:\s*\1)+ which happens to work on the source code of this very page but will surely fail in other cases (for example if URLs don't start with ftp, http or mailto). Plus, it won't work in Python since it uses variable repetition inside lookbehind. All in all, you probably won't get around parsing your HTML with a real parser, locating the body text, applying a regex to it and writing it back. EDIT: OK, you're already working on the parsed text, but it still might contain URLs. Then try the following: result = re.sub( r"""(?ix) # case-insensitive, verbose regex # Either match a URL # (protocol optional (if so, URL needs to start with www or ftp)) (?P<URL>\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]) # or | # match repeated non-word characters (?P<rpt>[^\s\w])(?:\s{0,100}(?P=rpt))+""", # and replace with both captured groups (one will always be empty) r"\g<URL>\g<rpt>", subject) Re-EDIT: Hm, Python chokes on the (?:\s*(?P=rpt))+ part, saying the + has nothing to repeat. Looks like a bug in Python (reproducible with (.)(\s*\1)+ whereas (.)(\s?\1)+ works)... Re-Re-EDIT: If I replace the * with {0,100}, then the regex compiles. But now Python complains about an unmatched group. Obviously you can't reference a group in a replacement if it hasn't participated in the match. I give up... :(
{ "pile_set_name": "StackExchange" }
Medical Research Council Technology LifeArc, formerly known as the Medical Research Council Technology (MRC Technology, MRCT) is a British life science medical research charity. It was established in 2000 to translate the work of UK Medical Research Council (MRC) research scientists. Today, LifeArc provides intellectual property identification, protection and commercialisation, technology development, diagnostic development, early stage drug discovery and antibody humanization services for the MRC, academia, biotechnology and pharmaceutical organisations and charities, aiming to move promising medical research forward into viable and accessible patient treatments. Profits from LifeArc's activities are reinvested into further research. History LifeArc started as the Medical Research Council Liaison Office in 1984, and in 1986 the MRC Collaborative Centre, a laboratory-based technology transfer function, was founded. In 1993, the Liaison Office became MRC's Technology Transfer Group, responsible for office based patenting and licensing. The organisation was set up as a charity and a company limited by guarantee in 2000 to incorporate patenting, licensing and research functions. On 15 June 2017 it officially became LifeArc. Activities LifeArc has humanised a number of antibodies on behalf of other organisations. Four of these, Tysabri (Biogen Idec/Elan), Actemra (Hoffmann-La Roche/Chugai), Entyvio (Millenium Pharma/Takeda) and Keytruda (Merck/MSD), are now on the market. In 2010, LifeArc signed a deal with the drug company AstraZeneca to share chemical compounds to help identify potential treatments for serious diseases. LifeArc is a member of a Global Drug Discovery Alliance along with the Centre for Drug Research and Development, the Scripps Research Institute, Cancer Research Technology, the Lead Discovery Centre and the Centre for Drug Design and Discovery, dedicated to translating health research into new medicines and working together to improve the conversion of global early-stage research into much-needed new therapies. Through its earnings from licensing agreements, LifeArc provides funding for academic research and early-stage medical research. Dementia Consortium was launched in December 2013 - a unique £3m drug discovery collaboration between Alzheimer's Research UK, LifeArc and pharmaceutical companies Eisai and Lilly. In March 2019, LifeArc joined with Cancer Research UK and Ono Pharma to progress new immunotherapy drug targets for cancer. In May 2019, LifeArc announced it had sold part of its royalty rights for Keytruda to a subsidiary of Canada Pension Plan Investment Board (CPPIB) for US$1.297 billion, making it one of the biggest UK medical charities by size of investment. Key achievements References External links LifeArc's website Category:Technology transfer Category:Companies based in the London Borough of Camden Category:Medical research institutes in the United Kingdom Category:Charities based in London
{ "pile_set_name": "Wikipedia (en)" }
Influence of Danazol on gonadotropin secretion and synthesis by rat pituitary cells in cultures. Comparison with gonadal steroids. Increasing concentrations of estradiol, testosterone, progesterone (1.10(-10) to 1.10(-7] and Danazol (1.10(-9) to 1.10(-6) M) have been added to male rat pituitary cells maintained in monolayer cultures for a preincubation period of three days followed by a six hour incubation with or without GnRH (1.10(-8) M). Concentrations of LH and FSH have been assessed in the culture media and in the cells at the end of the experiments allowing an estimation of the influence of these steroids on gonadotropin release and synthesis. In these experimental conditions, estradiol does not modify basal and GnRH induced FSH release and synthesis but reduces the GnRH-induced response of LH. Testosterone and progesterone stimulate synthesis of FSH but inhibit synthesis and secretion of LH in the presence of GnRH. These results have been compared with those of literature. Danazol, in the same experimental conditions, stimulates synthesis of gonadotropins and simultaneously inhibits their release induced by the presence of GnRH. We conclude that Danazol is able to act at the pituitary level as testosterone which is in good agreement with its androgenic properties.
{ "pile_set_name": "PubMed Abstracts" }
21 Reasons Why We Wish Sheldon Cooper Was Our Best Friend 10/22 9. He’s brimming with the greatest relationship advice. Relationship troubles got you down? Take a seat on Dr. Sheldon's couch—just not in his spot!—and let him help you sort out your romantic woes. He might have the greatest track record when it comes to navigating his own relationship with Amy, but he's way cheaper than a licensed therapist.
{ "pile_set_name": "Pile-CC" }
Denver Premium Outlets® PROPERTY OVERVIEW THINK INSIDE THIS BOX. Great Space Available! Complete this form to have us contact you about leasing opportunities. CONTACT US Located in Thornton, Colorado in the northern part of the Denver metro area, Denver Premium Outlets is scheduled to open in 2018 and will serve the metropolitan Denver market. Positioned along I-25, the north- south interstate running between Denver and Ft. Collins, the center will be located at the intersection of I-25 and Baseline Road, which carries 112,000 cars daily. The center has excellent visibility along I-25. Denver, Colorado is a market of 2.8 million people and is the 21st largest metro area in the United States. This center will be the only outlet center serving the north side of Denver. The Denver market also receives 12.7 million overnight visitors per year generating $3 billion in total spending. Positioned north of Denver, east of the Boulder market with 313,000 people just 15 miles away, southwest of the Greeley market with 267,000 people 30 miles away, and south of the Loveland/Ft. Collins market with 316,000 people 27 miles away, Denver Premium Outlets will benefit from these populous areas as well. GIFT CARDS CONTACT US Property Management: The Property Management Team has the primary responsibility for maintaining Simon's industry leading position, by providing our customers a quality shopping experience. This includes focus on such diverse elements as: quality of service, safety, convenience, visual appeal, cleanliness and comfort. Property Management fulfills all day-to-day operational responsibilities at the properties, as well as managing operational and upgrade capital investments to insure a consistent and reliable retail product with desirable customer touchpoints. Mall Manager Assistant Mall Manager WE WANT TO HEAR FROM YOU. *oops. you missed a few. Are you a Simon retailer? Yes No RequiredYes, I want Simon to be able to contact me as I am 18 years of age or older and agree to Simon Property Group’s Terms of Use, Privacy Policy & Cookie Policy. You agree that by providing personal information on this page, you are consenting to Simon’s use, storage and maintenance of the information for the intended purposes. WE WANT TO HEAR FROM YOU. *oops. you missed a few. Less than 12 month lease 12+ month lease RequiredYes, I want Simon to be able to contact me as I am 18 years of age or older and agree to Simon Property Group’s Terms of Use, Privacy Policy & Cookie Policy. You agree that by providing personal information on this page, you are consenting to Simon’s use, storage and maintenance of the information for the intended purposes.
{ "pile_set_name": "Pile-CC" }
Vancomycin-resistant Enterococcus faecium sensitivity to isopropyl alcohol before and after implementing alcohol hand rubbing in a hospital. A recent study reported enterococci that developed alcohol tolerance. We measured minimum inhibitory concentrations (MICs) of isopropyl alcohol against 55 vancomycin-resistant Enterococcus faecium. We did not find an increase in MICs when comparing the periods before and after the use of alcohol for hand hygiene in a hospital, and we did not find a single isolate with a MIC higher than 11.5%. We consider alcohol to still be an effective measure for hand antisepsis.
{ "pile_set_name": "PubMed Abstracts" }
636 F.2d 761 205 U.S.App.D.C. 53 UNITED STATES of Americav.Bernard GIBSON, Appellant.UNITED STATES of Americav.Deborah Y. HAGANS, Appellant. Nos. 80-1225, 80-1228. United States Court of Appeals,District of Columbia Circuit. Argued Sept. 25, 1980.Decided Nov. 24, 1980. Appeal from the United States District Court for the District of Columbia (D.C. Criminal No. 79-00552). Patrick J. Christmas, Washington, D. C., for Bernard Gibson. James H. Craddock, Washington, D. C., (appointed by this Court) for Deborah Y. Hagans. Charles W. Brooks, Asst. U. S. Atty., with whom Charles F. C. Ruff, U. S. Atty., John A. Terry, Michael W. Farrell and James F. Rutherford, Asst. U. S. Attys., Washington, D. C., were on the brief, for appellee. Before ROBINSON, WILKEY and GINSBURG, Circuit Judges. Opinion for the Court filed by Circuit Judge GINSBURG. GINSBURG, Circuit Judge: 1 Defendants Gibson and Hagans appeal from a conviction for possession of heroin with intent to distribute. The appeal raises four issues: the legality of two searches conducted at the time of arrest and the propriety of two evidentiary rulings made by the district court. I. Facts 2 Officer Haskins, an officer regularly assigned to narcotics investigations, was stationed at a third floor window of an apartment building in an area where residents had complained about narcotics transactions. He observed a Cadillac Seville carrying four persons pull into a parking lot adjoining the building and park "almost right up against the building." Transcript at 14. Haskins estimated that he was between thirty and forty-five feet from the occupants of the car. Using binoculars, Haskins observed this sequence of activity: defendant Gibson, seated in the driver's seat, counted out numerous glassine packets containing a white substance; defendant Hagans, seated behind Gibson, passed Gibson a sum of money; Gibson put most of the money and one of the packets into a black purse and gave the remaining money and packets to Hagans; Gibson then placed the black purse between the armrests of the front seat of the car. 3 Shortly thereafter, Haskins, joined by back-up officers, approached the car, identified himself, and ordered the four occupants from the car.1 While other officers held Gibson and searched Hagans, Haskins took the black purse from the car. He opened it and found $1325 and two packets of white powder, later identified as heroin. After arresting Gibson and Hagans, Haskins searched the trunk of the car and found a "partially opened" brown paper bag. Transcript at 32. He opened the bag further and removed from it two large vials of preludin pills. Meanwhile, officers had found sixteen packets of heroin and $60 on defendant Hagans and $561 on defendant Gibson. 4 The government charged Gibson and Hagans with possession of heroin and Gibson with possession of the preludin pills. The defendants moved to suppress the evidence found in the black purse and paper bag. After the district court denied the motion, the defendants agreed to a stipulated trial without a jury. The government in turn dismissed all charges except the charge of possession of heroin with intent to distribute. The district court found both defendants guilty of that charge. II. Fourth Amendment Issues 5 Defendants attack the searches of both the black purse and the paper bag. Since the contents of the paper bag related solely to the charge against Gibson that was dismissed, we need not address the legality of that search. The two vials of preludin pills could not have contributed to the defendants' convictions for heroin possession. Thus the failure to suppress, even if erroneous, was not prejudicial. 6 Seizure of the black purse from the car was permissible under the automobile exception to the warrant requirement. Under Chambers v. Maroney, 399 U.S. 42, 90 S.Ct. 1975, 26 L.Ed.2d 419 (1970), and our own decision in United States v. Hawkins, 595 F.2d 751 (D.C.Cir.1978), cert. denied, 441 U.S. 910, 99 S.Ct. 2005, 60 L.Ed.2d 380 (1979), the police could choose either to detain the car while seeking a warrant or to search the car immediately. 7 Defendants argue, however, that once the purse was seized, Arkansas v. Sanders, 442 U.S. 753, 99 S.Ct. 2586, 61 L.Ed.2d 235 (1979), mandated a warrant prior to police search of the purse's interior. We pretermit that argument, because the search was justified on other grounds. Officer Haskins testified that he observed Gibson putting packets containing a white substance into the black purse. This observation, we conclude, brings the case within the court's "plain view" holding in United States v. Johnson, 561 F.2d 832 (D.C.Cir.) (en banc ), cert. denied, 432 U.S. 907, 97 S.Ct. 2953, 53 L.Ed.2d 1080 (1977). 8 As a threshold matter, we note that Officer Haskins' use of binoculars to observe the activity in the car did not violate the Fourth Amendment. The car in which defendants were observed was parked in an open lot alongside an apartment building. Anyone happening along the street could have glanced into the car and observed the narcotics transaction.2 A person at any of the windows on the side of the building at which Officer Haskins was stationed might have looked into the car.3 Situated as they were, the defendants "had no right to assume that law enforcement officers would not enhance their ability to see ... them by use of various artificial means such as binoculars." United States v. Moore, 562 F.2d 106, 112 (1st Cir. 1977), cert. denied, 435 U.S. 926, 98 S.Ct. 1493, 55 L.Ed.2d 521 (1978). See United States v. Powell, 638 F.2d 71, (9th Cir. 1979) (amended Jan. 29, 1980) (upholding a conviction based in part on the actions of an officer who, standing 20-25 yards from a truck, used binoculars to peer into the truck).4 9 Officer Haskins' lawful observation of Gibson placing glassine packets in the black purse, and the police action taken within minutes thereafter make the instant case clearer than the one the court confronted en banc in Johnson, supra. There, a police officer, peering through the basement window of a residence, saw three men seated at a table holding narcotics paraphernalia and "a pyramid of white powder eight to ten inches high." 561 F.2d at 835. The officer returned forty minutes later with other officers and entered the house without a warrant. The three men were arrested, but the narcotics were no longer in sight. The officers thereupon searched the basement and eventually found bundles of narcotics between mattresses on a bed and in a canvas bag concealed in an old rug. The en banc opinion in Johnson concentrated on the questions whether the officer was trespassing when he looked through the basement window and whether, since there was a forty minute delay before entering the house, the officers should have obtained a warrant. Resolving those questions against the defendants, Judge McGowan, writing for the court, turned finally to the warrantless search of the basement. He reasoned that "the police ha(ving) seen a crime actually in progress with contraband in plain view ... they were fully authorized both to make arrests and to seek out the contraband." Id. at 844. Thus the search power could be "viewed as incident to arrest, or as deriving independently from the initial observation of the contraband." Id. at 845. 10 The instant case presents neither of the features that made Johnson problematic. No considerable time span separated the sighting of the packets from the search. Rather, the search followed on the heels of the observation. No extensive quest was involved. Officer Haskins proceeded at once to the place where the packets rested.5 In sum, guided as we are by the Johnson opinion, we find no error in the failure to suppress the evidence found in the black purse. III. Evidentiary Rulings 11 Defendant Hagans attacks two of the district court's evidentiary rulings; both challenges are meritless. 12 First, Hagans complains that Larry Kenan, a lay witness for defendants, was not allowed to testify that in his opinion he could not have seen into the interior of the car if he had been standing at a second story window using binoculars. Kenan and defendant Gibson had attempted to recreate the circumstances surrounding the arrest in an effort to show that Officer Haskins could not have seen into the car carrying Gibson and Hagans. The two went to a second story window of the building in which Haskins had been stationed, made observations, and took pictures. They did not, however, take along binoculars. Kenan was allowed to testify about what he saw during the experiment but he was not allowed to speculate about what he might have seen if he had used binoculars. This was not error. While a lay witness may offer opinion testimony when that testimony will be helpful to the trier of fact, Fed.R.Evid. 701, we know of no case holding that a trial judge must permit a lay witness to use one set of observations as the foundation for an opinion about what he might have seen under different circumstances. 13 Second, Hagans argues that the district court improperly excluded an advertising brochure displaying a Cadillac Seville. The brochure was offered to buttress the defendants' theory that Officer Haskins could not have seen into the car. The district court found, however, that the car portrayed in the brochure differed in a material respect from the car in which Gibson and Hagans were found. The district court did allow the defendants to introduce photographs of the actual car involved in the episode, photographs Gibson had taken at the scene of the arrest. In view of the district court's "wide discretion to admit or exclude evidence where the question is one of relevancy or materiality," United States v. Morgan, 581 F.2d 933, 936 (D.C.Cir.1978), we find no error in the ruling excluding the brochure. Conclusion 14 The items seized from the black purse were in "plain view" of the arresting officer; therefore they were properly allowed into evidence. No other alleged error called to our attention warrants reversal of the convictions. Accordingly, the judgment of the district court is 15 Affirmed. 1 The other two occupants were not arrested and are not involved in this appeal 2 Cf. Cardwell v. Lewis, 417 U.S. 583, 590, 94 S.Ct. 2464, 2469, 41 L.Ed.2d 325 (1974) (plurality opinion) ("A car has little capacity for escaping public scrutiny. It travels public thoroughfares where both its occupants and its contents are in plain view.") 3 One of the defendants' witnesses testified that, during an experiment designed to duplicate the facts surrounding the arrest, he could not see with his naked eye into the car's interior from a second floor window. At another point, however, the same witness testified that he could in fact see the back of the front seat from the window. Transcript at 103 4 See also United States v. Minton, 488 F.2d 37 (4th Cir. 1973) (per curiam) (officer used binoculars to watch defendant unload one-gallon jugs of illicit whiskey from a truck), cert. denied, 416 U.S. 936, 94 S.Ct. 1936, 40 L.Ed.2d 287 (1974); United States v. Loundmannz, 472 F.2d 1376 (D.C.Cir.1972) (per curiam) (officer used binoculars to observe defendant approach car and hand small slips of white paper and money to a man seated in the car), cert. denied, 410 U.S. 957, 93 S.Ct. 1431, 35 L.Ed.2d 291 (1973); United States v. Grimes, 426 F.2d 706 (5th Cir. 1970) (per curiam) (officer positioned 50 yards from defendant used binoculars to watch defendant load large cardboard boxes into car). But cf. United States v. Kim, 415 F.Supp. 1252 (D.Haw.1976) (intrusion into private home by FBI agents using 800 millimeter telescope at a location a quarter of a mile from the surveillance site held an unreasonable search) 5 Here, in contrast to Arkansas v. Sanders, supra, the officer did not merely suspect the presence of contraband, he had seen the packets in defendants' possession
{ "pile_set_name": "FreeLaw" }
Nasr ibn Sayyar Naṣr ibn Sayyār al-Lāythi al-Kināni (; 663–748) was an Arab general and the last Umayyad governor of Khurasan in 738–748. Nasr played a distinguished role in the wars against the Turgesh, although he failed to decisively confront the rebellion of al-Harith ibn Surayj in its early stages. Although respected as a soldier and a statesman, he owed his appointment as governor more to his obscure tribal background, which rendered him dependent on the Caliph. His tenure was nevertheless successful, as Nasr introduced long-overdue tax reforms that alleviated social tension and largely restored and stabilized Umayyad control in Transoxiana, which had been greatly reduced under the Turgesh onslaught. His last years were occupied by inter-tribal rivalries and uprisings, however, as the Caliphate itself descended into a period of civil war. In 746 Nasr was driven from his capital by Ibn Surayj and Juday al-Kirmani, but returned after the latter fell out among themselves, resulting in Ibn Surayj's death. Preoccupied with this conflict, Nasr was unable to stop the outbreak and spread of the Abbasid Revolution, whose leader, Abu Muslim, exploited the situation to his advantage. Evicted from his province in early 748, he fled to Persia pursued by the Abbasid forces, where he died on 9 December 748. Early life and career Nasr was a military leader with long service and experience in Khurasan. As early as 705 he participated in a campaign along the upper Oxus River, led by Salih, the brother of Qutayba ibn Muslim, the general who had been tasked with subduing Transoxiana. For his service during this campaign, Nasr was awarded an entire village in this region. Despite the successes of Qutayba, much of Central Asia east of the Oxus remained outside effective Arab control; while garrisons had been established in places like Samarkand, Balkh, or Bukhara, the Caliphate largely relied on cliental relationships with the multitude of local rulers, who became tributary to the Umayyads. In addition, clashes with the Chinese-backed Türgesh, the ambiguous policy followed regarding conversion of the native population (mass conversions would lessen the taxable population and hence the amount of tribute received) and increasing inter-Arab tribal factionalism weakened Umayyad control over the region and necessitated increased military activity. In 724, Nasr is recorded as heading a Mudari army sent against Balkh, where restive Yemenite troops refused to participate in the expedition against Ferghana that resulted in the disastrous "Day of Thirst". His troops, reinforced by men from the subject Hephthalite principality of Chaghaniyan, clashed with the Yemenis at Baruqan and prevailed over them. This led to resentment towards his person among the Yemenis, especially from those around Balkh; and during the governorship of the Yemeni Asad ibn Abdallah al-Qasri, along with other Mudari leaders, Nasr fell into disfavour and was mistreated. Nasr was one of the few Muslim leaders to distinguish himself in the disastrous Battle of the Defile in July 731. In 734 he was appointed as governor of Balkh, after arresting the previous governor. There he faced the rebellion of the local Khurasani troops under al-Harith ibn Surayj, who called for reforms in taxation and the ending of discrimination towards the native converts (mawali). Ibn Surayj marched on Balkh and took the city with only 4,000 followers, even though Nasr commanded 10,000 men. It is unclear from the sources whether the town was seized from Nasr, or whether it was captured in his absence and then successfully held against him. In any case, Nasr and his army remained passive for the remainder of the revolt; they did not aid the provincial capital, Merv, when the rebels attacked it, and this stance encouraged several local tribes to join the uprising. Eventually however the rebels were defeated by Juday al-Kirmani, with Ibn Surayj fleeing across the Oxus to the Türgesh. Appointment as governor of Khurasan In July 738, at the age of 74, Nasr was appointed as governor of Khurasan. Despite his age, he was widely respected both for his military record, his knowledge of the affairs of Khurasan and his abilities as a statesman. Julius Wellhausen wrote of him that "His age did not affect the freshness of his mind, as is testified not only by his deeds, but also by the verses in which he gave expression to his feelings till the very end of his life". However, in the climate of the times, his nomination owed more to his appropriate tribal affiliation than his personal qualities. From the early days of the Muslim conquests, Arab armies were divided into regiments drawn from individual tribes or tribal confederations (butun or ʿashaʿir). Despite the fact that many of these groupings were recent creations, created for reasons of military efficiency rather than any common ancestry, they soon developed a strong and distinct identity. Eventually, and certainly by the beginning of the Umayyad period, this system progressed to the formation of ever-larger super-groupings, culminating in the two super-groups: the northern Arab Mudaris or Qaysis, and the southern Arabs or "Yemenis" (Yaman), dominated by the Azd and Rabi'ah tribes. By the 8th century, this division had become firmly established across the Caliphate and was a source of constant internal instability, as the two groups formed in essence two rival political parties, jockeying for power and separated by a fierce hatred for each other. During Hisham ibn Abd al-Malik's reign, the Umayyad government appointed Mudaris as governors in Khurasan, except for Asad ibn Abdallah al-Qasri's tenure in 735–738. Nasr's appointment came four months after Asad's death. In the interim, the sources report variously that the province was run either by the Syrian general Ja'far ibn Hanzala al-Bahrani or by Asad's lieutenant Juday al-Kirmani. At any rate, the sources agree that al-Kirmani stood at the time as the most prominent man in Khurasan and should have been the clear choice for governor. His Yemeni roots (he was the leader of the Azd in Khurasan), however, made him unpalatable to the Caliph. Nasr on the other hand, in addition to his other qualities, was a Mudari and married to a Tamimi wife. He would therefore be acceptable to the numerous Mudari element of the Khurasani army, which outnumbered the Yemenis, but could also, as a local, help to reduce the Khurasani Arabs' discontent towards the Syria-centric Umayyad government. Nasr's own relatively obscure tribal background—from a non-noble family of the Layth tribe from Kinanah—also suited the Caliph's purposes, as it meant that he lacked any local power base of his own. Indeed, Nasr's rule throughout his tenure was not fully accepted by many Arab tribesmen: aside from the Yemenis, who favoured their "own" candidate al-Kirmani and resented the shift in power back towards the Mudaris, the Qays around Nishapur refused to support him, and even the Syrian contingent sided with his opponents. Nasr was hence mostly reliant on the support of his wife's powerful Tamim tribe living around Marv. As long as he was supported by a strong central government in Damascus, Nasr was able to keep his internal enemies in check, but in the troubles that followed Hisham's death in 743, that support vanished. In the event, Nasr would succeed in retaining his office for a decade, despite the turmoil that swept the Caliphate after 743. When Yazid III came to power in early 744, he initially ordered Nasr replaced. Nasr refused to accept this, and held on to the post, being eventually confirmed to it a few months later. After Marwan II's rise to power in December 744, he likewise affirmed Nasr's position. Reforms and campaigns Nasr gave his province an unprecedented period of good government, stability and prosperity, so that, in the words of the 9th-century historian al-Mada'ini, "Khurasan was built up as it had never been before". His major achievements during his tenure were the reform of the tax system and the restoration of Umayyad control over Transoxiana. The Khurasani tax system had been established at the time of the Muslim conquest and remained unchanged since. It relied on the collection of a fixed tribute by the local non-Muslim (mostly Zoroastrian) gentry, the dihqans, who often discriminated against the Muslim settlers and the native converts. This contributed to the latter's increasing resentment of Umayyad rule, and the demand for a tax reform had fuelled past revolts like that of Ibn Surayj. Consequently, Nasr streamlined the tax system in 739, implementing a blanket imposition (the kharaj) on all owners of agricultural land and forcing the non-Muslims to pay an additional poll tax (the jizyah). In this way, the chroniclers report, 30,000 Muslims were absolved of the jizyah, and 80,000 non-Muslims were forced to pay it instead. Attention was also paid to the accurate collection of the kharaj in accordance with treaties with the local rulers, as a result of which the tax burden was generally eased. This reform is traditionally held to have assisted in regaining the loyalty of the local populations and their princes, who returned quickly to the Arab fold. Other modern scholars however consider the effect of this belated reform on the prevailing anti-Umayyad climate as minimal. Upon his appointment, Nasr also moved the provincial capital back to Merv from Balkh, where Asad had established it. Additionally, for the first time in the province's history he appointed sub-governors. They were drawn from among his allies and supporters in order to reward them and to improve his own control of the province. Taking advantage of the disintegration of the Türgesh khaganate after the murder of the khagan Suluk, Nasr moved aggressively across the Oxus. His first campaign, immediately after his appointment, was in the area of Chaghaniyan; his second campaign, in 740, recovered much territory in Sogdia, including Samarkand, with little apparent resistance. Aiming to recover all the lands previously conquered under Qutayba ibn Muslim and to curtail the activities of the renegade Ibn Surayj, who was based there, Nasr then launched an expedition targeting al-Shash (Tashkent). The principality of Usrushana submitted peacefully, but when the Muslim army reached the Jaxartes, it was confronted by a 15,000-strong force from Shash along with Ibn Surayj's men and some Türgesh; according to Arab tradition, the latter were led by Suluk's murderer and successor, Kursul. According to the Arab sources, Nasr was able to drive off the Türgesh and scored a victory against one of their detachments, killing its chief. He apparently failed to subdue al-Shash, for he was forced to content himself with an agreement with the ruler of Shash, whereby Ibn Surayj was evicted to Farab, where the latter was left unmolested to continue his opposition to the Umayyads. Nasr also launched two expeditions against Ferghana, which plundered and ravaged the countryside and took many captives. It seems, however, that the Muslim reconquest at this time did not extend much further than Samarkand, with occasional tribute being possibly levied from the remoter principalities. Outwardly at least, by 743 the Umayyad position in Khurasan appeared stronger than ever. The reality beneath the splendid façade however was different. Tension and mutual mistrust existed between the Khurasani Arab levies (muqatila) and the 20,000 Syrian troops introduced into the province as a security measure after the disastrous Battle of the Defile in 731, while tribal antagonisms continued to create trouble: apart from continued Yemeni resentment at Nasr, there was strong dislike of the Umayyads' Syrian regime, fanned by their unjust tax policies. Although Nasr tried to remedy the situation, it was too late. In addition, Khurasan was a major center of early Shiism, and specifically of the Kaysanite sect of the Hashimiyya, which had gained wide acceptance in the province, especially among the mawali. In 742–743, Nasr confronted and defeated a revolt led by Yahya, son of Zayd ibn Ali and the leader of the Hashimiyya in Khurasan. Yahya was captured and executed, and the resulting vacuum in Hashimi leadership opened the path for the Khurasani branch of the movement to come under the control of the Abbasid family. It is however, a testament to the "respect and even affection" (Gibb) with which Nasr was regarded by the native population in Transoxiana, that in contrast to Khurasan no native city there welcomed the Hashimi missionaries, and that they remained loyal to him even during the later Abbasid Revolution. Civil wars and the Abbasid Revolution In 743, after the death of Caliph Hisham, his successor Walid II reconfirmed Nasr in his post, but the influential governor of Iraq, Yusuf ibn Umar al-Thaqafi, an opponent of Nasr, tried to lure him away from his province by calling him to Iraq. Nasr delayed his departure, stalling for time, and was saved by the murder of Walid in April 744. Walid's successor, Yazid III, moved to install a regime dominated by the Yemeni Kalb tribe. Nasr's position was severely undermined, and the Yemeni faction now hoped to see their leader Juday al-Kirmani appointed governor in his stead. Indeed, Yazid appointed his favourite, the Kalbi Mansur ibn Jumhur, as governor of Iraq, and he in turn nominated his own brother as Nasr's replacement. Nasr refused to accept this, and was again fortunate in his persistence, for Mansur fell out of favour and was dismissed after only two months. Agitation among the Yemeni faction persisted, amidst rumours that Nasr had intercepted letters appointing al-Kirmani as governor, and a dispute on the payment of stipends to the muqatila. Nasr tried to secure his own position by deposing al-Kirmani from his leadership of the Azd, as well as by trying to win over Azd and Rabi'ah leaders. This led to a general uprising by the Azd and Rabi'ah under al-Kirmani. It is indicative of the lingering inter-tribal antagonism of the late Umayyad world that the rebellion was launched in the name of revenge for the Muhallabids, an Azd family that had been purged after rebelling in 720—an act which had since become a symbol of Yemeni resentment of the Umayyads and their northern Arab-dominated regime. On 13 July 744, Nasr captured and imprisoned al-Kirmani. After barely a month, the latter escaped, and his rebellion was joined not only by Azd soldiers, but also by many of the Arab settlers around Marv. A tentative truce was initially agreed upon, during which fruitless negotiations were conducted, but after Yazid reconfirmed Nasr in his post, al-Kirmani and the Yemenis—in reality, al-Kirmani's followers included other tribes as well, including most of the Syrians and even some Mudaris, but they were collectively called Yamaniyya in the sources—resumed their revolt. Nasr in turn tried to strengthen his own position by enlisting the services of al-Harith ibn Surayj, al-Kirmani's one-time adversary, who enjoyed considerable support among some Arab tribes and especially his fellow Tamimis. When Ibn Surayj arrived at Merv in July 745 he was enthusiastically received by the town's inhabitants. Scorning Nasr's proposals for cooperation, Ibn Surayj soon withdrew to the countryside and rose in rebellion as well. Ibn Surayj was also able to exploit the unpopularity of Marwan II among the Mudaris and Nasr's followers, even though Nasr recognized him as the legitimate Caliph in exchange for his own confirmation to his post. Exploiting this resentment, Ibn Surayj soon gathered around him an army of over 3,000 men. In March 746 Ibn Surayj's army attacked Marv, but was repulsed with many casualties, and he then made common cause with al-Kirmani—of whose activities between his escape in 744 and this point nothing is known. With Marwan II still trying to consolidate his own position in Syria and Mesopotamia, Nasr was bereft of any hopes of reinforcement, and the allied armies of Ibn Surayj and al-Kirmani drove him out of Merv towards the end of 746. Nasr retreated to Nishapur, but within days al-Kirmani and Ibn Surayj fell out among themselves and clashed, resulting in the death of Ibn Surayj. Al-Kirmani then destroyed the Tamimi quarters in the city, a shocking act, as dwellings were traditionally considered exempt from warfare in Arab culture. As a result, the Mudari tribes, hitherto reserved towards Nasr, now came over to him. Backed by them, especially the Qays settled around Nishapur, Nasr now resolved to take back the capital. During summer 747, Nasr's and al-Kirmani's armies confronted each other before the walls of Marv, occupying two fortified camps and skirmishing with each other for several months. The fighting stopped only when news came of the start of the Hashimi uprising under Abu Muslim. Negotiations commenced, but were almost broken off when a member of Nasr's entourage, an embittered son of Ibn Surayj, attacked and killed al-Kirmani. Calmer heads prevailed for the moment, the two sides were able to tentatively settle their differences, and Nasr re-occupied his seat in Marv. Tensions however remained and Abu Muslim soon managed to persuade al-Kirmani's son and successor Ali that Nasr had been involved in his father's murder. As a result, both Ali al-Kirmani and Nasr separately appealed for aid against each other to Abu Muslim, who now held the balance of power. The latter eventually chose to support al-Kirmani. On 14 February 748, the Hashimi army occupied Marv, and Nasr again had to flee the city. Pursued by the Hashimi forces under Qahtaba ibn Shabib al-Ta'i, Nasr was forced to abandon Nishapur too after his son Tamim was defeated at Tus, and retreat to the region of Qumis, on the western borderlands of Khurasan. At this point, the long-awaited reinforcements from the Caliph arrived, but their general and Nasr failed to coordinate their movements, and Qahtaba was able to defeat the Caliph's army at Rayy and kill its commander. Nasr was now forced to abandon Qumis and flee towards Hamadan. On the way, in the town of Sawa, he fell ill and died on 9 December, at the age of 85. His grandson, Rafi ibn al-Layth, led a large-scale rebellion against the misgovernment of the Abbasid governor Ali ibn Isa ibn Mahan in 807–810, which spread across Khurasan and Transoxiana. References Sources Category:663 births Category:748 deaths Category:Umayyad governors of Khurasan Category:Generals of the Umayyad Caliphate Category:Muslim conquest of Transoxiana Category:Arab generals Category:8th-century Arabs Category:Abbasid Revolution
{ "pile_set_name": "Wikipedia (en)" }
21st Infantry Regiment (Thailand) The 21st Infantry Regiment, Queen Sirikit's Guard () (ร.21 รอ.) is a King's Guard regiment under the 2nd Infantry Division, Queen Sirikit's Guard of the Royal Thai Army. The regiment was created in 1950. It is known as the Queen's Guard or Thahan Suea Rachini (, translated as "Queen's Tiger Soldiers"). It is sometimes referred to as the "Eastern Tigers". The regiment is based in Chonburi. Origins The 21st Regiment of the Royal Thai Army, or the Queen's Guard, was formed on 22 September 1950 at the request of United Nations Command. Its purpose was to help the US-led UN troops fight the Korean People's Army and the Chinese People's Volunteers in the Korean War. Campaigns Korean War service Called "Little Tigers". Voluntary service in the Vietnam War in 1968-1969 Called the "Queen's Cobra". Suppressed communist terrorists and helped civilians in Nan Province in 1975. Received the Order of Rama for stopping Vietnamese border incursions on the Thai-Cambodian border in 1983. Organization The regiment is composed of three subordinate units: the 1st, 2nd, and 3rd Infantry Battalions. 1st Infantry Battalion, 21st Infantry Regiment, Queen's Guard 2nd Infantry Battalion, 21st Infantry Regiment, Queen's Guard 3rd Infantry Battalion, 21st Infantry Regiment, Queen's Guard Uniform Rajchawanlop hat with black tuft with the royal cypher of the queen. Purple woolen top with black woolen mane embroidered with the queen's cypher on the wrist. Black woolen trousers with two purple stripes per side. Training Selection Trainee must be serve in the 21st Regiment Queen's Guard or be permitted by the Royal Thai Army to attend the training. Training content The Queen's Tigers run a training course every two years. Its duration is 16 weeks. Physical and mental conditioning in preparation for the next phase. This phase takes four weeks. Only those who passing this phase move to the next phase. Forest and mountain training (four weeks): This phase focuses on infiltration by air and ground. Small unit tactics. Guerrilla warfare tactics. Sea phase (three weeks): Water infiltration and tactical diving. Coastal patrolling, amphibious warfare, living off the sea, parachuting into water. Urban phase (three weeks): Urban operations, anti-terrorist ops, hostage rescue, tactical us of motorbikes. Air phase (two weeks): Parachuting, parachute packing and problem solving. Award for completion Those who successfully complete the tiger training course receive a military capabilities plate from the queen. The metal plate is decorated with a purple heart and the queen's cypher. The lower part is a blue ribbon contain the honorific "Tiger Soldier". To both sides of the purple heart are tigers soaring above mountains, waves, and clouds. Political influence In the 1990s, according to one academic, "...the Eastern Tigers amassed considerable wealth by trading gems with Cambodian Khmer Rouge insurgents based along the two countries' border, a racket which 'directly benefited'... some of its commanders. Within a decade, the Eastern Tigers dominated the Thai military." The Queen's Guard have since had an inordinate influence on Thai politics. Former Queen's Guard commanders led the May 2014 Thai coup d'état that toppled the elected government. References External links Official Website of the 1st Infantry Battalion, 21st Infantry Regiment, Queen's Guard Category:King's Guard units of Thailand Category:Military units and formations established in 1950 Category:1950 establishments in Thailand
{ "pile_set_name": "Wikipedia (en)" }
Your Source for Everything RIEGL Main Menu Performance Considerations for Small-Footprint Topobathymetric LiDAR Amar Nayegandhi, Manager of Elevation Technologies at Dewberry, posted this article earlier in the week about performance considerations for small-footprint topobathymetric LIDAR and their use of the RIEGL VQ-820-G airborne laser scanner. Enjoy! A new suite of commercial small-footprint, green-wavelength airborne LiDAR systems are being developed to enable topobathymetric mapping in coastal and riverine environments. These sensors can provide seamless topography across the land-water interface at very high spatial resolution (five to six points per square meter). The Role of Water Clarity in Mapping Submerged TopographyWater clarity plays a vital role in the ability of topobathymetric systems to map submerged topography. Compared to traditional bathymetric LiDAR systems, topobathymetric LiDAR uses a low-power laser pulse, resulting in a depth performance between one and two Secchi depth. Traditional bathymetric LiDAR sensors offer up to three Secchi depths, but with a footprint 20 times wider than topobathymetric LiDAR. Small-footprint topobathymetric LiDAR sensors can map submerged topography between 20 to 25 meters in clear water with high reflective bottom (such as sand), but may only map up to two meters in turbid waters. Mapping Topobathymetry in Various Riverine EnvironmentsIn collaboration with Watershed Sciences, Inc., we used the Riegl VQ-820-G sensor to collect and process topobathymetric data in Sandy River for the Oregon Department of Geology and Mineral Studies. We mapped channel and floodplain morphology and evaluated the effectiveness of new topobathymetric LiDAR technology in a riverine environment. The results showed that in more than 83 percent of the channel (a “high confidence” area), bathymetric point density averaged two points per square meter, with water depths ranging from zero to three meters. The remaining 17 percent of the channel (a “low confidence” area) contained water deeper than three meters. In the high confidence area, we compared the LiDAR measurements with 303 channel points acquired using GPS-based techniques along channel cross sections. The bathymetric accuracy was assessed at 18.4 centimeters RMSE. These results suggest that topobathymetric LiDAR is a viable solution to mapping channel and floodplain morphology at Sandy River for ongoing monitoring studies to understand the impacts of the 2007 Marmot Dam removal on downstream morphology and fish habitat. The image of the left shows the mouth of the Sandy River flowing into the Columbia River. Using the Riegl VQ-820-G sensor, we obtained a seamless topobathymetric Digital Elevation Model (DEM) of the same area—water depths ranging from zero to three meters. Commercializing Small-Footprint Topobathymetric LiDARThe commercialization of small-footprint topobathymetric LiDAR has opened the possibility of high-resolution seamless topography and bathymetry in coastal and riverine environments. The applications of these data are endless and there is a lot of excitement within the geospatial community on the use of this technology. However, it’s important to understand the technology’s limitations and the conditions that will enable a successful survey. Water clarity and bottom reflectivity play a very important role. Knowledge of the LiDAR sensor and production process is crucial to a successful topobathy dataset. At Dewberry, we’re at the forefront of this new commercial technology with successful completion of three recent topobathymetric projects, such as Sandy River, using the Riegl VQ-820-G sensor.
{ "pile_set_name": "Pile-CC" }
const func1 = function() {}; const object = { func2: function() {} }; console.log(func1.name); // expected output: "func1" console.log(object.func2.name); // expected output: "func2"
{ "pile_set_name": "Github" }
20 Kan. App. 2d 361 (1995) ERROL JOE KAMPSCHROEDER, Appellee, v. NORMA W. KAMPSCHROEDER and SHERRYL HOLMES, Appellants. No. 71,720 Court of Appeals of Kansas. Opinion filed January 6, 1995. Gerald L. Cooley, John M. Cooley, and Randall F. Larkin, of Allen, Cooley & Allen, of Lawrence, for appellant Norma W. Kampschroeder. Stephen M. Fletcher, of Overland Park, for appellant Sherryl Holmes. Byron E. Springer, of Barber, Emerson, Springer, Zinn & Murray, L.C., of Lawrence, for appellee. Before GERNON, P.J., ELLIOTT and LEWIS, JJ. LEWIS, J.: Errol Joe Kampschroeder was born to the marriage of Robert and Waneta Kampschroeder. Waneta died in April *362 1980, and Robert married Norma in October 1980. The marriage was not accepted well by Errol Joe and appears to have affected the relationship between the parties from that point on. Robert and Norma remained married until Robert's death in 1990. Upon Robert's death, most of his and Norma's assets were held in joint tenancy with the right of survivorship. Norma placed these assets in her own name and the name of Sherryl Holmes, her daughter. Errol Joe commenced the present action to impose a constructive trust on the jointly held assets. The trial court held in favor of Errol Joe, and Norma and Sherryl appeal. We affirm the decision of the trial court. Litigation of this nature is particularly fact driven. The facts in this case are not, unfortunately, unusual. This lawsuit is between a stepson and his stepmother over property owned by the son's father and stepmother's husband at the time of his death. There was an extensive trial, and the trial court made 32 detailed findings of fact. We have reviewed the record and conclude that all of the trial court's findings of fact are supported by substantial competent evidence. After hearing all the evidence, the trial court held that Norma and Robert agreed, for the convenience of the parties, to hold most of their assets in joint tenancy. This was to allow the properties accumulated by both parties or brought into the marriage by both parties to become the property of their heirs after their death. They intended that "the properties of Robert go to Errol and the properties of Norma go to Sherryl." Although we concede that a different spin might have been put on the evidence, the analysis adopted by the trial court is substantially supported by the record. The trial court found five significant factors in reaching its conclusions: "a. The Antenuptial Agreement showed their original intentions to keep their property separate. "b. Robert's attitude toward Sherryl's son was emphatic that he not receive any of Robert's property and was certainly corroborative of their intent that the properties of Robert go to Errol, and the properties of Norma go to Sherryl. "c. Clearly, the taped conversation of Norma and Nancy corroborates the testimony and position of the Plaintiff. Norma's testimony that she wanted to *363 be fair did not refer to her deciding whether commingled property should be separated because that had already been decided by the parties. That was clear by their intent as indicated on the taped conversation. When Norma indicated she wanted to be fair it is clear from the testimony she was overwhelmed by the process of having to separate the property out, of deciding just what was hers and what was Robert's, and thus would be Errol's. "d. Robert's comment: `Make certain that Norma will be cared for' is not the language or the statement of a man who was leaving his entire estate of some worth to his wife. The fact that he wanted to make certain Norma was cared for indicated to me on his part a confusion as to what the wills would be. "e. Norma's comment: `This will is no good,' certainly again corroborates the testimony or the position that this was — indeed, the intentions of the parties was to make certain that what was Robert's went to Errol, and what was Norma's went to Sherryl." Once again, the analysis of the trial court is well within the evidence shown. The five factors cited by the trial court are clearly supported by substantial competent evidence. In the final analysis, the trial court concluded that the parties had entered into an understanding where each was to have the use of the income from the property of the other until their death, at which time the property would go to their respective children. This understanding formed the basis for the consideration of the agreement. The trial court went on to conclude: "Plaintiff has by clear and convincing standards shown that there was an agreement entered into, and, in fact, always understood by Norma and Robert, that upon the death of the first to die, the income from the property brought into the marriage by that person would be enjoyed by the surviving spouse, and then pass on to the children of Norma or Robert, depending upon the situation." This conclusion is consistent with the trial court's findings of fact. Norma had breached this understanding, which gave rise to the constructive trust imposed. The trial court went on to determine which assets were subjected to the constructive trust. The total value of those assets is $323,233.11. The constructive trust is such that Norma is to receive the income from these assets until her death, at which time they are to be paid to Errol Joe. In appellants' brief is the following statement: "While defendants admit that the trial court's findings of fact are supported by substantial competent evidence in the record, defendants deny *364 that those findings of fact support the trial court's conclusions of law or its judgment." During oral argument before this court, counsel for Norma conceded that the trial court's findings of fact were supported by substantial competent evidence. On the other hand, counsel for Sherryl was unwilling to make such a concession. The problem with Sherryl's position is that her attorney did not file a separate brief. He joined in a single brief filed by the attorney for Norma. Sherryl is not in a position to contradict admissions made in the brief filed. However, we have examined the record, and we conclude that the findings of fact are supported by substantial competent evidence. An oral trust must be proved by clear and convincing evidence. Wehking v. Wehking, 213 Kan. 551, 554, 516 P.2d 1018 (1973). Upon review, we operate under the assumption that the trial court applied the correct standard of proof and was satisfied with the quantum of evidence introduced. A constructive trust arises "`wherever the circumstances under which the property was acquired make it inequitable that it should be retained by the person who holds the legal title.'" Hile v. DeVries, 17 Kan. App.2d 373, 374, 836 P.2d 1219 (1992) (quoting Clester v. Clester, 90 Kan. 638, 642, 135 Pac. 996 [1914]). An essential element of proving a constructive trust is a showing of fraud. However, there are two types of fraud, actual and constructive. "Actual fraud is an intentional fraud, and the intent to deceive is an essential element of the action. Constructive fraud, however, is a breach of a legal or equitable duty which, irrespective of moral guilt, the law declares fraudulent because of its tendency to deceive others or violate a confidence, and neither actual dishonesty of purpose or intent to deceive is necessary. [Citation omitted.]" Moore v. State Bank of Burden, 240 Kan. 382, 389, 729 P.2d 1205 (1986), cert. denied 482 U.S. 906 (1987). In the context in which this issue is presented, we are not dealing with actual dishonesty of purpose or intent to deceive. The evidence indicates Norma was guilty of a breach of duty amounting to constructive fraud. Absent actual fraud, there are two additional elements which are required to be proven. First, there must be a confidential *365 relationship. Secondly, the confidence reposed must be betrayed, or a duty imposed by the relationship must be breached. See Winsor v. Powell, 209 Kan. 292, 302-03, 497 P.2d 292 (1972). A confidential relationship is not presumed, and the burden of proving such a relationship existed rests upon the party asserting its existence. Paul v. Smith, 191 Kan. 163, Syl. ¶ 4, 380 P.2d 421 (1963). The mere fact that a transfer of property occurs between a husband and wife and no valuable consideration passes is not sufficient to raise a trust by implication. Clester v. Clester, 90 Kan. 638, 641, 135 Pac. 996 (1914). Under the facts shown, Errol Joe seeks to impress a trust on property which Norma owns by virtue of a joint tenancy contract with Robert. There is no question but that the property held in joint tenancy may be the subject of a trust. Wehking v. Wehking, 213 Kan. 551, Syl. ¶ 2; Winsor v. Powell, 209 Kan. at 300. The facts of this case are strikingly similar to those in Winsor v. Powell. In that action, the decedent, when discussing his affairs, spoke of his daughter, Sarah, and said, "`She'll do the right thing.'" 209 Kan. at 301. In this action, Robert told Errol Joe that he had $350,000, that Norma would be fair, and that Errol Joe could trust her. Robert told Errol Joe that Norma was to get the interest and, upon her death, Errol Joe was to get the principal. In addition, Norma acknowledged to Errol Joe's wife the necessity of her separating Robert's assets from her own. These facts in Winsor v. Powell were held sufficient to raise a constructive trust, and they are equally sufficient in this action. Norma and Sherryl argue that the agreement found by the court was not proven by clear and convincing evidence. "To be clear and satisfactory, evidence should be `clear' in the sense that it is certain, plain to the understanding, and unambiguous, and `satisfactory' in the sense that it is so believable that persons of ordinary intelligence, discretion, and caution may have confidence in it. Clear and satisfactory evidence is not a quantum of proof, but a quality of proof." Barbara Oil Co. v. Kansas Gas Supply Corp., 250 Kan. 438, Syl. ¶ 7, 827 P.2d 24 (1992). Norma and Sherryl suggest that there was no direct evidence of an agreement between Robert and Norma. However, we note that in the recorded conversation between Norma and Errol Joe's *366 wife, Norma acknowledges the existence of some understanding between her and Robert and indicates that in order to carry out that understanding, she must separate Robert's assets from her own. We consider this to be direct evidence of the existence of an agreement. Indeed, circumstantial evidence may be used to prove the existence of an agreement. Staab v. Staab, 160 Kan. 417, 419, 163 P.2d 418 (1945). Earlier in this opinion, we enumerated the five significant factors relied on by the court in reaching its conclusion. Norma and Sherryl argue that these factors do not show by clear and convincing standards that an agreement existed. We do not review for the quantum of evidence, but rather the quality. "On review, this court considers only the evidence of the successful party to determine whether it is substantial and whether it is of a clear and convincing quality. See Newell v. Krause, 239 Kan. 550, 557, 722 P.2d 530 (1986)." Barbara Oil Co. v. Kansas Gas Supply Corp., 250 Kan. at 448. As we review the evidence in light of our standard of review, we conclude that each of the five factors relied upon by the trial court is supported by evidence of a clear and convincing quality. In the final analysis, this was a factual situation. The facts were resolved in favor of Errol Joe, and we will not engage in factfinding or substitute our judgment on that issue. The element of a confidential relationship is shown by the evidence. Under the trial court's construction of the facts, Robert and Norma entered into an agreement in which each relied on the survivor to see that the assets were properly distributed. Robert placed trust and confidence in Norma to see that Errol Joe received the proper distribution of assets, and it would be inequitable to permit her to disregard the terms of that agreement. Finally, it is suggested that even if there was an agreement and a confidential relationship, Norma did not breach either. The argument is that under the terms of the agreement, Norma was to enjoy the income for her lifetime, and only upon her death was the principal to pass to Errol Joe. It then follows that there cannot be a breach of fiduciary duty or a betrayal of confidence unless and until Norma dies without the necessary provisions in her will. *367 While this argument may have some logical basis, it ignores the realities of the situation. After Robert's death, some of the assets were placed in joint tenancy with Norma's daughter, Sherryl. This was obviously done with the intent that upon Norma's death, these assets would pass to Sherryl. In addition, Norma now denies that any agreement existed and testified, "I never made any commitment to Bob." These facts point to a breach of the agreement by Norma. In summary, the findings of the trial court were supported by substantial competent evidence and the conclusions of law are consistent with and supported by the findings of fact. EXHIBITS 6 AND 14 THROUGH 20 Norma and Sherryl next argue that the trial court erred in admitting into evidence plaintiff's exhibit 6 and plaintiff's exhibits 14 through 20. This argument is principally based upon the premise that an inadequate foundation was shown. The trial court is possessed of discretion when ruling on admissibility of evidence. An attack on an evidentiary ruling requires that the party attacking that ruling show that the trial court abused its discretion. An abuse of discretion exists only when no reasonable person would take the view adopted by the trial court. St. Francis Regional Med. Center, Inc. v. Weiss, 254 Kan. 728, 748, 869 P.2d 606 (1994). K.S.A. 60-407(f) provides that all relevant evidence is admissible unless otherwise provided by statute. Relevant evidence is evidence having "any tendency in reason to prove any material fact." K.S.A. 60-401(b). "It is axiomatic that a foundation must be laid establishing the competency, materiality and relevancy of all evidence prior to admission." Cansler v. Harrington, 231 Kan. 66, 69, 643 P.2d 110 (1982). We conclude that the trial court did not err in admitting the exhibits in question. Exhibit 6 was a photocopy of the schedule "E" of Robert's estate tax return. This exhibit listed all of Robert's jointly held property. In addition to schedule "E," the exhibit contains a listing of separate assets held by Norma at Robert's death. The separate property was identified by Norma on direct *368 examination. We conclude this exhibit was clearly relevant and material and that a proper foundation was laid. Exhibits 14 through 20 consisted of financial records which traced the assets from the time Robert and Norma were married until Robert's death. These exhibits were clearly relevant. One of the principal issues in this action was to identify which assets originated as Robert's separate property and which assets were accumulated during the marriage. Exhibits 14 through 20 were relevant on that issue. Norma and Sherryl also argue about the authenticity of the records. They suggest that these exhibits were admitted without proper foundation, identification, or indicia of trustworthiness. The principal problem with this particular argument is that the parties stipulated as to the authenticity of the records prior to trial. We see no need to describe with particularity the evidence purported to be shown by each exhibit. It seems to us that one of the principal issues in the admission of evidence of this sort is its authenticity. The parties stipulated as to the authenticity of those records, and we find no error on the part of the trial court in admitting exhibits 6 and 14 through 20. JUDGMENT AGAINST SHERRYL HOLMES Sherryl takes issue with the trial court's finding of fact No. 32. This finding identifies assets which were brought into the marriage by Robert and later transferred by Norma into joint tenancy between herself and Sherryl. Sherryl argues that this finding of fact is not supported by substantial competent evidence. We disagree and have previously indicated our decision that all of the trial court's findings of fact were supported by substantial competent evidence. Our earlier comments are also relevant concerning the position of Sherryl in arguing that the findings of fact were not supported by substantial competent evidence. Sherryl also argues that no findings of fact remain which would support the judgment entered against her. The trial court does not suggest that Sherryl was culpable in procuring the transfers to her mother and herself as joint tenants. *369 Culpability is not the issue. The stark fact is that Sherryl is a joint tenant on a substantial amount of assets on which the trial court has imposed a constructive trust. "If the trustee in breach of trust transfers trust property and no value is given for the transfer, the transferee does not hold the property free of the trust, although he had no notice of the trust." Kline v. Orebaugh, 214 Kan. 207, Syl. ¶ 6, 519 P.2d 691 (1974). The fact that Sherryl did not procure the transfer of the property does not entitle her to hold it free of trust nor warrant a conclusion that the judgment against her is invalid. Norma testified that she wanted Sherryl to have access to the joint tenancy accounts in case they were needed to take care of Norma. In addition, Norma testified that she intended Sherryl to get the accounts upon her death. We hold that the trial court did not err in entering judgment against Sherryl. The findings of fact made by the trial court support that judgment. Affirmed.
{ "pile_set_name": "FreeLaw" }
#include "api_config.h" #include "common.h" #include <iostream> #include <boost/property_tree/ptree.hpp> #include <boost/property_tree/ini_parser.hpp> #include <boost/algorithm/string.hpp> // === implementation of the api_config class === using namespace lsl; using namespace lslboost::algorithm; /// Helper function: Substitute the "~" character by the full home directory (according to environment variables). std::string expand_tilde(const std::string &filename) { if (!filename.empty() && filename[0] == '~') { std::string homedir; if (getenv("HOME")) homedir = getenv("HOME"); else if (getenv("USERPROFILE")) homedir = getenv("USERPROFILE"); else if (getenv("HOMEDRIVE") && getenv("HOMEPATH")) homedir = std::string(getenv("HOMEDRIVE")) + getenv("HOMEPATH"); else { std::cerr << "Cannot determine the user's home directory; config files in the home directory will not be discovered." << std::endl; return filename; } return homedir + filename.substr(1); } return filename; } /// Helper function: Parse a set specifier (a string of the form {a, b, c, ...}) into a vector of strings. static std::vector<std::string> parse_set(const std::string &setstr) { std::vector<std::string> result; if ((setstr.size() > 2) && setstr[0] == '{' && setstr[setstr.size()-1] == '}') { // non-empty set: split by "," std::string sub = setstr.substr(1,setstr.size()-2); lslboost::algorithm::split(result,sub,lslboost::algorithm::is_any_of(",")); // remove leading and trailing whitespace from each element for (std::vector<std::string>::iterator i=result.begin(); i!=result.end(); i++) trim(*i); } return result; } // Returns true if the file exists and is openable for reading bool file_is_readable(const std::string& filename) { std::ifstream f(filename.c_str()); return f.good(); } /** * Constructor. * Applies default settings and overrides them based on a config file (if present). */ api_config::api_config() { // for each config file location under consideration... std::string filenames[] = {"lsl_api.cfg", expand_tilde("~/lsl_api/lsl_api.cfg"), "/etc/lsl_api/lsl_api.cfg"}; for (std::size_t k=0; k < sizeof(filenames)/sizeof(filenames[0]); k++) { try { if (file_is_readable(filenames[k])) { // try to load it if the file exists load_from_file(filenames[k]); // successful: finished return; } } catch(std::exception &e) { std::cerr << "Error trying to load config file " << filenames[k] << ": " << e.what() << std::endl; } } // unsuccessful: load default settings load_from_file(); } /** * Load a configuration file (or use defaults if a filename is empty). * Expects a proper platform-native file name. Throws if there's an error. */ void api_config::load_from_file(const std::string &filename) { try { lslboost::property_tree::ptree pt; if (!filename.empty()) read_ini(filename, pt); // read out the [ports] parameters multicast_port_ = pt.get("ports.MulticastPort",16571); base_port_ = pt.get("ports.BasePort",16572); port_range_ = pt.get("ports.PortRange",32); allow_random_ports_ = pt.get("ports.AllowRandomPorts",true); #ifdef __APPLE__ ipv6_ = pt.get("ports.IPv6","disable"); // on Mac OS (10.7) there's a bug in the IPv6 implementation that breaks LSL when it tries to use both v4 and v6 #else ipv6_ = pt.get("ports.IPv6","allow"); #endif // fix some common mis-spellings if (ipv6_ == "disabled") ipv6_ = "disable"; if (ipv6_ == "allowed") ipv6_ = "allow"; if (ipv6_ == "forced") ipv6_ = "force"; if (ipv6_ != "disable" && ipv6_ != "allow" && ipv6_ != "force") throw std::runtime_error("Unsupported setting for the IPv6 parameter."); // read the [multicast] parameters resolve_scope_ = pt.get("multicast.ResolveScope","site"); listen_address_ = pt.get("multicast.ListenAddress",""); std::vector<std::string> machine_group = parse_set(pt.get("multicast.MachineAddresses","{127.0.0.1, FF31:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2}")); std::vector<std::string> link_group = parse_set(pt.get("multicast.LinkAddresses","{255.255.255.255, 224.0.0.183, FF02:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2}")); std::vector<std::string> site_group = parse_set(pt.get("multicast.SiteAddresses","{239.255.172.215, FF05:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2}")); std::vector<std::string> organization_group = parse_set(pt.get("multicast.OrganizationAddresses","{239.192.172.215, FF08:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2}")); std::vector<std::string> global_group = parse_set(pt.get("multicast.GlobalAddresses","{}")); multicast_ttl_ = -1; // construct list of addresses & TTL according to the ResolveScope. if (resolve_scope_ == "machine") { multicast_addresses_ = machine_group; multicast_ttl_ = 0; } if (resolve_scope_ == "link") { multicast_addresses_ = machine_group; multicast_addresses_.insert(multicast_addresses_.end(),link_group.begin(),link_group.end()); multicast_ttl_ = 1; } if (resolve_scope_ == "site") { multicast_addresses_ = machine_group; multicast_addresses_.insert(multicast_addresses_.end(),link_group.begin(),link_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),site_group.begin(),site_group.end()); multicast_ttl_ = 24; } if (resolve_scope_ == "organization") { multicast_addresses_ = machine_group; multicast_addresses_.insert(multicast_addresses_.end(),link_group.begin(),link_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),site_group.begin(),site_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),organization_group.begin(),organization_group.end()); multicast_ttl_ = 32; } if (resolve_scope_ == "global") { multicast_addresses_ = machine_group; multicast_addresses_.insert(multicast_addresses_.end(),link_group.begin(),link_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),site_group.begin(),site_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),organization_group.begin(),organization_group.end()); multicast_addresses_.insert(multicast_addresses_.end(),global_group.begin(),global_group.end()); multicast_ttl_ = 255; } if (multicast_ttl_ == -1) throw std::runtime_error("This ResolveScope setting is unsupported."); // apply overrides, if any int ttl_override = pt.get("multicast.TTLOverride",-1); std::vector<std::string> address_override = parse_set(pt.get("multicast.AddressesOverride","{}")); if (ttl_override >= 0) multicast_ttl_ = ttl_override; if (!address_override.empty()) multicast_addresses_ = address_override; // read the [lab] settings known_peers_ = parse_set(pt.get("lab.KnownPeers","{}")); session_id_ = pt.get("lab.SessionID","default"); // read the [tuning] settings use_protocol_version_ = std::min(LSL_PROTOCOL_VERSION,pt.get("tuning.UseProtocolVersion",LSL_PROTOCOL_VERSION)); watchdog_check_interval_ = pt.get("tuning.WatchdogCheckInterval",15.0); watchdog_time_threshold_ = pt.get("tuning.WatchdogTimeThreshold",15.0); multicast_min_rtt_ = pt.get("tuning.MulticastMinRTT",0.5); multicast_max_rtt_ = pt.get("tuning.MulticastMaxRTT",3.0); unicast_min_rtt_ = pt.get("tuning.UnicastMinRTT",0.75); unicast_max_rtt_ = pt.get("tuning.UnicastMaxRTT",5.0); continuous_resolve_interval_ = pt.get("tuning.ContinuousResolveInterval",0.5); timer_resolution_ = pt.get("tuning.TimerResolution",1); max_cached_queries_ = pt.get("tuning.MaxCachedQueries",100); time_update_interval_ = pt.get("tuning.TimeUpdateInterval",2.0); time_update_minprobes_ = pt.get("tuning.TimeUpdateMinProbes",6); time_probe_count_ = pt.get("tuning.TimeProbeCount",8); time_probe_interval_ = pt.get("tuning.TimeProbeInterval",0.064); time_probe_max_rtt_ = pt.get("tuning.TimeProbeMaxRTT",0.128); outlet_buffer_reserve_ms_ = pt.get("tuning.OutletBufferReserveMs",5000); outlet_buffer_reserve_samples_ = pt.get("tuning.OutletBufferReserveSamples",128); inlet_buffer_reserve_ms_ = pt.get("tuning.InletBufferReserveMs",5000); inlet_buffer_reserve_samples_ = pt.get("tuning.InletBufferReserveSamples",128); smoothing_halftime_ = pt.get("tuning.SmoothingHalftime",90.0f); force_default_timestamps_ = pt.get("tuning.ForceDefaultTimestamps", false); } catch(std::exception &e) { std::cerr << "Error parsing config file " << filename << " (" << e.what() << "). Rolling back to defaults." << std::endl; // any error: assign defaults load_from_file(); // and rethrow throw e; } } /** * Instantiate / retrieve singleton. */ const api_config *api_config::get_instance() { lslboost::call_once(&called_once,once_flag); return get_instance_internal(); } api_config *api_config::get_instance_internal() { static api_config cfg; return &cfg; } void api_config::called_once() { get_instance_internal(); } lslboost::once_flag api_config::once_flag = BOOST_ONCE_INIT;
{ "pile_set_name": "Github" }
Goshen, New Hampshire Goshen is a town in Sullivan County, New Hampshire, United States. The population was 810 at the 2010 census. History Incorporated in 1791, Goshen was first settled in 1768 as a part of Saville (now Sunapee). The name Goshen may have been taken from Goshen, Connecticut, where many residents had relatives. Geography According to the United States Census Bureau, the town has a total area of , of which is land and is water, comprising 0.40% of the town. The long ridge of Mount Sunapee occupies the eastern edge of town. The highest point in Goshen is an unnamed knob on the ridge (near Goves Mountain) where the elevation reaches above sea level. Goshen lies almost fully within the Connecticut River watershed, though a small corner in the southeast of town is in the Merrimack River watershed. Adjacent municipalities Sunapee, New Hampshire (north) Newbury, New Hampshire (east) Washington, New Hampshire (south) Lempster, New Hampshire (southwest) Unity, New Hampshire (west) Newport, New Hampshire (northwest) Demographics As of the census of 2000, there were 741 people, 279 households, and 219 families residing in the town. The population density was 32.9 people per square mile (12.7/km²). There were 389 housing units at an average density of 17.3 per square mile (6.7/km²). The racial makeup of the town was 97.03% White, 1.62% Native American, 0.13% Asian, 0.13% from other races, and 1.08% from two or more races. Hispanic or Latino of any race were 0.40% of the population. There were 279 households out of which 33.3% had children under the age of 18 living with them, 64.5% were married couples living together, 9.7% had a female householder with no husband present, and 21.5% were non-families. 17.2% of all households were made up of individuals and 7.2% had someone living alone who was 65 years of age or older. The average household size was 2.63 and the average family size was 2.96. In the town, the population was spread out with 24.2% under the age of 18, 8.0% from 18 to 24, 27.5% from 25 to 44, 26.7% from 45 to 64, and 13.6% who were 65 years of age or older. The median age was 40 years. For every 100 females, there were 97.6 males. For every 100 females age 18 and over, there were 96.5 males. The median income for a household in the town was $42,625, and the median income for a family was $45,208. Males had a median income of $33,333 versus $22,727 for females. The per capita income for the town was $20,561. About 6.9% of families and 8.8% of the population were below the poverty line, including 10.8% of those under age 18 and 21.5% of those age 65 or over. Education Goshen and the neighboring town of Lempster maintained a combined elementary and middle school, called Goshen-Lempster Cooperative School, located in Lempster. The school served kindergarten through 8th grade. The cooperative was dissolved in June 2016. The majority of Goshen elementary and middle-school aged children now attend Newport, NH schools; the Newport school system now acts as the anchor system for Goshen students. After 8th grade, students are given the choice to attend several neighboring high schools, including Newport High School, Sunapee Senior High School, and Kearsarge Regional High School. Notable people John Williams Gunnison, US Army officer and explorer of the American West References External links Town of Goshen official website New Hampshire Economic and Labor Market Information Bureau Profile Sunapee-Ragged-Kearsarge Greenway Coalition Category:Towns in Sullivan County, New Hampshire Category:Towns in New Hampshire
{ "pile_set_name": "Wikipedia (en)" }
[Biotherapy of malignant peritoneal effusions in ovarian carcinoma]. Malignant peritoneal effusions often arise in patients with ovarian carcinoma. They are a hazardous complication of cancer. Systematic intraperitoneal chemotherapy is not necessarily followed by long-term remission and may even induce untoward side effects. Intraperitoneal interleukin-2 (IL-2) and IL-2/lymphokine-activated killers (LAK) biotherapy showed high efficacy in treatment of ovarian carcinoma patients suffering from peritoneal effusions. The objective effect was 80.1% and 82.6%, respectively. Our results suggest that intraperitoneal biotherapy may be extended to dealing with malignant peritoneal effusions in ovarian carcinoma.
{ "pile_set_name": "PubMed Abstracts" }
Q: Output of awk is not shown when used after a grep command I want to get CPU % of a process using pid from top command in Mac OS. When I use top | awk '{print $3}' I get the CPU % for all the processes. However, using top | grep 11568 | awk '{print $3}' returns nothing. The output of top | grep 11568 is 11568 java 0.0 09:48.45 663 2 1533+ 521M+ 0B 741M+ 11560 11560 sleeping *0[64+] 0.00000 0.00000 501 1335625+ 803+ 12376+ 4146+ 37032037+ 28783+ 3748122+ 514+ 112576 0.0 0 0 amar N/A N/A N/A N/A N/A N/A A: Haven't tested this command since I don't have mac os, could you please try following. your_command | awk -F' +' '/11568/{print $3}'
{ "pile_set_name": "StackExchange" }
A Finnair pilot arrested with more than $800,000 while on his Melbourne honeymoon thought the cash was his wife's, a court has heard. A lawyer for Finnish national Lauri Metsaranta, who is charged with dealing with property suspected to be the proceeds of crime, told the Melbourne Magistrates Court his client was just in the wrong place at the wrong time. In a police interview, Metsaranta said he thought the cash was his wealthy wife's money, "like someone owed her money or something", the court heard. Metsaranta said he did not know he and his common law wife, Changchen Chen, would be collecting the cash en route to their suite at the Hyatt in February, on the Australian leg of a trip that was "sort of a honeymoon". Chen has also been charged with dealing with property suspected to be the proceeds of crime. In the interview read to the court, Metsaranta said he had picked up instruments or other small things for his wife before, but the large sum of money was unusual. Advertisement Metsaranta had never met his wife's family, who live in China, but he knew they were very wealthy, the court heard. He and his wife had known each other about 18 months when they were arrested in Melbourne. He said when they collected the cash from the Mantra hotel he was not concerned about where it was coming from or where it was going, he was just hoping they wouldn not be robbed. "To me it felt a little grey area," Metsaranta told police. "I have to admit I was a little uncomfortable with that much money." Metsaranta said his wife had received a call after they landed in Melbourne, but he did not know what it was about because he could not speak Mandarin or Cantonese. They travelled to a second hotel "where a huge pile of money was". "All I see is the sports bag but I have no idea of the origins of the money," Metsaranta said. He said his wife dealt with an Asian man at the hotel, but he did not have a conversation with him and had never met him before.
{ "pile_set_name": "Pile-CC" }
Body energy metabolism is a finely tuned system dependent on the interactions of numerous endocrine axes. Insulin is a key regulator of metabolism and growth that maintains serum glucose and other metabolites within a narrow, predefined range. While it has been traditionally thought that insulin activity is controlled solel by its abundance in circulation, modulation of tissue insulin sensitivity has been described during [instances of environmental stress (e.g. infection/inflammation and pregnancy). It is well known that inflammation can induce insulin resistance in multiple tissues and that this response may be important for the body's response to infection.] Developing an understanding of how [inflammation regulates] tissue insulin responsiveness is important as it could provide novel insights into [pathologies] associated with tissue insulin resistance, such as metabolic syndrome and Type 2 Diabetes Mellitus. Several groups have shown that inflammation can induce insulin resistance through [direct] post-translational modification of components of the insulin-signaling cascade [by inflammation activated kinases]. The existing model suggests that resistance should develop relatively rapidly in response to inflammation, consistent with the kinetics of post-translational modifications. Paradoxically, the observed resistance to insulin requires prolonged treatment with inflammatory cytokines, suggesting that inflammation may [induce a transcriptional program that results in insulin resistance.] The goal of my Ph.D. thesis, and this proposal, is to investigate the mechanisms of [chronic inflammation induced] insulin resistance. Based upon the observed kinetics and our preliminary data, we hypothesize that chronic inflammation induces a [transcriptional program that leads to impaired tissue responsiveness to insulin. Specifically, we hypothesize that TNF? induces/suppresses previously uncharacterized protein(s) that regulate insulin signaling.] Since the liver coordinates whole body metabolism and is known to change its responsiveness to insulin signaling under inflammatory conditions, we propose to investigate the role of chronic inflammation in regulating hepatic insulin signaling using [in vivo and ex vivo models, specifically focusing on the effect of chronic TNF? treatment]. In the first aim, we will investigate the effects of [ex vivo and in vivo] chronic TNF? on [hepatocyte insulin-signaling and functional response including gluconeogenesis, glycogen synthesis, and lip genesis]. The second aim will address the [role of differential gene expression] induced by chronic inflammation and identify the [gene(s) and mechanism(s)] responsible for altering insulin signaling. Our studies represent a novel approach to understanding the mechanisms that regulate insulin signaling. Ultimately, this may lead to the development of novel therapeutic targets for metabolic syndrome and Type 2 Diabetes Mellitus. PUBLIC HEALTH RELEVANCE: Altered insulin signaling in the liver is a key component of many chronic diseases affecting the population, including metabolic syndrome and type 2 diabetes. Inflammation represents a common, reversible, non- disease state that is associated with altered hepatic insulin signaling. Understanding inflammatory regulation of insulin signaling will provide insight into the mechanisms regulating insulin signaling, inform the treatment of diseases that result from altered insulin signaling, as well as identify potentially novel therapeutic targets.
{ "pile_set_name": "NIH ExPorter" }
Mario Manningham paused from his own injury rehabilitation Thursday to rally 49ers teammates in the wake of Michael Crabtree's Achilles tear."It's sad to see somebody get hurt that's a great value to our team," Manningham told SiriusXM NFL Radio, "but the next person has to step up, man. We all know injuries are a part of the game."Manningham sustained a season-ending left knee injury Dec. 23 at Seattle, and he's been constantly rehabilitating from that anterior cruciate ligament tear."I have started and running cutting and doing little things," Manningham said. "When you have knee injuries, you can't really take any time off. Every time I think about it, I'm trying to do something with my knee. I'm not rushing it but I am going hard on my knee."Manningham, the 49ers' second-leading receiver last year, is not expected to be ready for the start of training camp in two months. The 49ers open defense of their NFC title Sept. 8 against the Green Bay Packers at Candlestick Park.Rather than announce a timetable for his return, Manningham said: "Whenever God wants me to come out and play, then when I'm 100 (percent), that's when I'm going to go out there."Crabtree is likely out for at least the next five months. The 49ers are currently without three of their top four receivers, with only Anquan Boldin fully healthy while Crabtree, Manningham and Kyle Williams (knee) rehabilitate their injuries.
{ "pile_set_name": "Pile-CC" }
var path = require('path'); var test = require('tape'); var resolve = require('../'); test('mock', function (t) { t.plan(8); var files = {}; files[path.resolve('/foo/bar/baz.js')] = 'beep'; function opts(basedir) { return { basedir: path.resolve(basedir), isFile: function (file, cb) { cb(null, Object.prototype.hasOwnProperty.call(files, path.resolve(file))); }, readFile: function (file, cb) { cb(null, files[path.resolve(file)]); } }; } resolve('./baz', opts('/foo/bar'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/bar/baz.js')); t.equal(pkg, undefined); }); resolve('./baz.js', opts('/foo/bar'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/bar/baz.js')); t.equal(pkg, undefined); }); resolve('baz', opts('/foo/bar'), function (err, res) { t.equal(err.message, "Cannot find module 'baz' from '" + path.resolve('/foo/bar') + "'"); t.equal(err.code, 'MODULE_NOT_FOUND'); }); resolve('../baz', opts('/foo/bar'), function (err, res) { t.equal(err.message, "Cannot find module '../baz' from '" + path.resolve('/foo/bar') + "'"); t.equal(err.code, 'MODULE_NOT_FOUND'); }); }); test('mock from package', function (t) { t.plan(8); var files = {}; files[path.resolve('/foo/bar/baz.js')] = 'beep'; function opts(basedir) { return { basedir: path.resolve(basedir), isFile: function (file, cb) { cb(null, Object.prototype.hasOwnProperty.call(files, file)); }, 'package': { main: 'bar' }, readFile: function (file, cb) { cb(null, files[file]); } }; } resolve('./baz', opts('/foo/bar'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/bar/baz.js')); t.equal(pkg && pkg.main, 'bar'); }); resolve('./baz.js', opts('/foo/bar'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/bar/baz.js')); t.equal(pkg && pkg.main, 'bar'); }); resolve('baz', opts('/foo/bar'), function (err, res) { t.equal(err.message, "Cannot find module 'baz' from '" + path.resolve('/foo/bar') + "'"); t.equal(err.code, 'MODULE_NOT_FOUND'); }); resolve('../baz', opts('/foo/bar'), function (err, res) { t.equal(err.message, "Cannot find module '../baz' from '" + path.resolve('/foo/bar') + "'"); t.equal(err.code, 'MODULE_NOT_FOUND'); }); }); test('mock package', function (t) { t.plan(2); var files = {}; files[path.resolve('/foo/node_modules/bar/baz.js')] = 'beep'; files[path.resolve('/foo/node_modules/bar/package.json')] = JSON.stringify({ main: './baz.js' }); function opts(basedir) { return { basedir: path.resolve(basedir), isFile: function (file, cb) { cb(null, Object.prototype.hasOwnProperty.call(files, path.resolve(file))); }, readFile: function (file, cb) { cb(null, files[path.resolve(file)]); } }; } resolve('bar', opts('/foo'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/node_modules/bar/baz.js')); t.equal(pkg && pkg.main, './baz.js'); }); }); test('mock package from package', function (t) { t.plan(2); var files = {}; files[path.resolve('/foo/node_modules/bar/baz.js')] = 'beep'; files[path.resolve('/foo/node_modules/bar/package.json')] = JSON.stringify({ main: './baz.js' }); function opts(basedir) { return { basedir: path.resolve(basedir), isFile: function (file, cb) { cb(null, Object.prototype.hasOwnProperty.call(files, path.resolve(file))); }, 'package': { main: 'bar' }, readFile: function (file, cb) { cb(null, files[path.resolve(file)]); } }; } resolve('bar', opts('/foo'), function (err, res, pkg) { if (err) return t.fail(err); t.equal(res, path.resolve('/foo/node_modules/bar/baz.js')); t.equal(pkg && pkg.main, './baz.js'); }); });
{ "pile_set_name": "Github" }
Fourth Court of Appeals San Antonio, Texas January 23, 2019 No. 04-18-00781-CR, 04-18-00782-CR, 04-18-00783-CR & 04-18-00784-CR The STATE of Texas, Appellant v. Fernando Jefte MATA, Appellee From the County Court, Kinney County, Texas Trial Court No. 10054CR, 10138CR, 10187CR & 9964CR Honorable Spencer W. Brown, Judge Presiding ORDER The State’s Motion Relating to Case Record and to Findings of Fact and Conclusions of Law is hereby DENIED. _________________________________ Sandee Bryan Marion, Chief Justice IN WITNESS WHEREOF, I have hereunto set my hand and affixed the seal of the said court on this 23rd day of January, 2019. ___________________________________ KEITH E. HOTTLE, Clerk of Court
{ "pile_set_name": "FreeLaw" }
Interstate 265 Interstate 265 (I-265) is a Interstate Highway encircling the Louisville, Kentucky, metropolitan area, which includes Southern Indiana. In Kentucky, it travels through Jefferson County, from I-65 in the southern part of Louisville, meeting I-65 again in Indiana, where the road continues west to I-64, where it ends. The entire Kentucky stretch of the road is co-signed with Kentucky Route 841 (KY 841). An additional stretch of freeway between US 31W/US 60/KY 1934 and I-65 in the south part of Louisville is solely designated as KY 841. The highway is named the Gene Snyder Freeway (originally named the Jefferson Freeway), after the former congressman, and usually called "the Snyder" by locals. It is considered part of Louisville's beltline. Route description |- | IN || 13.1 || 21.1 |- | KY || 38.9 || 62.6 |- | Total || 52 || 83.7 |} Indiana Interstate 265 (I-265) in the U.S. state of Indiana presently runs from I-64 at the western edge of New Albany to the Lewis and Clark Bridge near Utica. Beginning at its western terminus, the freeway is concurrent with Indiana State Road 62 until Exit 10. Kentucky Interstate 265 (I-265) in the U.S. state of Kentucky presently runs from the Lewis and Clark Bridge in northern Louisville to an interchange with I-65 in southern Louisville. The entire freeway is concurrent with Kentucky Route 841. The Gene Snyder Freeway in which KY 841 and I-265 overlap for between I-65 and the Indiana state line has seen an increase in serious accidents. The primary factors stem from its low-level grass median which offers little to no protection for crossover incidents. Driver inattention and increased traffic and congestion has led to a decline in the overall level-of-service. In 2006, cable barriers were installed in the median for between I-71 and I-64, with further installation possible in the near future. Part of the road is currently signed in kilometers, which is unusual in the United States. Kentucky Route 841 Kentucky Route 841 (KY 841) is a state highway in the suburbs of Louisville. The route is a partial beltway, encircling Louisville on its southern and eastern sides. Compass direction changes to the north and south of exit 23, Taylorsville Road interchange. The western terminus of the route is at U.S. Route 31W (US 31W) and US 60 in the southwest Louisville community of Valley Station, where KY 841 continues to the west as KY 1934 while the northern terminus is at the Lewis and Clark Bridge and to the north of the East End Tunnel. The section between its terminus at KY 1934 and I-65 is solely designated as KY 841. History Originally signed just as KY 841, the Jefferson Freeway was constructed originally with two sections, one between KY 155 (Taylorsville Road) and US 60 (Shelbyville Road) and a second section between KY 1447 (Westport Road) and US 42 in the 1960s as short connectors to the eastern suburban expansion as well as a new Ford plant. I-264 by 1970 was woefully congested and was in dire need of reconstruction and other improvements, therefore I-265 was proposed as an outer beltway to provide pass-through motorists relief from the congestion of I-264. Construction started in the early 1980s and was finished later that decade and signed in 1987. The road is signed as I-265 and KY 841 from the I-65 interchange to the Indiana state line. From I-65 west to US 31W (although it is up to Interstate Highway standards), is signed solely as KY 841 due to American Association of State Highway and Transportation Officials numbering rules. KY 841 is signed throughout the entire designation of I-265. The exit numbering for the entire beltway starts at the western terminus of KY 841. Indiana State Road 265 The segment of the highway between I-65 and SR 60 at Exit 10 in Indiana was formerly known as Indiana State Road 265. It was changed in June 2019 to an Interstate under approval by AASHTO. Studies have also been conducted for the reconfiguration of the I-265 and I-64 interchange. It is currently an underpowered cloverleaf with no collector–distributor lanes, a relic of the original Jefferson Freeway. In late 2005, members of the Louisville Metro Council proposed a committee to begin planning a western bridge to link the southwestern end of the highway in Kentucky to Indiana. However the proposal of the western bridge was not put into action yet. On December 18, 2016, State Road 265 was extended east of State Road 62, which crosses the Ohio River connecting with KY 841, which was extended north of U.S. 42 in Kentucky as part of the Ohio River Bridges Project, creating a bypass around the eastern side of the city of Louisville. On June 4, 2019, the two disjointed sections of I-265 were finally connected under AASHTO approval, with the Indiana State Road 265 designation decommissioned and replaced by I-265. However, the signage has not yet been replaced to reflect the AASHTO approval. The Kentucky Route 841 designation mostly concurrent with I-265 in Kentucky has remained. Lewis and Clark Bridge In various discussions for over 30 years, the Lewis and Clark Bridge (previously referred to as the East End Bridge) is part of a new highway that connects State Road 265 in Indiana to KY 841 in Kentucky. The completion of the bridge connected the two disjointed highways to form a three-quarter beltway around the Louisville, Kentucky, metro area. The bridge was opened to traffic on December 18, 2016. There are currently no plans to construct a bridge on the west end of I-265. Exit list See also Roads in Louisville, Kentucky References External links KentuckyRoads.com: I-265 65-2 65-2 65-2 65-2 Kentucky Category:Transportation in Jefferson County, Kentucky 0265 Category:Transportation in Clark County, Indiana Category:Transportation in Floyd County, Indiana
{ "pile_set_name": "Wikipedia (en)" }
Q: irrationality of $\sqrt{2}^{\sqrt{2}}$. The fact that there exists irrational number $a,b$ such that $a^b$ is rational is proved by the law of excluded middle, but I read somewhere that irrationality of $\sqrt{2}^{\sqrt{2}}$ is proved constructively. Do you know the proof? A: Since this is a well-established result, this is a community wiki post. Relevant question: Deciding whether $2^{\sqrt2}$ is irrational/transcendental Kuzmin proved the following claim in 1930: Theorem: If $\alpha\neq 0,1$ is algebraic, $\beta$ is positive and rational, not a perfect square, then $\alpha^{\sqrt{\beta}}$ is transcendental. Unfortunately the paper is in Russian and I failed to find an English translation. A corollary of this is that $2^{\sqrt{2}}$ is transcendental, and so is its square root $\sqrt{2}^{\sqrt{2}}$. The outlines of both Gelfond and Kuzmin's constructive proof can be found here. As David Mitra pointed out the comments, Niven's book had a section dedicated to this. I love Niven's book so much. The technique is similar to the adapted proof I posted here, proof by contradiction. Rough idea about the construction: First assuming $\alpha^{\sqrt{\beta}}$ is algebraic. Then using sufficient large degree Lagrange interpolation polynomial to approximate $e^{(\ln \alpha)x}$ at points $\{a+ b\sqrt{2}\}$ for $a,b\in \mathbb{Z}$. Let the number of points go to infinity the error will go to zero, this shows a transcendental function $\alpha^x$ can be interpolate using countably many algebraic points. Contradiction.
{ "pile_set_name": "StackExchange" }
Peshekee River The Peshekee River is a river on the Upper Peninsula of Michigan in the United States. It is a tributary of Lake Michigamme, and its waters flow via the Michigamme River and the Menominee River to Lake Michigan. See also List of rivers of Michigan References Michigan Streamflow Data from the USGS Category:Rivers of Michigan Category:Tributaries of Lake Michigan
{ "pile_set_name": "Wikipedia (en)" }
Indie, Noise, Shoegaze… Music “We wanted to make a more energetic record. I personally looked to artists like Springsteen, 70’s Bowie, The Smiths, The Cure, Neil Young as inspiration for—not really for sound as much as for that dichotomy of bands who were entertainers still making, at times, weird dark music and writing songs that seem totally over-the-top by today’s rock band standards,” says Cymbals Eat Guitars bassist Matthew Whipple of his band’s wildly ambitious fourth LP, Pretty Years” (Press)
{ "pile_set_name": "Pile-CC" }
def extractLambytlWordpressCom(item): ''' Parser for 'lambytl.wordpress.com' ''' vol, chp, frag, postfix = extractVolChapterFragmentPostfix(item['title']) if not (chp or vol) or "preview" in item['title'].lower(): return None tagmap = [ ('King of Classical Music', 'King of Classical Music', 'translated'), ('PRC', 'PRC', 'translated'), ('Loiterous', 'Loiterous', 'oel'), ] for tagname, name, tl_type in tagmap: if tagname in item['tags']: return buildReleaseMessageWithType(item, name, vol, chp, frag=frag, postfix=postfix, tl_type=tl_type) return False
{ "pile_set_name": "Github" }
The firm was founded in 1997 by Robert J. Wise Jr., who has 20 years of experience in the historic preservation field. He is assisted by Seth Hinshaw, Senior Planner, who has been with the firm since 2001. Both planners have M.S. degrees in historic preservation from the University of Pennsylvania and exceed the 36 CFR 61 Professional Qualification Standards established by the National Park Service for architectural historians. Overview Wise Preservation Planning LLC completed a Historic Resource Impact Study for Cheyney University in 2010. The Study was submitted per Thornbury Township, Delaware County historic resource protection ordinance. The project involved the demolition of the non-historic Robinson Hall Dormitory (and later two other dormitories in the residential complex) and the construction of a new residential complex of buildings. The study was required because the proposed project was adjacent to several identified historic resources, including the University's early 20th century Quadrangle. The "Quad", consisting of substantial stone buildings, is the heart of the campus. In recent years, the University has renovated several historic buildings and has started to relocate activities to the historic Quad. Major recommendations by Wise called for reducing any negative impact of the new construction upon the Quad and in fact visually and functionally integrating the new complex with specific historic resources within the Quad. Cheyney University is the current name of an institution that was founded in Philadelphia in 1837. It operated under the guidance of a board of Quakers until the early 20th century. The campus relocated to Thornbury Township in 1903. Buildings on the historic Quad were built during the first three decades of the 20th century, partially during the long presidency of Leslie P. Hill (1913-1951). During the 1960s and 1970s, Cheyney extended the campus off the Quad. Robinson Hall was built on the site of the former Elkinton Athletic Field in 1964. Cheyney became a University in 1983, and with the passage of time, it has been increasingly interested in the re-use of its historic buildings. The image on the left shows the architects' rendering of the proposed new residence hall. The Impact Study recommendations were generally accepted by the University and the Township. The report commended Cheyney University for its foresight in planning for the future of its campus. The reconstruction project is now underway.
{ "pile_set_name": "Pile-CC" }
A semi-biased commentary on British and American politics, culture and current affairs Contraception CULTURE Amanda Marcotte, writing at Slate magazine, makes a compelling case for movie scriptwriters and directors to show more condom use in their movies. She makes a fair point: “In the world of movies and TV, people seem to be having sex all the time, but they almost never talk about or are shown using contraception. Since so much of movie sex serves the plot, you get encounters that are much more spontaneous than they would be in real life, without any pause in the action to wrap it up. Young viewers could easily get the sense that the norm is to hop right in bed with someone without ever worrying about unintended pregnancy.” And it’s true – if realism is your aim (and admittedly this is not always the case), pretending that people hop into bed with each other without going through that awkward “fumbling in the bedside cabinet drawer” moment is a misrepresentation, and one that can be easily (and, if done well, humorously) corrected. Jim Henson Studios, creator of The Muppets, is boycotting Chick-fil-A over that company’s president’s condemnation of gay marriage. In a stern rebuke, their statement reads: “The Jim Henson Company has celebrated and embraced diversity and inclusiveness for over fifty years and we have notified Chick-Fil-A that we do not wish to partner with them on any future endeavors”. Proco Moreno, Alderman of Chicago’s 1st Ward, joined in the anti Chick-fil-A backlash, stating that he would block the restaurant chain’s attempts to open their second Chicago outlet in his district because of the aforementioned statement issued by their CEO. His statement is somewhat over-the top – “If you are discriminating against a segment of the community, I don’t want you in the 1st Ward” – it is hard to see how any discrimination is taking place, as the restaurant does not check the sexual orientation of its customers upon entry, or have any policies in place that discriminate against one or another. But the fact remains that needlessly coming out in favour of a regressive social policy position that has no direct impact on your business or bottom line, can cost you money. Getting in on the act, The Onion reports on Chick-fil-A’s new homophobic sandwich. Reports The Onion: “In a press conference to reporters, company representatives said the homophobic new sandwich will include the national fast food chain’s trademark fried chicken filet wrapped in a piece of specially-smoked No Homo ham that would be topped with a slice of Swiss cheese and lathered in a creamy new Thousand Island-based Fag Punching sauce”. BRITISH POLITICS The UK economy shrank by another 0.7% according to the latest figures released today. Iain Martin, writing in The Telegraph, thinks that George Osborne has six months to turn things around. I would guess that this estimate sounds about right, but I am not optimistic that Osborne will do anything differently, given his obstinate refusal to implement the needed supply-side reforms, and his obsession with trying to score cheap political points from Ed Balls, a diversion which should be beneath him. The Guardian’s foremost education journalist twists herself in knots trying to explain why she is against private schools, and yet is sending her daughter to a private school. She takes a whole article, and many unnecessary words to explain what I can say in just three – she’s a hypocrite. She says: “I remember reading about Diane Abbott’s decision to send her son to the £10,000-a-year City of London school. She said she was a mother first and a politician second, a point that resonated strongly with me.” Precisely. She’s happy to inflict her left-wing social engineering on other people to make them conform to her ideal worldview (uniform standards, uniform people, uniform outcomes), but as soon as her own interests come in to play, she takes the conservative position. AMERICAN POLITICS Oh noes. The house of cards built by Grover Norquist has started to come crashing down as more and more elected officials repudiate his “tax pledge”. Whether you think the current tax burden in America is sustainable or not, I think most reasonable people can agree that Norquist’s pledge is overly restrictive on lawmakers, preventing them from closing unwarranted and discriminatory tax loopholes on the grounds that doing so would constitute a “tax increase”. Norquist, and his advocacy group Americans for Tax Reform, are one of several significant hurdles standing in the way of a fundamental simplification of the existing byzantine tax code. We should all cheer its demise, and hope that similar obstacles from the American left fall by the wayside too, in the name of meaningful, lasting reform. It is hard to disagree with this piece from Marbury, discussing the old-fashioned political art of persuasion, and the relative aptitudes of Obama and Clinton at using it. Through the lense of the Northern Irish “Good Friday” peace accord, Marbury looks at the way that President Clinton was able to flatter, cajole and reassure the key parties so that they reached a point where a deal could be signed, and how this skill is currently lacking in the Obama administration. Money quote: “Obama likes the big set-piece speech. But every policy he has backed, from the stimulus to healthcare, has declined in popularity the more speeches he made about it. His speeches explain things very well, very precisely. But they don’t change minds. This, it turns out, was the big hole in Obama’s campaign rhetoric of unification, of bringing red and blue together. He spoke about it eloquently, but he was never going to be the president who put it into action. Obama is a preacher, not a persuader. He’s terrific if you already agree with him, but doesn’t have much impact on those who don’t.” Jacob Weisberg, writing in Slate magazine, effectively deconstructs the Romney campaign’s attempts to smear President Obama with the “Chicago machine politician” label. Says Weisberg: “Of course, Romney isn’t interested in this kind of nuance. ‘Chicago-style politics’ is mainly just a way for him to call Obama corrupt without coming out and saying so”. At least some people in the Republican Party seem to have woken up to the demographic timebomb ticking away under their feet, and have started to lament, if not yet analyse, the fact that the vast majority of young people in America today would sooner give up their loud music and Pac-Man video games (or whatever it is that young people do for fun these days) than vote for a GOP candidate in a presidential election. Mitt Romney is apparently the latest Republican to develop a sense of outrage that no one outside of the grey haired brigade would be seen dead voting for him: ‘I don’t mean to be flip with this,’’ said Mitt Romney during a Q&A with students at the University of Chicago last week. “But I don’t see how a young American can vote for a Democrat.’’ He cheerfully apologized to anyone who might find such a comment “offensive,’’ but went on to explain why he was in earnest. The Democratic Party “is focused on providing more and more benefits to my generation, mounting trillion-dollar annual deficits my generation will never pay for,’’ Romney said. While Democrats are perpetrating “the greatest inter-generational transfer of wealth in the history of humankind,’’ Republicans are “consumed with the idea of getting federal spending down and creating economic growth and opportunity so we can balance our budget and stop putting these debts on you.’’ At which point the needle on my “Are You For Real?” machine jolted as far toward the “You Must Be Kidding” end of the spectrum as it could go before the whole machine exploded in a shower of sparks. The author himself does a good job of pouring cold water on any Republican claims to the mantle of fiscal restraint: But that debt wasn’t piled up without plenty of Republican help. During George W. Bush’s presidency, annual federal spending skyrocketed from $1.8 trillion to $3.4 trillion, and $4.9 trillion was added to the national debt. Bush left the White House, in fact, as the biggest spender since LBJ . Granted, the profligacy of Barack Obama has outstripped even Bush’s bacchanal: CBS reports that Obama has added more to the national debt in just three years and two months than Bush did in his entire eight years. Still, younger voters can hardly be blamed if they haven’t noticed that Republicans are “consumed with the idea of getting federal spending down.’’ Therefore I do not intend to say anything more about the glaring, shameless hypocrisy of the Republicans – the party that gifted America two unfunded wars, large tax breaks not balanced by spending cuts and the joke that is Medicare Part D – laying any claim whatsoever to competency in handling the nation’s finances. Except that I will say that much of the “profligacy of Barack Obama” mentioned by the author was the result of a fiscal stimulus implemented (despite its imperfections) at a time when the US economy was in freefall, and without which the tepid recovery currently being experienced would likely be nothing but a sweet dream. Mitt Romney and those others in the Republican Party who scratch their heads wondering why young people don’t like them miss the point entirely when they sulk that young people should embrace their economic policies. Though their fiscal policies may perhaps benefit young people in certain ways (and even this is arguable), there is no evidence based on past behaviour that they will actually have the political courage to implement them if voted into office. Old people (the beneficiaries of the “wealth transfers” that Romney claims to lament) actually vote in large numbers. Younger people don’t. The policy priorities of our political candidates duly reflect this fact. Besides, it is not the GOP’s economic policies that are the main problem. The problem is the fact that in a bad economy, the opposition party is spending more time talking about abortion, contraception, mass deportations of illegal immigrants, repealing ObamaCare, questioning the president’s eligibility to hold office, and reinstating “Don’t Ask, Don’t Tell” and a host of other socially regressive policy positions which are anathema to a majority of young people today than they are about how to reduce unemployment and help a population ill-equipped to perform the more highly-skilled, non-manufacturing jobs of tomorrow. Rick Santorum in particular often complains that the media focuses on his socially conservative policy positions and not his economic plan, but he can hardly expect young voters to thrust him into office on the back of his inspired ideas on the economy (spoiler – they are not that great) when they are more worried that he will cut off their unemployment insurance, or close down the Planned Parenthood centre where they go for medical care, or start a war with Iran. It is no coincidence that the one Republican presidential candidate who actually walks the fiscal conservatism walk and who doesn’t continually bleat on about social issues and the culture wars – Ron Paul – vastly outperforms his rivals with young voters, in primary after primary. Newsflash to Mitt Romney, Rick Santorum and Newt Gingrich: Even if you had a cogent economic policy (which, by the way, none of you do) you will never appeal to young people by just tweaking your fiscal message a little bit. You had a choice when you started your presidential campaigns, and in your desperation to secure the party base you chose to fearmonger and rant about “taking back America”, and fret about turning into a socialist state, and speak about the importance of individal freedom in one breath while promising to impose your religious values on the whole country in the next. Many young people would like an alternative to President Barack Obama, but you offer them nothing by way of a contrasting, conservative vision for the country that they could ever find acceptable. You offer them nothing. You offer racial minorities nothing. You offer women nothing. You offer the working poor and the unemployed nothing. And all of these constituencies will dutifully line up to vote for Barack Obama, and you will lose the presidential election on November 6th. It could be otherwise, if only you offered the American people a genuine acceptable choice when they cast their votes. I have a partly tongue-in-cheek list of US states that I am currently ‘boycotting’, or have no intention of visiting in the immediate future, either because of unfortunate things that have happened to me there, or most usually because of particularly stupid and offensive laws that have been either proposed or actually voted on and passed in their legislatures. Arizona was already strongly competing to join this exclusive list (it is difficult to join and even harder to be removed from the list) with the signing by Gov. Jan Brewer of their famous anti-illegal-immigration law, allowing state police to detain anyone suspected of being an illegal immigrant (quite how you tell such a person from a natural US citizen by their appearance or behaviour is anyone’s guess, but I think we all know the criteria they have in mind): But then came this gem that I was alerted to by a friend on Facebook – now, the Arizona State Senate Judiciary Committee (a pompous title for a pompous group of individuals) has endorsed a controversial bill that will, if passed, allow Arizona employees to exclude contraception coverage from the healthcare plans that they offer to their employees, if their religious beliefs or moral convictions encourage them to do so. Furthermore, the bill would also allow employers to demand proof of a medical prescription (for non birth-control related reasons) if an employee wishes to claim for contraceptive pills on their health insurance policy. The author of the bill – one Debbie Lesko, Republican of course – says that: “So, government should not be telling the organizations or mom and pop employers to do something against their moral beliefs.” Okay, well guess what. Maybe I’ll set up shop in Arizona and start a small business. But I am from a small and little-known religion that doesn’t believe in mammograms or cervical cancer screening. I don’t know why, my particular interpretation of my hypothetical holy book just tells me that to test for these diseases to allow early intervention would be an affront to God. So none of my female employees will get to benefit from these forms of healthcare as part of the insurance plan that I provide them. Oh, and my new religion also thinks that heart disease and erectile dysfunction are punishments from God that should be meekly accepted rather than treated, so no Viagra or anti-cholesterol medication for the gents. If you need Viagra to treat some other ailment not connected with erectile dysfunction we can maybe talk about coming to an agreement, but I’ll need a signed letter from your doctor explaining your precise medical history and needs. Can you imagine the uproar? Let us be quite clear. This is not about freedom of religion. Many states have been living under an expressed requirement that employers include birth control coverage in their healthcare plans for many years with nary a whisper of complaint until a Democrat named Barack Obama occupied the oval office. This is about slowly trying to establish a fundamentalist Christian theocracy in America, one in which even the overwhelming majority of Christians, myself included, would not wish to live in were it fully implemented. Republicans – who once criticised Obama because of the type of Christian church that he attended and the pastor who preached there – have decided that it would now be more politically fruitful to fan the embers of suspicion that he is in fact a muslim, and that he is launching an all-out assault on “Judeo-Christian” principles. And while we’re on the topic, can someone please initiate a sensible conversation about moving away from the current employer-based health insurance system in America? Aside from the damage it does to the economy in terms of issues such as impeding mobility of labour (especially important during the current fragile recovery with unemployment so high), if individuals purchased their own health insurance rather than relying on the employer to do it for them, we could sidestep this whole argument about coercing employers to act against their moral beliefs. If Debbie Lesko ever chose to leave her political career and return to the private sector, she wouldn’t have to stay up all night worrying about what naughty things her employees might be doing with the healthcare coverage that she paid for, because the employees would be paying the premiums and taking their chances that they won’t be struck down by lightning for daring to use a condom, or the pill. And I think everyone would sleep better at night as a result. Arizona, you have been teetering on the brink for a long time now. But congratulations, you have officially made the list. I decided to join the Roman Catholic church at eighteen years of age, and went through the Church’s RCIA programme (the Rite of Catholic Initiation of Adults), which required attending weekly lessons with the parish priest over a period of six months. I look back on the night that I was confirmed into the Church as one of the happiest and most sacred moments of my life, and though the strength of my faith (and my weekly Mass attendance) has seen several peaks and rather more lows in the intervening decade, I still consider myself a member of the Church, and I always intend to be. Many people have made similar conversions to the Church, notably two of the current Republican presidential contenders, Newt Gingrich, and Rick Santorum. It is said that there is no zealot like a convert, and though I may be the exception to the rule, Gingrich and Santorum appear to prove it rather well with many of their public pronouncements. In many respects, both men are probably better and more observant Catholics than me, at least now (Gingrich), and I don’t presume to judge them at all. What I will do, however, is call them out when they claim to represent the only political party that will defend Catholic teachings and priorities. Because that is pure, grade A baloney. Said Newt Gingrich of the ObamaCare requirement for employers operating in the public sphere, serving the public and employing people regardless of their religious affiliation, to offer health insurance that includes access to birth control: “I frankly don’t care what deal he tries to cut; this is a man who is deeply committed. If he wins re-election, he will wage war on the Catholic Church the morning after he is re-elected.” (Yes, I fear that the O RLY owl is going to be a frequent visitor to this blog). Really, Newt? Wage war? I’m curious to see Obama’s glistening new clone army sitting in storage, waiting for Inauguration Day in January 2013 when they will be activated and unleashed to desecrate churches and force people into unwilling same-sex marriages across the land. If I could talk with Mr. Gingrich and Mr. Santorum, I would say: the protection of life should not end at the moment of birth. I will never understand my Church’s current teaching on contraception – especially when male sexual enhancement drugs, in vitro fertilisation and other techniques that can result in the creation or destruction of a fertilised embryo are given a free pass, while contraception, the morning-after pill and stem cell research are not. But I can appreciate the consistency of the argument that all human life is precious, is worthy of respect, and that none should be taken unnecessarily. My own views on abortion are not yet fully developed, but I know that I would want it to be as rare as possible, and yet readily available at least under some limited circumstances (such as the survival of the mother, rape or incest, or in the case of catastrophic developmental anomalies). I understand your policies for caring for and protecting life while it is in the womb. But what effect would your policies have once these children are born? Is it important, as you so often say, that they are born into loving (married, heterosexual) families who are ready for a child, or does it not matter if they are unwanted and abused, or end up in the custody of the state until they reach eighteen years of age? It’s all very well advocating strongly for a new life until it reaches the nine month threshold, but what then? And to the Bishops, I would say: why do you deny holy communion to politicians who advocate for general public access to abortion services (while not supporting the practice themselves), but welcome with open arms those who support the death penalty, fight measures to improve social justice, support the torture of enemy prisoners or beat the drum for pre-emptive wars around the globe? You diminish your public standing, your credibility and the importance of these other important Church teachings when you do so. Andrew Sullivan makes a similar point in his excellent blog, with regard to the current enthusiasm in Republican circles to go to war with Iran: “I’d also argue that pre-emptive war based on an enemy’s alleged intentions, when it publicy declares the opposite, or based on inherent evil or insanity is counter to just war theory. Certainly the rhetoric of Santorum and Gingrich on this subject is a profound attack on Catholic just-war teaching. But don’t expect the Bishops to make any fuss about that. War and torture seem trivial issues to them, compared with access to contraception or gay rights.” Seriously, maybe I missed this in my RCIA classes. Will a Republican (since they are the ones who claim to have the direct hotline to God these days) please let me know which of these Church teachings it is okay to brazenly defy while still declaring myself a proud standard-bearer for the Church, and which are so inviolable that I would be literally declaring war on Catholicism if I dare to dissent? Thanks. “The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama’s ‘death panel’ so his bureaucrats can decide, based on a subjective judgment of their ‘level of productivity in society’, whether they are worthy of health care. Such a system is downright evil” – Sarah Palin, August 2009 “Barack Obama is the most dangerous president in modern American history. This administration has intellectually disarmed, it is morally disarmed, it is incapable of describing what threatens us” – Newt Gingrich, Republican Presidential Candidate, February 2012 “People have birth certificates. He doesn’t have a birth certificate. He may have one but there’s something on that, maybe religion, maybe it says he is a Muslim. I don’t know. Maybe he doesn’t want that. Or he may not have one. But I will tell you this. If he wasn’t born in this country, it’s one of the great scams of all time” – Donald Trump, Improbably Rich Idiot, March 2011 On the Federal Budget. The US national debt stood at $10.6 trillion when President Obama took office, and in 2011 reached $14.6 trillion. Cue lots of self-righteous bluster from the American right that Obama is wrecking the national finances and, to use a much overwrought phrase “running up the national credit card” that the next generation will have to pay off. You can agree or disagree with Obama’s economic stimulus, and TARP, and the auto bailouts – though as I remind my Republican friends, it is easy to criticise all of these measures and claim that they had no positive effect when none of us will ever have to live in an alternate reality where they had not taken place. What you cannot do, however, is pose as a staunch fiscal conservative and a concerned American worried about the financial stability of the United States if you have done any of the following: Voted to approve the wars in Iraq and Afghanistan without seeking additional revenues to fund them. Voted for Medicare Part D, the prescription drug programme for elderly Americans, again with no commensurate revenue increases (strange how “government-run healthcare” is an assault on individual liberty, with the huge exception of Medicare). Voted for or supported the Bush tax cuts of 2001 and 2003 that were not met with equal cuts to government spending. Obstructed the recent vote to raise the US debt ceiling, raising fears of a default and directly resulting in the downgrading of the government’s AAA credit rating. On Religious Liberty. I have amused myself watching several of the Republican presidential candidates twisting themselves in rhetorical knots trying to make the case that the founding fathers were only joking when they enshrined a “wall of separation between church and state” in the constitution (in Rick Santorum’s case, he went as far as to say that it made him physically sick to contemplate). Or rather, that it exists in much the same way as a cell membrane permits osmosis, allowing religion (or rather, certain favoured religions and denominations) to impose their beliefs beyond their congregations on the entire US population while making religious organisations themselves immune from any requirement to conform to state or federal laws. If we take as one example the recent furore over the fact that the Affordable Care Act (ACA, ObamaCare) mandates that insurance companies provide birth control coverage, it is telling that many of the religious prelates – including many Catholic Bishops – have lived under similar requirements to provide employees with insurance that includes birth control in their home states for many years without raising a chorus of objection, until the same issue came up at a federal level. One cannot help but feel that religion and the concept of religious freedom are being used as a convenient cudgel with which to bash the Democrats in an election year, rather than being truly respected and protected by the GOP. In terms of the Tea Party, there seems to be a genuine if uneven split between the minority true libertarians (of the Ron Paul mould) who believe in a separation of church and state and have the courage to say so, and the bulk of those others who are able to maintain in their minds the cognitive dissonance that must surely arise when you advocate for individual liberty in the economic realm on one hand, but insist that people abide by select teachings from your holy book (whichever it may be) on the other. On Healthcare. Being a conservative used to mean being a realist, dealing with the world as it is and hopefully proposing pragmatic, typically non-radical solutions. One of the persistent problems with the US healthcare system is the “free rider” problem. Hospitals are required to treat and care for any patient that arrives suffering from a grave, life-threatening injury or illness, regardless of whether or not that patient carries health insurance. Of course, this includes the more than 30% of Americans who lack such insurance. Even the most fervent tea-partier would (probably) pause before proposing that people be left to die on the street if they are in need of medical care but lacked insurance. Unfortunately, this creates a rather significant free rider problem, with US taxpayers and health insurance policyholders essentially paying to cover the cost of these uninsured healthcare expenses. This contributes to the unsustainable rate of inflation in US healthcare costs, makes no sense and is just plain silly. Even the conservative Heritage Foundation used to think so too, and at one time proposed an individual mandate requiring all citizens to purchase at least basic health insurance (http://www.forbes.com/sites/aroy/2011/10/20/how-a-conservative-think-tank-invented-the-individual-mandate). But now any such mandate is considered a grave assault on liberty. Okay, constitutional scholars can debate that point for a long time. But pragmatic conservatives should surely try to find a way around this issue, to solve the serious free rider problem which makes healthcare more expensive for everyone. Instead, the tea party rail against the “tyranny” of having to purchase healthcare, and yet say nothing about the free rider problem which hurts lower income people most of all in the form of higher insurance premiums and medical bills. Neither do they propose an alternative solution to address the fact that so many of their fellow citizens – some through choice but many through no fault of their own – live with the daily fear that accident or sudden illness could bring them to ruin. And no, promising to clamp down on medical malpractice lawsuits and muttering quietly about perhaps allowing insurers to sell policies across state lines, while both sensible ideas, do not solve a problem of this magnitude. I could go on to talk about “death panels” – the GOP’s term for the basic idea that end-of-life care counselling should be offered (not mandated, just offered) as part of health insurance policies in order that more people are given the opportunity to make these key decisions while they are young and healthy, and potentially avert the suffering and huge proportion of total lifetime medical expense which is incurred during the end stages of terminal diseases, through the issuance of Do Not Resuscitate Orders etc. But there is no need, because anyone who reads the language in the Affordable Care Act (ACA) and somehow extrapolates in their mind that offering end of life counselling as part of an insurance plan could in any way equate to a “death panel” that decides whether the disabled or infirm should live or die is clearly smoking something quite mind-alteringly potent and will not be swayed by anything committed to print here. I could also talk about the fact that the GOP’s constant use of the term “government-run healthcare”, or suggestions that government has taken over the healthcare industry (i.e. nationalisation) are ludicrous, alarmist and clearly and demonstrably false. But again, there is no need, because any thinking person should be able to see that while government may have now infringed on the way that consumers choose their health insurance provider (to some limited extent, in certain cases), this insurance and the healthcare itself is still provided by private-sector or non-profit organisations as much as it ever was. Those who scream “government takeover” or “socialism” would do well to go back to school and relearn the meaning of those terms – were it not for the fact that getting a college education is, of course, a form of snobbery these days. But there is no need to talk about these things. At present there is no reasoning or engaging at all on the topic of health reform with the Tea Party-beholden GOP, who, in the words of David Frum, “followed the most radical voices in the party and the movement, and [were led] to abject and irreversible defeat”. (http://www.frumforum.com/waterloo). ObamaCare is here now, with all of its benefits and imperfections. The Republicans had an opportunity to engage with the Democrats and ensure that some more conservative principles were included in the law. Instead, they chose obstructionism and got none of what they wanted. Why Now? I am curious about this, and I would love for any thinking, Tea Party-supporting readers to comment and to help educate me. I do not believe that the recent groundswell of constitutional originalism and small government fervour is entirely the result of resentment that a black man currently occupies the Oval Office. I think it is a factor, but not the only one, or even the main one necessarily. However, given the fact that the US federal government expanded in terms of raw expenditure, percentage of GDP, scope of activities and power over the individual for many years prior to the election of Barack Obama, I would like to understand – why the Tea Party, why now? Why the sudden need in 2009 for people to buy pocket editions of the US constitution, to dress up in 18th century clothes, to attend these rallies and rail against the subversion of America? Why deselect long-serving and relatively competent congressional representatives in favour of unknowledgeable and in some cases laughable primary challengers who vowed even before getting to Washington (or declaring on television that they are not a witch and being comprehensively beaten, in one depressing case) that they would never seek to strike a bipartisan deal? If you are a fiscal conservative, that’s great, campaign for greater fiscal responsibility. If you believe in small, limited government – marvellous, advocate strongly for it (I assume that your enthusiastic support of individual liberty applies to peoples’ bedroom and nuptial activities too though, right?) If you believe that some of the key edifices of the American social safety net and federal government are technically unconstitutional, then you can probably make that argument quite convincingly. But before you do any of those things, and if your name is not Ron Paul, please explain where you stood, and who and what you voted for in the months and years prior to Inauguration Day, 2009.
{ "pile_set_name": "Pile-CC" }
Influence of Cross-Linkers on the in Vitro Chondrogenesis of Mesenchymal Stem Cells in Hyaluronic Acid Hydrogels. This study aims to investigate the effect of the structures of cross-linkers on the in vitro chondrogenic differentiation of bone mesenchymal stem cells (BMSCs) in hyaluronic acid (HA)-based hydrogels. The hydrogels were prepared by the covalent cross-linking of methacrylated HA with different types of thiol-tailored molecules, including dithiothreitol (DTT), 4-arm poly(ethylene glycol) (PEG), and multiarm polyamidoamine (PAMAM) dendrimer using thiol-ene "click" chemistry. The microstructure, mechanical properties, diffusivity, and degradation rates of the resultant hydrogels were controlled by the structural feature of different cross-linkers. BMSCs were then encapsulated in the resulting hydrogels and cultured in chondrogenic conditions. Overall, chondrogenic differentiation was highly enhanced in the PEG-cross-linked HA hydrogels, as measured by glycosaminoglycan (GAG) and collagen accumulation. The physical properties of hydrogels, especially the mechanical property and microarchitecture, were resulted from the structures of different cross-linkers, which subsequently modulated the fate of BMSC differentiation.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to Enumerate variable XML Document in .NET using Linq to XML I'm submitting a command to a ssh session and getting an XML response back which is variable depending on the type of query I'm running. I get the following type of XML returned... <CLIOutput> <Results> <ReturnCode>0</ReturnCode> <EventCode>23000</EventCode> <EventSummary>CLI command completed successfully.</EventSummary> </Results> <Data> <Row> <Client>kcllaptop</Client> <Domain>/Top/Top</Domain> </Row> <Row> <Client>testclient</Client> <Domain>/Top/Top</Domain> </Row> </Data> </CLIOutput> I then parse into an XDocument, and what I want to do is Enumerate through the different <Row> attributes in the <Data> section, given they change. They're always in DATA section, but the attribute names and numbers of them change. I can get the specific one in the example above, but I'm after a more generic method. I can get to the specifcs by _xDoc.Elements().<Data>.<Rows>(0).<Client>.ToValue but the <Client> name changes. What's the best way to enumerate through the rows returned in the element. Complete LInq newbie sorry. Thanks and Cheers, Al A: The answer by SLaks works. However, in VB.NET you don't need the .Elements() method. This will do the same thing: For Each row in _xDoc.<Data>.<Row> Console.WriteLine(row.<Client>.Value) Next
{ "pile_set_name": "StackExchange" }
Male cytogenetic evaluation prior to assisted reproduction procedures performed in Mar del Plata, Argentina. This paper aimed to estimate the frequency of occurrence and the types of chromosomal abnormalities found in 141 infertile men with abnormal semen parameters. the frequency and type of chromosomal abnormalities were determined with male mitotic karyotype analysis from peripheral blood through chromosome banding techniques before assisted reproduction procedures. In this series of 141 infertile men, 19 (13%) had chromosomal anomalies and 35 (25%) had polymorphic variants. The main chromosome abnormalities were reciprocal translocations and marker chromosomes in mosaic. These results stress the relevance of cytogenetic studies for infertile males as a diagnostic tool and a valuable input in genetic counseling.
{ "pile_set_name": "PubMed Abstracts" }
Salisbury (1818 ship) Salisbury was launched c.1814 in the almost certainly under another name and was possibly a prize. She was possibly captured by the British or sold to British owners in 1815. She made one voyage seal hunting in 1820 and transported settlers to South Africa in 1821. She was lost in 1827. Origins and career Salisbury origins and career are difficult to untangle because there were at various times several vessels by that name, all ranging between 117 and 125 tons burthen, and having similar trades. In 1821 Lloyd's Register (LR) carried two vessels named Salisbury, and the Register of Shipping carried four. It appears that LR missed one vessel completely and may have conflated two different vessels. Salisbury first appeared in LR in 1815 with S. Creedy, master, London owners, and trade London–Sierra Leone. Her origins were given as a foreign prize. She first appeared in the Register of Shipping (RS) with J. Creedy, master, Craig, owner, and trade London–Africa. Her origins were given as Portugal, built in 1812.<ref name=RS1816>[https://hdl.handle.net/2027/mdp.39015024214267?urlappend=%3Bseq=650 '"RS (1816), Seq.№1119.]</ref> However, in 1818 RS had two listings for Salisbury while LR had one that seemingly combined the two listings in RS. Seal hunting voyage (1820–1821): On 8 September 1820 Messrs Cannan, Smith and Millars appointed Captain Thomas Hodges, late master of , to command of Salisbury to engage in seal hunting. He sailed from England on 15 September, bound for the South Shetland Islands. He arrived at New South Shetland in January 1821 and left on 16 February. Salisbury called at Buenos Aires and arrived in the Downs on 13 May and in the Thames by 22 May. She returned with 9000, or 9,821, or 8,926 seal skins. FateSalisbury, of Liverpool, was lost off Cape Mount, Africa, on 1 June 1827. Her crew survived. Lloyd's List gave the name of her master as Bryan. Citations and references Citations References Jones, A.G.E. Jones (April 1985) British Sealing on New South Shetland 1819-1826: Part I", Great Circle'', Vol.7, No.1, pp. 9-22. Category:1814 ships Category:Age of Sail merchant ships of England Category:Sealing ships Category:Ships of the 1820 settlers Category:Maritime incidents in June 1827
{ "pile_set_name": "Wikipedia (en)" }
Guidelines Insights: Acute Lymphoblastic Leukemia, Version 1.2019. Survival outcomes for older adults with acute lymphoblastic leukemia (ALL) are poor and optimal management is challenging due to higher-risk leukemia genetics, comorbidities, and lower tolerance to intensive therapy. A critical understanding of these factors guides the selection of frontline therapies and subsequent treatment strategies. In addition, there have been recent developments in minimal/measurable residual disease (MRD) testing and blinatumomab use in the context of MRD-positive disease after therapy. These NCCN Guidelines Insights discuss recent updates to the NCCN Guidelines for ALL regarding upfront therapy in older adults and MRD monitoring/testing in response to ALL treatment.
{ "pile_set_name": "PubMed Abstracts" }
Q: What is the Java equivalent of JavaScript's resource folder? My Wicket web application contains a Flash (*.swf) FLV player. The following code: final String script = "var swfVersionStr = '10.0.0';" + "var xiSwfUrlStr = 'playerProductInstall.swf';" + "var flashvars = {file:'/proj/resources/video.flv'};" + "var params = {};" + "params.wmode = 'transparent';" + "params.quality = 'high';" + "params.allowscriptaccess = 'sameDomain';" + "params.allowfullscreen = 'true';" + "var attributes = {};" + "attributes.id = 'test';" + "attributes.name = 'test';" + "attributes.align = 'left';" + "swfobject.embedSWF('/proj/resources/mediaplayer.swf', 'movieDiv', '640', '480', swfVersionStr, xiSwfUrlStr, flashvars, params, attributes);" + "swfobject.createCSS('#flashContent', 'display:block;text-align:left;');"; add(new AbstractBehavior() { public void renderHead(IHeaderResponse response) { super.renderHead(response); response.renderOnLoadJavascript(script); } }); plays the FLV. The swfobject.js file is placed in the resource folder of my server. As I am testing it on localhost, the absolute path of resource folder is: /home/tapas/Desktop/proj/work/Jetty_0_0_0_0_80_proj.war__proj__qk44r3/webapp. Now, how can I save a file in the resource folder of my server by using Java? JavaScript identifies the resource folder path as /proj/resources/; what is the equivalent expression of this path in Java? I have tried: try{ File file=new File("/proj/resources/joymaa.txt"); if(file.exists()){ System.out.println("File exists"); }else{ System.out.println("File does not exists"); } }catch(Exception exception){ System.out.println(exception.getMessage()); } This is not displaying any error message, but it shows "File does not exists." A: try to find out what is your actual working directory of java... this might help: example I dont think that java could understand your relative path because it runs from different directory than javascript. Just my opinion, but give it a try.
{ "pile_set_name": "StackExchange" }